AI-Augmented JavaScript — Free Sample

A five-scenario sample from the AI-Augmented JavaScript family — recognizing pitfalls in AI-generated JS, knowing when to author code directly versus leaning on AI assistance, validating output against runtime and type-system edge cases, and combining JavaScript fluency with model-collaboration judgment. Items are originally authored by AIEH editorial; they are not drawn from a copyrighted bank. For a verified Skills Passport credential, take the full AI-Augmented JavaScript assessment.

1. An AI assistant generates a Promise-based function: `async function fetchAll(urls) { return Promise.all(urls.map(u => fetch(u).then(r => r.json()))); }`. You'll be calling it with ~500 URLs against a third-party API. What's the highest-leverage critique?
2. You're refactoring a complex React component with deeply nested state. AI suggests `const [state, setState] = useState({...largeNestedObject})` and then `setState(prev => ({...prev, nested: {...prev.nested, foo: bar}}))` for updates. What's the right judgment call?
3. AI generates: `const result = items.map(async i => await processItem(i)).filter(r => r.success);`. The filter never matches. What's the most likely cause?
4. You ask AI to write a deep-equality check. It generates: `function isEqual(a, b) { return JSON.stringify(a) === JSON.stringify(b); }`. The function passes your test cases. What's worth knowing before shipping?
5. AI generates a TypeScript generic: `function pick<T, K extends keyof T>(obj: T, keys: K[]): Pick<T, K> { return keys.reduce((acc, k) => ({ ...acc, [k]: obj[k] }), {} as Pick<T, K>); }`. The runtime behavior is correct. What's the highest-leverage type-system review?
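For scenario 1, the critique usually centers on two things: `Promise.all` rejects wholesale on the first failure, discarding every in-flight result, and mapping 500 `fetch` calls fires them all at once against a third-party API. A minimal sketch of a bounded worker pool with per-URL error capture — names like `fetchAllLimited` and the injected `fetchJson` are illustrative, not part of the assessment:

```typescript
type FetchResult = { url: string; ok: boolean; value?: unknown; error?: unknown };

// Bounded-concurrency replacement for Promise.all(urls.map(fetch...)).
// `fetchJson` is injected so the pool logic stays testable; in production it might be
// (u) => fetch(u).then((r) => { if (!r.ok) throw new Error(`HTTP ${r.status}`); return r.json(); })
async function fetchAllLimited(
  urls: string[],
  limit: number,
  fetchJson: (url: string) => Promise<unknown>,
): Promise<FetchResult[]> {
  const results: FetchResult[] = new Array(urls.length);
  let next = 0; // shared cursor; safe because all workers run on one JS thread

  async function worker(): Promise<void> {
    while (next < urls.length) {
      const i = next++; // claim the next URL
      try {
        results[i] = { url: urls[i], ok: true, value: await fetchJson(urls[i]) };
      } catch (error) {
        // Capture per-URL failures instead of letting one rejection discard all 500 results.
        results[i] = { url: urls[i], ok: false, error };
      }
    }
  }

  const workers = Array.from({ length: Math.min(limit, urls.length) }, () => worker());
  await Promise.all(workers);
  return results;
}
```

The result array preserves input order and reports each URL's outcome individually, so the caller can retry only the failures.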
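Scenario 2 hinges on recognizing when nested-spread updates should move out of the component entirely — into a `useReducer` reducer (or a library like Immer) so each update path is written once instead of at every call site. A sketch of the reducer half, kept React-free so it stays unit-testable; the `State` and `Action` shapes are invented for illustration:

```typescript
// Hypothetical state shape standing in for the component's largeNestedObject.
type State = { user: { name: string; prefs: { theme: string } }; count: number };
type Action =
  | { type: "setTheme"; theme: string }
  | { type: "increment" };

// Pure reducer: the nested-spread bookkeeping lives here, once, and is testable
// without rendering anything.
function reducer(state: State, action: Action): State {
  switch (action.type) {
    case "setTheme":
      return {
        ...state,
        user: { ...state.user, prefs: { ...state.user.prefs, theme: action.theme } },
      };
    case "increment":
      return { ...state, count: state.count + 1 };
  }
}

// In the component: const [state, dispatch] = useReducer(reducer, initialState);
```

If the nesting is deep enough that even the reducer reads poorly, that's usually a signal to flatten or normalize the state shape rather than to generate more spread boilerplate.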
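Scenario 3's bug: `map` with an async callback yields an array of Promises, and `promise.success` is always `undefined`, so the filter removes every element. The usual fix settles the promises with `Promise.all` before filtering; `processItem` and the `Outcome` shape here are stand-ins for the original helper:

```typescript
type Outcome = { success: boolean; value?: number };

// items.map(async ...) produces Promise<Outcome>[]; filtering that directly tests
// `promise.success`, which is undefined on every element. Settle first, then filter.
async function processAll(
  items: number[],
  processItem: (item: number) => Promise<Outcome>,
): Promise<Outcome[]> {
  const settled = await Promise.all(items.map((i) => processItem(i)));
  return settled.filter((r) => r.success); // r is now an Outcome, not a Promise
}
```

Note this variant keeps `Promise.all`'s fail-fast behavior; if individual failures should be tolerated, `Promise.allSettled` is the next question to ask.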
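Scenario 4's `JSON.stringify` comparison passes naive tests but diverges from deep equality in well-defined ways. The cases below are all standard `JSON.stringify` behavior — a sketch of what a pre-ship review should probe:

```typescript
const isEqual = (a: unknown, b: unknown): boolean =>
  JSON.stringify(a) === JSON.stringify(b);

// Key order changes the serialized string but not logical equality:
const orderSensitive = isEqual({ a: 1, b: 2 }, { b: 2, a: 1 }); // false

// undefined-valued properties are dropped, hiding a real difference:
const dropsUndefined = isEqual({ a: undefined }, {}); // true

// NaN serializes to null, so NaN "equals" null:
const nanVsNull = isEqual(NaN, null); // true

// Circular structures make JSON.stringify throw rather than return false:
const cyc: { self?: unknown } = {};
cyc.self = cyc;
let threw = false;
try {
  isEqual(cyc, cyc);
} catch {
  threw = true; // TypeError: Converting circular structure to JSON
}
```

Functions, `Symbol` keys, and `Date`-vs-ISO-string comparisons add further edge cases; whether any of these matter depends on the shapes actually flowing through the call site, which is exactly the judgment the scenario is testing.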
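For scenario 5, the review usually lands on the `{} as Pick<T, K>` assertion: it claims the empty accumulator is already a complete `Pick<T, K>` on every reduce iteration, so the checker never verifies the body. One common restructuring — same runtime behavior; the `Partial` accumulator and `readonly` parameter are one reasonable option, not the only sound one:

```typescript
// Accumulator typed honestly as Partial; the single unavoidable assertion is
// deferred to the return, where it is explicit and easy to review.
function pick<T extends object, K extends keyof T>(
  obj: T,
  keys: readonly K[], // readonly also accepts `as const` tuples from callers
): Pick<T, K> {
  const out: Partial<Pick<T, K>> = {};
  for (const k of keys) {
    out[k] = obj[k];
  }
  return out as Pick<T, K>; // one deliberate assertion, not one per reduce step
}
```

The loop also avoids the O(n²) object-spread-in-reduce pattern from the generated version. A deeper review question the types still don't answer: `keys` may contain duplicates or, at runtime, keys absent from `obj` — the signature promises more than any implementation can check.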