What AI-generated async pattern deadlocks under load, and why?

A recurring pattern in AI-generated Python async code is a function that calls asyncio.run() from inside an already running event loop, or that creates a fresh event loop per request and blocks on coroutines that depend on the outer loop’s resources. Both patterns deadlock or raise under realistic load, yet both look fine in a tutorial-style example with one task. This item probes whether the candidate can read AI-suggested async code, recognize the loop-misuse pattern, and replace it with a correct top-level entrypoint plus inner await calls.

What this question tests

The concept is event-loop ownership and the single-loop-per-thread invariant. asyncio.run() is a top-level entrypoint: it creates a new event loop, runs the coroutine to completion, and closes the loop. Calling it inside an already-running loop raises RuntimeError: asyncio.run() cannot be called from a running event loop. The trap in production code is that the error only surfaces when the helper is invoked from an async context: a FastAPI request handler that calls a sync helper that calls asyncio.run() internally fails at request time, while the same helper tested in isolation (with no loop running) works fine.
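
The failure mode can be reproduced in a few lines. A minimal sketch (coroutine names illustrative) showing that the nested call raises immediately, not intermittently:

```python
import asyncio

async def inner():
    return 42

async def outer():
    pending = inner()
    try:
        # An outer loop is already running here, so this raises at once.
        asyncio.run(pending)
    except RuntimeError as exc:
        pending.close()  # tidy up the unawaited coroutine
        return str(exc)

# The one legitimate top-level entrypoint: no loop is running yet.
message = asyncio.run(outer())
print(message)  # mentions the running event loop
```

Note that the error is deterministic: it does not depend on timing or load, only on whether a loop is already running in the current thread.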

AI tools reproduce this pattern because the training corpus contains many “minimal viable async example” snippets that wrap the example’s body in asyncio.run(main()) at the bottom of the file. When the model is asked to write a helper function inside an existing async codebase, it sometimes imports the example pattern wholesale rather than writing the helper as async def helper(...): ... await ... and letting the caller integrate it into the existing loop.

Why this is the right answer

The correct option identifies the nested asyncio.run() (or the equivalent fresh-loop creation) as the bug, and proposes making the helper an async def that callers await. Here’s the canonical AI-generated bug:

# AI-generated: helper creates its own event loop
import asyncio

async def fetch_one(client, url):
    response = await client.get(url)
    return response.json()

def fetch_all(client, urls):
    # AI suggests: "wrap async work in asyncio.run for sync callers"
    async def runner():
        return await asyncio.gather(*[fetch_one(client, u) for u in urls])
    return asyncio.run(runner())

# Caller is itself async:
async def handler(request):
    client = request.app["http_client"]
    urls = request.json()["urls"]
    return fetch_all(client, urls)  # RuntimeError: loop already running

When handler runs inside FastAPI/Starlette/aiohttp, an event loop is already running. asyncio.run() checks for this and raises RuntimeError. In tests that mock the request handler and call fetch_all from a synchronous test runner, the bug is invisible — there’s no outer loop to conflict with — so unit tests pass while production fails.
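
That test-time blind spot is easy to demonstrate. A sketch with a trivial coroutine standing in for real HTTP work: the same helper succeeds when no loop is running and raises the moment an outer loop exists:

```python
import asyncio

async def work():
    await asyncio.sleep(0)  # stand-in for real I/O
    return "done"

def helper():
    job = work()
    try:
        # Fine when called from sync code; raises inside a running loop.
        return asyncio.run(job)
    except RuntimeError:
        job.close()  # tidy up the unawaited coroutine before re-raising
        raise

async def async_caller():
    # The production path: an outer loop is already running.
    try:
        return helper()
    except RuntimeError:
        return "RuntimeError"

sync_result = helper()                      # the unit-test path: works
async_result = asyncio.run(async_caller())  # the production path: raises
```

A synchronous test runner only ever exercises the first path, which is exactly why the bug stays invisible until deployment.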

The fix is structural: make fetch_all async and let callers await:

async def fetch_all(client, urls):
    return await asyncio.gather(*[fetch_one(client, u) for u in urls])

async def handler(request):
    client = request.app["http_client"]
    urls = request.json()["urls"]
    return await fetch_all(client, urls)

If the helper genuinely needs a sync interface (e.g., for a sync codepath that isn’t migrating soon), the correct pattern is asyncio.run() at exactly one place near the top of the sync caller’s stack — not nested inside an async caller’s stack. The “one event loop per thread, owned by the top-level entrypoint” invariant is the heuristic to remember.
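
That sync-facade pattern can be sketched as follows (function names hypothetical): the async implementation is the single source of truth, and the sync wrapper owns the one asyncio.run() call at the top of a purely synchronous stack:

```python
import asyncio

async def fetch_all_async(urls):
    async def fetch_one(url):
        await asyncio.sleep(0)  # stand-in for real I/O
        return url.upper()
    # Async callers await this directly and stay on the outer loop.
    return await asyncio.gather(*(fetch_one(u) for u in urls))

def fetch_all_sync(urls):
    # The ONE asyncio.run() call, only ever reached from sync code.
    return asyncio.run(fetch_all_async(urls))

results = fetch_all_sync(["a", "b"])
```

Async callers use fetch_all_async and never touch the wrapper, so the two entry points can never collide on the same loop.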

What the wrong answers reveal

The three incorrect options each map to a common gap:

  • “The code is correct; asyncio.run() is the standard way to call async functions from anywhere.” Respondents picking this option are taking the AI snippet at face value. They’ve internalized “wrap in asyncio.run” as a rule rather than a top-level-only entrypoint pattern. This is the highest-risk gap because the bug ships and only surfaces in production async contexts.
  • “Replace asyncio.run with loop.run_until_complete to fix the bug.” This is the “wrong patch” gap. run_until_complete on an already-running loop also raises. The respondent recognizes a problem but reaches for a reflex from the pre-3.7 asyncio API rather than the structural fix.
  • “The bug is a missing await before asyncio.gather(...); replacing return asyncio.run(runner()) with return await runner() fixes it without other changes.” Closer, but the function is still def, not async def, so await is a syntax error. Respondents picking this option are partway to the right model but haven’t connected “this needs to be awaited” with “the enclosing function must therefore be async.”

The first wrong-answer pattern is the costliest in production: the candidate who would ship the AI suggestion as-is.
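
The second wrong answer is worth disproving directly. A minimal sketch confirming that loop.run_until_complete() on an already-running loop raises just as asyncio.run() does:

```python
import asyncio

async def coro():
    return 1

async def attempt():
    loop = asyncio.get_running_loop()
    pending = coro()
    try:
        # The loop is already running, so this raises RuntimeError too.
        loop.run_until_complete(pending)
    except RuntimeError as exc:
        pending.close()  # avoid a "never awaited" warning
        return str(exc)

msg = asyncio.run(attempt())
```

Swapping one blocked entrypoint for another leaves the structural problem in place; only making the helper awaitable removes it.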

How the sample test scores you

In the AIEH 5-question AI-Augmented Python sample, this item contributes one of five datapoints aggregated into a single ai_python_proficiency score via the W3.2 normalize-by-count threshold. Binary scoring per item: 5 for the correct option, 1 for any of the three wrong options. With 5 binary items, the average ranges 1–5 and the level threshold maps avg ≤ 2 to low, ≤ 4 to mid, > 4 to high.
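
Under the stated thresholds, the mapping can be sketched in a few lines (function name hypothetical; the real W3.2 aggregation may differ in details):

```python
def level_from_items(item_scores):
    # Each of the 5 items scores 5 (correct) or 1 (incorrect);
    # the average maps to a level via the stated thresholds.
    avg = sum(item_scores) / len(item_scores)
    if avg <= 2:
        return "low"
    if avg <= 4:
        return "mid"
    return "high"

# 3 of 5 correct: avg 3.4 -> "mid"; 4 of 5 correct: avg 4.2 -> "high"
```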

Data Notice: Sample-test results are directional. A 5-question sample can flag general async-code judgment but can’t distinguish a candidate who knows asyncio deeply from one who recognizes this specific pattern; for a verified Skills Passport credential, take the full AI-Augmented Python assessment.

The full assessment probes async cancellation, task groups, context propagation, and the specific failure modes of leading AI coding assistants on async code. See the scoring methodology for how scores map to the AIEH 300–850 Skills Passport scale.

Related concepts worth knowing


  • asyncio.TaskGroup (Python 3.11+). The modern idiom for structured concurrency replaces ad-hoc gather patterns. AI tools sometimes still suggest pre-3.11 idioms; recognizing when a TaskGroup would be cleaner is part of the judgment.
  • nest_asyncio and why it’s a code smell. The nest_asyncio library patches the event loop to allow nested run calls. AI tools occasionally suggest it as a fix; in production code it’s almost always a sign that the structure is wrong, not that the patch is needed.
  • Event-loop policy in Jupyter and IPython. Jupyter runs its own event loop, which is why await foo() works at the top level of a notebook cell but asyncio.run(foo()) raises. AI tools often miss this when generating code for notebook contexts.

For deeper context on async patterns and AI-augmented coding judgment more broadly, see ai fluency in hiring and the backend engineering prep or ml engineering prep catalogs. Employers can use /hire/ to find candidates verified on AI-Augmented Python; learners start at /learn/.


Try the question yourself

This explainer covers what the item measures. To see how you score on the full ai augmented python family, take the free 5-question sample.

Take the ai augmented python sample