The Entry-Level Resume Is Broken. Evidence-Based Hiring Is the Fix.
The collapse of entry-level tech hiring isn’t only a talent problem — it’s a screening problem. Every recruiter knows the symptom: an open junior role posted Monday morning has 800 applicants by Friday, almost all of them holding similar degrees, similar GPAs, similar four-month internships. The traditional resume can’t distinguish them, and AI-augmented applicant tooling has multiplied the volume without improving the signal.
The same data that has new grads worried should also worry hiring teams. The pipeline is broken on both ends.
The Data Recruiters Should Be Looking At
Big Tech entry-level hiring fell 25% from 2023 to 2024, per SignalFire’s 2025 State of Talent Report. New graduates now account for just 7% of hires at the 15 largest tech firms, down from 15% pre-pandemic. Startups have moved in the same direction: from 30% of hires in 2019 to under 6% today.
Mid-level hiring grew in parallel. SignalFire found Big Tech increased hiring 27% for professionals with two to five years of experience in the same period. Startups hired 14% more in that range. Companies aren’t shrinking headcount across the board — they’re shifting it.
Data from the Federal Reserve Bank of New York show unemployment among recent college graduates up 30% since September 2022, versus 18% for all workers.
The headline reads “AI is eating entry-level jobs.” The truer reading is: companies have stopped paying the training cost that justifies entry-level hiring, because they can’t tell who’s worth training.
Why the Resume Has Stopped Working
A 22-year-old applicant in 2026 has, on paper:
- A CS degree from one of 4,000 accredited US institutions
- A 3.4–3.9 GPA (compressed range, hard to discriminate)
- One or two summer internships, often with self-reported scopes
- A handful of class projects, indistinguishable from the next applicant’s
- A LinkedIn profile shaped by AI-rewriting tools
- A resume tailored by ChatGPT to the job description
The recruiter has, on their end:
- 800 applications for one role
- An ATS that filters on keyword density
- 6 seconds per resume on initial review (the eye-tracking studies have been remarkably stable)
- No reliable way to verify any single claim
This is not a screening process. It is a lottery with a small skill-weighted component.
Worse: the rise of generative AI in candidate tooling means the noise floor has gone up. Every resume now reads professionally. Every cover letter is grammatically correct. Every LinkedIn summary is keyword-optimized. The traditional signal — “this candidate communicates clearly” — has collapsed because AI handles that for everyone.
What Evidence-Based Hiring Actually Means
The replacement isn’t more interviews. Most companies have already pushed interview load to the point of diminishing returns — 5+ rounds is now common, and offer-acceptance rates drop sharply past round three. Adding screening interviews increases cost and slows time-to-hire without meaningfully improving prediction.
The replacement is front-loaded, calibrated evidence.
In practice, this means a few things:
1. Standardized assessment, not vendor-locked scores. A HackerRank score, a CodeSignal score, and an iMocha score are not directly comparable. A candidate scoring 720 on one platform may score 580 on another for reasons that have nothing to do with skill. Evidence-based hiring requires a normalization layer — a Skills Passport that aggregates across vendors and surfaces a single, calibrated composite.
2. Multi-pillar evaluation, not just code tests. Junior developer success correlates with four pillars: technical core (language proficiency, problem decomposition), AI-augmented work (prompt-to-spec, AI output evaluation), cognitive (reasoning, communication), and behavioral (situational judgment, teamwork). A candidate strong in three pillars and weak in one is a different hiring decision than a candidate weak in three and strong in one — but a single LeetCode score collapses that distinction.
3. Recency-weighted scoring. A 2019 CodeSignal score isn’t worth what a 2024 score is — both because the candidate’s skills may have shifted and because the assessment instrument may have been recalibrated. Decay-aware composites reflect this.
4. Proof of work alongside test scores. A test result tells you what someone can do under timed conditions. A GitHub repository, a deployed project, or a public technical write-up tells you what someone does do without supervision. The combination is more predictive than either alone.
5. Candidate-owned credentials. The credential should travel with the candidate, not stay locked in the vendor account or buried in an old ATS. This both serves the candidate (one passport, many applications) and serves the recruiter (verifiable evidence rather than self-reported scores).
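The first three points above can be sketched as a small scoring pipeline. This is an illustrative toy, not the actual AIEH Skills Passport formula: the vendor distributions, pillar weights, and 18-month half-life are all assumptions chosen to show the shape of the computation.

```python
# Sketch of a calibrated, decay-aware composite score (300-850).
# Vendor stats, pillar weights, and the half-life are illustrative
# assumptions -- NOT the actual AIEH Skills Passport parameters.
import math
from datetime import date

# Per-vendor normalization: each platform's raw scores are mapped onto
# a common 0-1 percentile scale using that vendor's own distribution.
VENDOR_STATS = {            # (mean, stddev) -- illustrative values
    "hackerrank": (600.0, 120.0),
    "codesignal": (520.0, 100.0),
    "imocha":     (55.0, 15.0),
}

def normalize(vendor: str, raw: float) -> float:
    """Convert a raw vendor score to an approximate percentile via a z-score."""
    mean, sd = VENDOR_STATS[vendor]
    z = (raw - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def recency_weight(taken: date, today: date, half_life_days: float = 540.0) -> float:
    """Exponential decay: a score loses half its weight every ~18 months."""
    return 0.5 ** ((today - taken).days / half_life_days)

# Four-pillar weighting; a single test only ever feeds one pillar here.
PILLAR_WEIGHTS = {"technical": 0.35, "ai_augmented": 0.25,
                  "cognitive": 0.20, "behavioral": 0.20}

def composite(assessments, today: date) -> int:
    """Blend normalized, recency-weighted scores into one 300-850 number."""
    total, weight_sum = 0.0, 0.0
    for vendor, raw, pillar, taken in assessments:
        w = PILLAR_WEIGHTS[pillar] * recency_weight(taken, today)
        total += w * normalize(vendor, raw)
        weight_sum += w
    pct = total / weight_sum if weight_sum else 0.0
    return round(300 + pct * 550)   # map the 0-1 percentile onto 300-850

score = composite(
    [("hackerrank", 720, "technical",  date(2025, 11, 1)),
     ("codesignal", 580, "cognitive",  date(2024, 3, 15)),
     ("imocha",      70, "behavioral", date(2025, 6, 1))],
    today=date(2026, 1, 15),
)
```

Note how the normalization layer absorbs the 720-vs-580 discrepancy from point 1: both scores land on the same percentile scale before weighting, and the stale 2024 result contributes far less than the recent ones.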
This is, of course, exactly the AIEH Skills Passport thesis — and it’s why we built it.
The Junior-Hire Decision in 2026
For a hiring team trying to make a junior hire today, the practical implication is straightforward:
- A traditional resume packet (resume, cover letter, transcripts) is worth what it has always been worth: ~6 seconds of attention and a coin-flip-quality predictor.
- A Skills Passport packet (calibrated 300-850 composite, multi-pillar breakdown, proof-of-work links, recency badges) lets a recruiter spend the same 6 seconds and walk away with a much more reliable read.
The signal-to-noise difference compounds over a recruiting funnel. Even a modest improvement in initial screening accuracy cuts downstream interview load and false-positive offers at every subsequent stage, because each wasted interview slot traces back to a bad screen-in decision.
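To make the compounding concrete, here is a toy funnel model. The precision and pass-rate figures are illustrative assumptions, not measured benchmarks:

```python
# Toy funnel model: how screening precision drives interview load.
# All rates below are illustrative assumptions, not measured figures.

def interviews_per_offer(screen_precision: float,
                         interview_pass_rate: float = 0.25) -> float:
    """Expected onsite interviews per accepted offer, given the fraction
    of screened-in candidates who are genuinely qualified (precision)
    and the rate at which qualified candidates convert to offers."""
    return 1.0 / (screen_precision * interview_pass_rate)

keyword_screen  = interviews_per_offer(0.15)  # resume/keyword screening
evidence_screen = interviews_per_offer(0.35)  # calibrated evidence screening
```

Under these assumed numbers, keyword screening costs roughly 27 interviews per hire while evidence screening costs roughly 11: the same downstream funnel, at well under half the interview load.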
What This Looks Like for the Candidate Side
Candidates who recognize the shift are starting to build their own evidence packets, sometimes before recruiters ask for them. One candidate-side framework circulating among 2025-2026 grads, the Proof-of-Work Hiring Packet, has applicants submitting monthly proof packets: usage receipts for AI tooling, weekly accomplishment reports, GitHub deliverables, and a resume that serves as the cover sheet.
This is a healthy market response. When the supply side starts producing better evidence voluntarily, hiring teams that can intake and evaluate that evidence have a structural advantage over hiring teams stuck on resume-keyword screening.
The Short Version
The entry-level hiring market is broken in both directions. New grads can’t get hired despite credentials. Recruiters can’t distinguish quality candidates despite spending more on screening than ever before. Both sides are stuck inside an evaluation framework — the traditional resume — that AI has rendered obsolete.
The fix isn’t more interviews or more buzzword filters. It’s better evidence, captured at the front of the funnel, normalized across vendors, owned by the candidate, and surfaced at recruiter time.
That’s the bet behind AIEH’s three workspaces. For recruiters, it’s Hire — calibrated bundles, evidence-based candidate surfacing, no per-test fees. For candidates, it’s Learn — free sample tests, role guides, a Skills Passport that travels. For test providers, it’s Assess — an adapter SDK to reach an installed base.
If your hiring funnel is producing 800 applicants and 0 confident decisions, the problem isn’t volume. It’s the substrate. Time to switch substrates.
For candidates: If you’re on the supply side of this market — trying to break into tech in 2026 with the deck stacked against juniors — see the candidate-side framework: How to Get Hired in Tech When Entry-Level Hiring Is Down 50% →
Sources: SignalFire State of Talent Report 2025; IEEE Spectrum, “AI Shifts Expectations for Entry Level Jobs” (Feb 2026); Federal Reserve Bank of New York labor data; U.S. Bureau of Labor Statistics; SF Standard.