
Skills-Based Hiring: What the Evidence Actually Says (and Doesn't)

By Editorial Team

“Skills-based hiring” is the dominant talent-acquisition narrative of the early 2020s — the framing that organizations should evaluate candidates on demonstrated skills rather than on proxies like degrees, brand-name employer history, or industry-specific years of experience. The narrative is older than the contemporary movement (Cappelli’s 2012 work on the “skills gap” pushed this direction more than a decade ago), but the post-2020 acceleration came from a combination of pandemic-era talent shortages, the documented “paper ceiling” effect on workers without four-year degrees, and the AI-tooling shift that compressed which skills matter for which jobs.

This article walks through what skills-based hiring actually claims, where the empirical evidence supports those claims, where the movement gets oversold relative to the evidence, and how AIEH’s calibrated portable-credential approach fits the skills-based frame without the failure modes that have surfaced as some employers have moved fast on degree-removal alone.

Data Notice: Skills-based-hiring research is contemporary and still-emerging. Evidence cited here is qualitative or from research orgs publishing labor-market analysis (Burning Glass Institute / Lightcast, Harvard Business School’s Project on Workforce, Opportunity@Work) — verify against current publications before making policy decisions.

What skills-based hiring actually claims

The contemporary skills-based-hiring movement claims, broadly:

  • Degree requirements are weak proxies for ability. A four-year degree credential signals a bundle of correlated things (cognitive ability, conscientiousness, socioeconomic background, persistence) but doesn’t directly measure job-relevant skills. Employers routinely require degrees for roles where the actual job content doesn’t need degree-level general education.
  • “Years of experience” requirements over-fit on industry brand history. A candidate with 5 years at a non-prestigious employer doing the actual work may be more capable than a candidate with 3 years at a brand-name employer doing adjacent-but-different work. Years-of-experience filters systematically miss the former.
  • Skills can be measured directly. Pre-employment assessments, work samples, and portfolio review can produce evidence of skill that’s more predictive than credential proxies. The selection-research literature broadly supports this — work samples have validity coefficients comparable to or exceeding cognitive ability tests for job-specific performance prediction (Schmidt & Hunter, 1998).
  • Removing degree requirements expands the talent pool meaningfully. The “STARs” (Skilled Through Alternative Routes) population — workers without four-year degrees but with skill profiles equivalent to degree holders — is large. Opportunity@Work and Burning Glass Institute research has documented this population at tens of millions of US workers.

The movement has produced concrete policy changes — major employers (IBM, Walmart, Google, Bank of America among many others) have publicly removed degree requirements from large fractions of their job postings since 2020 — and the published research base supporting the underlying skills-vs-credentials claim is meaningful.

What the evidence actually supports

Three claims are well-supported by the contemporary research:

  • Degree requirements often inflate beyond what jobs need. Burning Glass Institute and Harvard Business School’s Project on Workforce documented this with detailed analysis of millions of job postings: the share of postings requiring a bachelor’s degree expanded substantially from 2000 to 2017 even for roles where the underlying job content didn’t change. The inflation reflects employer convenience (degree filters reduce applicant pool size) more than role requirements.
  • Direct skill measurement outperforms degree as a predictor. Schmidt & Hunter’s (1998) classic meta-analysis placed work samples (corrected validity ~0.54) and general mental ability (~0.51) well above years of education (~0.10) as predictors of job performance. Subsequent meta-analyses have refined the numbers but not the ordering. Cappelli’s labor-economics work has framed this with employer-specific data showing similar patterns.
  • Removing degree requirements expands the qualified-applicant pool. Where employers have publicly tracked outcomes after removing degree requirements, applicant pools have grown substantially without performance degradation in the ultimately-hired cohort. The research base here is still thinner than the academic-meta-analysis layer above — longitudinal performance data for non-degree hires is just starting to accumulate at scale — but the early evidence consistently supports the directional claim.

What the movement gets oversold

Three claims are weaker or misleading versus the evidence:

  • “Skills-based hiring eliminates bias.” Direct skill assessment shifts where bias enters but doesn’t eliminate it. Cognitive ability tests show large demographic group differences (Roth et al., 2001) — see cognitive ability in hiring for the validity-vs-adverse-impact tradeoff in detail. Personality measures show smaller but non-zero group differences (Hough & Oswald, 2008). Work samples can show their own patterns of disparate impact depending on test design and cultural context. Skills-based hiring is less biased than some pure-credential filters, not unbiased.
  • “Degree-removal alone expands access meaningfully.” Removing the degree requirement from a job posting without changing the rest of the hiring loop (resume screen, interview process, reference checks) often produces minimal real change in who gets hired — the other filters in the loop still encode credential-correlated proxies. The research orgs working in this space have documented this as the “removed-degree-but-still-filtering-on-pedigree” pattern. Real impact requires redesigning the full evaluation pipeline, not just the posting.
  • “Skills are easy to measure.” Many roles have skill components that are genuinely hard to assess in pre-hire contexts — judgment under ambiguity, cross-functional collaboration, taste and aesthetic sensibility. Pre-hire assessments are good at measuring discrete-task skills (coding, language proficiency, structured reasoning) and worse at measuring contextual judgment. The skills-based-hiring movement sometimes elides this distinction and oversells what assessments can capture.

The credential-portability gap

A subtler problem the skills-based movement has surfaced but not solved: when employers DO assess skills directly via vendor platforms (HackerRank, CodeSignal, TestGorilla, Pymetrics, and many others), the resulting credentials are platform-specific. A candidate’s HackerRank Skill Score lives in HackerRank; a TestGorilla score lives in TestGorilla. Candidates can’t take their skill evidence with them to a different recruiter without re-testing.

This is a real architectural limitation that contemporary skills-based-hiring research has flagged but most platform vendors haven’t addressed (see the HackerRank vs CodeSignal comparison or TestGorilla alternatives for treatments of the per-platform-account problem). The practical effect: candidates who interview at five employers running five different platforms typically take the underlying assessments multiple times, generating re-test fatigue that strong candidates increasingly opt out of.

How AIEH fits the skills-based frame

AIEH’s product approach is skills-based by construction — every score on a candidate’s Skills Passport reflects measured performance on a calibrated assessment, mapped onto a common 300–850 scale across providers. The architectural difference from single-vendor platforms is the credential portability: a candidate’s Passport lives at their URL on aieh.com, not inside a vendor account. Employers see verified evidence; candidates own the credential and can present it across employers without re-testing.

The four-pillar weighting in the AIEH composite (see scoring methodology) reflects the contemporary skills-research consensus on what predicts job performance:

  • Cognitive ability gets a meaningful weight (0.25) but doesn’t dominate.
  • Domain skill gets the largest weight (0.35), reflecting its role-conditional importance.
  • AI fluency gets a separate weight (0.25), reflecting the post-2024 shift in workplace tooling.
  • Communication gets a smaller weight (0.15), reflecting its role as necessary-but-not-sufficient across most knowledge work.
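To make the weighting concrete, here is a minimal sketch of how a weighted composite on a common scale could be computed. Only the four weights and the 300–850 scale come from the text above; the aggregation formula (a plain weighted average of pillar scores already on the 300–850 scale) and all function and variable names are illustrative assumptions, not AIEH’s published method.

```python
# Illustrative sketch of a weighted composite on a shared 300-850 scale.
# The weights mirror the four-pillar split described above; the aggregation
# itself is an assumption for illustration, not AIEH's actual formula.

PILLAR_WEIGHTS = {
    "cognitive": 0.25,      # meaningful but not dominant
    "domain": 0.35,         # largest weight: role-conditional importance
    "ai_fluency": 0.25,     # post-2024 tooling shift
    "communication": 0.15,  # necessary but not sufficient
}

def composite_score(pillar_scores: dict[str, float]) -> float:
    """Weighted average of pillar scores assumed to be on the 300-850 scale.

    Because the weights sum to 1.0, the composite stays within 300-850.
    """
    assert abs(sum(PILLAR_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(PILLAR_WEIGHTS[p] * pillar_scores[p] for p in PILLAR_WEIGHTS)

# Hypothetical candidate scores, one per pillar:
scores = {"cognitive": 720, "domain": 640, "ai_fluency": 680, "communication": 600}
print(composite_score(scores))  # 664.0
```

Because each weight multiplies a score on the same scale and the weights sum to 1.0, the composite is guaranteed to land back on the 300–850 range without any rescaling step.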

The Big Five personality family is available alongside the four pillars but contributes a smaller default weight in role bundles — personality assessment carries real signal but has weaker validity for job performance than early validity estimates suggested (see Big Five in hiring for the detailed evidence).

For specific role bundles applying these weights, see the AI Product Manager role page, the ML Engineer role page, or the full tests catalog for current and forthcoming assessment families.

Takeaway

Skills-based hiring is directionally correct: skills predict job performance better than credential proxies, and direct skill measurement is more defensible than years-of-degree filters. The research evidence supports the broad claim, even where individual claims (bias elimination, easy measurement, degree-removal alone) outrun the data.

The credential-portability gap remains. Skills-based hiring implemented as “let’s just remove degree requirements” or “let’s add a vendor assessment to the resume screen” leaves most of the underlying credential infrastructure unchanged — and re-creates new gates (vendor account access, platform-specific scoring) where old ones (degree requirements, brand-name employer history) used to sit. The next architectural step — calibrated, portable, candidate-owned credentials across providers — is where AIEH and similar skills-passport approaches differ from single-vendor skills-assessment platforms.

For employers committed to the skills-based direction and considering how to implement it well, the high-leverage moves are: multi-method assessment combining cognitive plus structured interview plus work sample, calibrated portable credentials where available, and explicit attention to adverse-impact mitigation that the broader selection-research literature documents (Sackett & Lievens, 2008).

The harder organizational move is internal: removing degree requirements from job postings is a single-day policy change; redesigning the rest of the hiring loop (recruiter intake guidelines, screening rubrics, structured-interview question banks, reference-check protocols) to actually evaluate skills rather than re-encode credential proxies is months of work and requires sustained executive air cover. Most public skills-based-hiring announcements have not been followed by the second-stage work, which is part of why the published outcome data so far shows expanded applicant pools but not yet large changes in who gets hired. The credential-portability and rest-of-the-loop gaps together explain most of the gap between skills-based hiring’s stated goals and its currently-realized outcomes — and both gaps are closeable with deliberate engineering on top of the policy foundation that the contemporary movement has established.


Sources

  • Burning Glass Institute and Harvard Business School Project on Workforce. (2024). Research on degree-requirement inflation and skills-based hiring outcomes. https://www.burningglassinstitute.org and https://www.hbs.edu/managing-the-future-of-work
  • Cappelli, P. (2012). Why Good People Can’t Get Jobs: The Skills Gap and What Companies Can Do About It. Wharton Digital Press.
  • Hough, L. M., & Oswald, F. L. (2008). Personality testing and industrial-organizational psychology: Reflections, progress, and prospects. Industrial and Organizational Psychology, 1(3), 272–290.
  • Opportunity@Work. (2024). Research on STARs (Skilled Through Alternative Routes) workforce population. https://opportunityatwork.org
  • Roth, P. L., Bevier, C. A., Bobko, P., Switzer, F. S., & Tyler, P. (2001). Ethnic group differences in cognitive ability in employment and educational settings: A meta-analysis. Personnel Psychology, 54(2), 297–330.
  • Sackett, P. R., & Lievens, F. (2008). Personnel selection. Annual Review of Psychology, 59, 419–450.
  • Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262–274.

About This Article

Researched and written by the AIEH editorial team using official sources. This article is for informational purposes only and does not constitute professional advice.
