Selection Methods

Integrity Tests in Hiring: Validity Evidence and EEOC Considerations

By the AIEH editorial team

Integrity tests are paper-and-pencil or online instruments that attempt to predict counterproductive workplace behavior (theft, absenteeism, on-the-job substance use, rule violations) and, through the conscientiousness construct, overall job performance. The validity evidence is substantial but contested: Ones, Viswesvaran, and Schmidt's 1993 meta-analytic synthesis reported corrected operational validities of approximately 0.39 against counterproductive-behavior criteria and 0.34 against overall job performance, and Schmidt and Hunter (1998) reported approximately 0.41 corrected for integrity tests against general job performance.

This article walks through the two major integrity-test formats, the validity evidence and its critical responses, the EEOC and ADA considerations that shape defensible use, the common implementation pitfalls, and how AIEH treats integrity-test evidence within the personality-pillar logic of the Skills Passport composite.

Data Notice: Validity coefficients cited reflect peer-reviewed meta-analytic evidence at time of writing. Specific weights AIEH applies to integrity-related evidence are documented in the scoring methodology and may evolve as calibration data accrues. Legal compliance with EEOC and ADA standards is jurisdiction-specific; consult counsel before deploying any integrity instrument in a regulated hiring context.

Two formats: overt and personality-based

Integrity tests come in two distinct formats, and the validity evidence and legal exposure differ materially between them.

Overt integrity tests ask candidates direct questions about attitudes toward dishonesty, prior counterproductive behavior, and beliefs about workplace rule-following. Items might include “Most employees steal from their employer at some point” (agreement is interpreted as projecting one’s own attitudes) or “I have never taken anything from a job that didn’t belong to me” (denial is interpreted skeptically under base-rate assumptions). The transparent question stem is the defining feature.

Personality-based integrity tests measure broader personality dimensions — conscientiousness, agreeableness, emotional stability — that correlate with counterproductive behavior without asking about the behavior directly. Items resemble standard Big Five personality items: “I am always prepared,” “I follow a schedule.” The construct overlap with conscientiousness is substantial, and several personality-based integrity instruments are functionally rebrandings of conscientiousness scales with workplace-behavior norming.
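Scoring for personality-based instruments typically follows standard Likert-scale conventions: agreement items are summed or averaged, and negatively worded items are reverse-keyed first. A minimal sketch, with invented item wording, keying, and scale range:

```python
# Sketch: scoring a short personality-based integrity scale with a
# reverse-keyed item. Item content and keying here are hypothetical.
REVERSE_KEYED = {2}  # index of the item where disagreement signals integrity
SCALE_MAX = 5        # responses on a 1-5 Likert scale

def score_scale(responses: list[int]) -> float:
    """Return the mean item score after reverse-keying flagged items."""
    keyed = [
        (SCALE_MAX + 1 - r) if i in REVERSE_KEYED else r
        for i, r in enumerate(responses)
    ]
    return sum(keyed) / len(keyed)

# "I am always prepared" = 5, "I follow a schedule" = 4, and a
# reverse-keyed item ("I leave tasks unfinished") = 1, which keys to 5.
print(round(score_scale([5, 4, 1]), 2))  # → 4.67
```

Published instruments then norm this raw score against a reference sample; the raw mean alone is not interpretable without norming data.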

For the broader treatment of how personality dimensions predict job performance, see big five in hiring and personality vs cognitive in hiring.

The validity evidence

Ones, Viswesvaran, and Schmidt (1993) conducted the foundational meta-analytic synthesis of integrity-test validity, drawing on 665 validity coefficients. The headline findings:

  • Corrected operational validity against counterproductive workplace behavior of approximately 0.39, with overt and personality-based formats producing comparable estimates.
  • Corrected operational validity against overall job performance of approximately 0.34.
  • Validity generalization across organizational settings and applicant types — the coefficients held up reasonably well across industry, role level, and selection-stage placement.

Schmidt and Hunter's 1998 synthesis reported corrected validity of approximately 0.41 for integrity tests against general job performance, placing the method above structured interviews, work history, and education credentials in their ranking.

The 1993 meta-analysis remains influential but has attracted critical responses. Van Iddekinge, Roth, Raymark, and Odle-Dusseau (2012) re-examined the underlying coefficient pool and reported lower estimates after restricting the sample to studies meeting tighter methodological standards: predictive (not concurrent) designs, non-self-report criteria, and properly handled range restriction. Their lower estimates clustered closer to ~0.20 to ~0.25 corrected operational validity; Sackett and Schmitt (2012) contributed a commentary attempting to reconcile the conflicting syntheses. The methodological dispute is unresolved in the literature; defensible practice is to treat integrity tests as moderately predictive rather than top-tier predictive, and to combine integrity evidence with other validated predictors rather than relying on it as a single signal.
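Much of the dispute turns on how observed correlations are "corrected" before being reported. The two standard psychometric adjustments are disattenuation for criterion unreliability and the Thorndike Case II correction for direct range restriction. A sketch with invented inputs (the reliability and SD-ratio values below are illustrative, not from any cited study):

```python
import math

def correct_for_criterion_unreliability(r_obs: float, r_yy: float) -> float:
    """Disattenuate an observed validity for unreliability in the criterion."""
    return r_obs / math.sqrt(r_yy)

def correct_for_range_restriction(r: float, u: float) -> float:
    """Thorndike Case II correction for direct range restriction.
    u = unrestricted SD / restricted SD of the predictor."""
    return (r * u) / math.sqrt(1 + r * r * (u * u - 1))

# Illustrative inputs: observed r = .21, criterion reliability = .52,
# predictor SD ratio = 1.3.
r = correct_for_criterion_unreliability(0.21, 0.52)
r = correct_for_range_restriction(r, 1.3)
print(round(r, 2))  # → 0.37
```

As the example shows, modest observed correlations can nearly double after correction, which is why disagreements about the appropriate reliability and range-restriction inputs translate directly into disagreements about the headline coefficients.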

EEOC and ADA considerations

Integrity tests sit in a particular regulatory zone in the United States. The major considerations:

  • Adverse impact monitoring. As with any selection instrument, integrity tests are subject to adverse impact analysis under Title VII. Differences in pass-rate by protected class trigger the four-fifths rule and require validity defense. Integrity tests generally show smaller race-based mean differences than cognitive-ability tests, but the analysis still applies.
  • Polygraph distinction. The Employee Polygraph Protection Act (EPPA) restricts polygraph use in most private-sector hiring. Paper-and-pencil and online integrity tests are not polygraphs and are not covered by EPPA, but adjacent state laws (notably Massachusetts) impose specific restrictions.
  • ADA and medical-inquiry boundaries. The Americans with Disabilities Act prohibits pre-offer medical inquiries. Integrity tests must not include items that probe mental-health diagnoses or symptoms in a way that constitutes a medical inquiry. Karraker v. Rent-A-Center (7th Cir. 2005) classified the MMPI as a medical examination because the underlying instrument was designed for clinical diagnosis. Properly designed integrity tests stay clear of clinical-diagnostic framing.
  • Use-of-criminal-history overlap. Some integrity test items probe arrest or conviction history. EEOC guidance restricts blanket criminal-history use in selection; integrity-test items overlapping with criminal-history inquiries inherit those restrictions.
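The four-fifths rule mentioned above is a screening heuristic, not a legal determination: a group's selection rate below 80% of the highest group's rate flags the instrument for closer review. A minimal sketch (group labels are hypothetical):

```python
def four_fifths_flag(pass_rates: dict[str, float]) -> bool:
    """Return True if any group's selection rate falls below 4/5 of the
    highest group's rate (the EEOC four-fifths screening heuristic)."""
    top = max(pass_rates.values())
    return any(rate / top < 0.8 for rate in pass_rates.values())

# Hypothetical pass rates by group; counts would be converted to rates first.
print(four_fifths_flag({"group_a": 0.60, "group_b": 0.45}))  # → True (0.45/0.60 = 0.75)
```

A flag does not itself establish a violation; it triggers the validity-defense analysis, and statistical-significance tests are typically run alongside the ratio check.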

For the broader treatment of legal-defensibility considerations across selection methods, see hiring bias mitigation and pre-employment screening evidence.

Practical workflow

A defensible integrity-testing workflow has five elements:

  1. Justification. Document why an integrity test is appropriate for the role. Cash-handling, security work, and access-to-confidential-data roles have stronger justification than generic knowledge-worker roles.
  2. Instrument selection. Choose a published instrument with documented validity evidence, adverse-impact data, and an ADA-compliant item pool. Avoid bespoke integrity instruments without validity documentation.
  3. Stage placement. Integrity tests are typically administered in the screening or pre-offer stages. Avoid placing them so late that rejected candidates have invested substantial time, and avoid placing them so early that they screen out candidates before a recruiter has reviewed basic fit.
  4. Scoring and cut-score derivation. Derive cut-scores from the validity-research base or from internal validation, not from arbitrary percentile points.
  5. Adverse-impact monitoring. Track pass-rate by protected class on a rolling basis. If four-fifths-rule thresholds are crossed, re-examine the cut-score and the validity evidence supporting current use.
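Step 5's rolling monitoring can be sketched as a small tracker that keeps a sliding window of pass/fail outcomes per group and applies the four-fifths check on demand. The window size, group labels, and threshold handling below are illustrative choices, not a prescribed configuration:

```python
from collections import deque

class AdverseImpactMonitor:
    """Rolling-window pass-rate tracker per group (sketch)."""

    def __init__(self, window: int = 200):
        self.window = window
        self.results: dict[str, deque] = {}

    def record(self, group: str, passed: bool) -> None:
        """Append one candidate outcome; old outcomes roll off the window."""
        self.results.setdefault(group, deque(maxlen=self.window)).append(passed)

    def pass_rates(self) -> dict[str, float]:
        return {g: sum(d) / len(d) for g, d in self.results.items() if d}

    def four_fifths_violation(self) -> bool:
        """True if any group's rolling rate is below 4/5 of the top rate."""
        rates = self.pass_rates()
        if len(rates) < 2:
            return False
        top = max(rates.values())
        return any(r / top < 0.8 for r in rates.values())
```

In practice a flag from a monitor like this would prompt re-examination of the cut-score, as step 5 describes, rather than any automated action.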

Common pitfalls

  • Single-signal reliance. Integrity-test coefficients are moderate, not strong. A rejection decision based on a single integrity-test score is difficult to defend under disparate-impact scrutiny if other validated predictors weren't considered.
  • Construct narrowness. Personality-based integrity tests overlap heavily with conscientiousness; using both an integrity test and a Big Five battery double-counts a similar signal.
  • Faking and response distortion. Overt integrity tests are highly transparent about what they’re measuring. Empirical evidence shows candidates can and do fake somewhat. The validity coefficients incorporate this faking; the practical implication is that integrity tests are most defensible as one composite input rather than a single decision rule.
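The double-counting pitfall has a simple quantitative expression: the multiple correlation of a criterion on two predictors depends on their intercorrelation, so a second predictor that overlaps heavily with the first adds little. A sketch with invented validities and intercorrelations:

```python
import math

def multiple_r(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation of a criterion on two predictors with
    validities r1 and r2 and predictor intercorrelation r12."""
    r_sq = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_sq)

# Illustrative inputs: integrity r = .30, conscientiousness r = .22.
print(round(multiple_r(0.30, 0.22, 0.60), 3))  # → 0.304 (high overlap: modest gain)
print(round(multiple_r(0.30, 0.22, 0.20), 3))  # → 0.342 (low overlap: larger gain)
```

With a .60 predictor intercorrelation the second test adds almost nothing over the .30 single-predictor validity, which is the quantitative form of the construct-narrowness warning above.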

AIEH integration

The Skills Passport composite treats integrity-test evidence as one personality-pillar input alongside Big Five conscientiousness, emotional stability, and work-style scenarios. AIEH does not include integrity-test evidence in the default modal-role bundle because the validity coefficients are moderate and the legal-compliance burden is jurisdiction-specific. Roles with strong justification — cash handling, security, regulated access — can include integrity-test evidence as an optional input weighted appropriately within the personality pillar (see scoring methodology).
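One way to implement an optional pillar input is to renormalize the default weights when integrity evidence is present. The pillar inputs, weight values, and renormalization rule below are hypothetical illustrations; AIEH's actual weights are documented in its scoring methodology:

```python
# Sketch: folding optional integrity-test evidence into a personality-pillar
# score. All weights here are invented for illustration.
DEFAULT_WEIGHTS = {
    "conscientiousness": 0.5,
    "emotional_stability": 0.3,
    "work_style": 0.2,
}
INTEGRITY_WEIGHT = 0.2  # applied only when integrity evidence is present

def pillar_score(scores: dict[str, float]) -> float:
    """Weighted personality-pillar score; integrity evidence is optional."""
    weights = dict(DEFAULT_WEIGHTS)
    if "integrity" in scores:
        # Scale the default weights down so all weights still sum to 1.
        weights = {k: w * (1 - INTEGRITY_WEIGHT) for k, w in weights.items()}
        weights["integrity"] = INTEGRITY_WEIGHT
    return sum(weights[k] * scores[k] for k in weights)
```

Under this scheme adding integrity evidence never dominates the pillar: it shifts at most 20% of the weight, which matches the article's framing of integrity as one moderate input among several.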

The candidate-owned framing means a candidate who has completed an integrity test through an AIEH partner sees the result on their Skills Passport and controls who else sees it. A candidate can elect not to publish integrity-test evidence on the public Passport; the credential framework supports selective-disclosure within the broader composite.

For practical guidance on integrating integrity evidence with other selection signals in a defensible loop, see the hiring loop design article.

For broader treatment of how personality-pillar evidence integrates with cognitive evidence in selection decisions, see personality vs cognitive in hiring. The two pillars contribute distinct, partly non-overlapping prediction variance, and integrity-test evidence sits within the personality-pillar logic when it’s used at all.

Takeaway

Integrity tests come in overt and personality-based formats and predict counterproductive workplace behavior and general job performance with corrected validity coefficients in the ~0.20 to ~0.41 range, depending on which methodological synthesis is applied. Legal defensibility requires careful attention to EEOC adverse-impact analysis, ADA medical-inquiry boundaries, and instrument selection with documented validity evidence. The methodological dispute between Ones et al. (1993) and the Van Iddekinge et al. (2012) re-analysis remains unresolved; defensible practice treats integrity evidence as moderately predictive rather than top-tier predictive. AIEH treats integrity-test evidence as an optional personality-pillar input rather than a default predictor in the Skills Passport composite, with selective disclosure on the candidate-owned credential and appropriate weighting within the broader multi-method composite logic.

Sources

  • Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2), 262-274.
  • Sackett, P. R., & Lievens, F. (2008). Personnel selection. Annual Review of Psychology, 59, 419-450.
  • Ones, D. S., Viswesvaran, C., & Schmidt, F. L. (1993). Comprehensive meta-analysis of integrity test validities: Findings and implications for personnel selection and theories of job performance. Journal of Applied Psychology, 78(4), 679-703.
  • Van Iddekinge, C. H., Roth, P. L., Raymark, P. H., & Odle-Dusseau, H. N. (2012). The criterion-related validity of integrity tests: An updated meta-analysis. Journal of Applied Psychology, 97(3), 499-530.
  • Sackett, P. R., & Schmitt, N. (2012). On reconciling conflicting meta-analytic findings regarding integrity test validity. Journal of Applied Psychology, 97(3), 550-556.

About This Article

Researched and written by the AIEH editorial team using official sources. This article is for informational purposes only and does not constitute professional advice.
