From the situational judgment sample test
What does the "teammate-feedback-with-personal-context" SJT scenario measure?
Note on framing: This is the first item-level explainer for the SJT (Situational Judgment) family, distinct from the AI-native families (ACL, AOE) and from the trait-level Big Five family. SJT items measure workplace judgment in everyday, non-AI contexts, and the validity literature for SJTs dates back further (the McDaniel et al. (2001) meta-analysis; the Whetzel & McDaniel (2009) review) than for AI-native scenario formats. Item-level explainers for the SJT family follow the scenario-ladder pattern documented in the Communication scenario explainer, adapted to general-workplace-judgment scenarios and grounded in citations to the SJT validity literature. This explainer establishes the SJT-explainer pattern for future SJT-family item explainers to inherit.
What this scenario measures
This scenario (a teammate delivering work that's 80% complete but entirely missing a critical detail, a personal-context complication, and a manager asking what's going on) measures workplace judgment under cross-functional ambiguity complicated by personal context. Specifically, the item probes whether the respondent recognizes that:
- The manager’s question deserves a factual response grounded in specific work-quality evidence, not advocacy in either direction (covering for the teammate or throwing them under the bus).
- Personal-context information should be acknowledged without speculation — naming that the context exists without trying to play psychiatrist or armchair-diagnose.
- A concrete suggestion for a process or pairing change is higher-leverage than either silence or escalation, because it operationalizes the manager’s question into a productive next step.
- Judgment under this kind of ambiguity is what distinguishes effective workplace contributors from those who over-protect peers, reflexively throw them under the bus, or defer all judgment to manager direction.
The pattern being measured is what the SJT validity literature (McDaniel et al., 2001; Whetzel & McDaniel, 2009) documents as context-specific judgment: the ability to recognize which response best fits the demands of a particular situation. SJTs achieve corrected validity around 0.34 in the meta-analytic literature, a meaningful effect that is particularly valuable for roles where contextual workplace judgment is central.
Why this scenario captures SJT skill well
The scenario does real work as an item because it forces a choice among four genuinely plausible responses, only one of which captures the productive-without-being-harsh-or-soft pattern. Three properties make the dual-constraint structure diagnostic:
- The complication is realistic. Real workplace situations routinely include personal-context complications that candidates have to navigate without becoming either cold-and-mechanical or paternalistic-and-fuzzy. The candidate’s response choice reflects how they actually approach this on the job.
- The graded option ladder catches direction-of-failure. The scoring uses calibrated quality values (5/3/2/1) rather than binary right/wrong. A respondent who picks the PIP-recommendation option (value 3) demonstrates partial competence: they recognize that work-quality issues matter but jump to formal escalation faster than warranted. A respondent who picks the cover-for-teammate option (value 2) signals an empathy-without-honesty failure mode that protects short-term comfort at long-term team cost. The ladder distinguishes these failures in a way binary scoring cannot.
- The best response models a teachable pattern. “Factual description + acknowledged context + concrete suggestion + willingness-to-help-if-open” is not just the right answer for this scenario — it’s a generalizable template that applies to most workplace-feedback situations involving third parties. Strong respondents recognize the pattern; weaker respondents pattern-match to surface features (loyalty-to-teammate, deference-to-manager) without internalizing the productive-honest-with-care principle.
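The direction-of-failure point above can be made concrete with a small sketch. The option labels and helper names below are illustrative (they are not the actual item content); only the 5/3/2/1 value calibration comes from the scoring description:

```python
# Hypothetical sketch of a graded option ladder. Option labels are
# illustrative stand-ins; the 5/3/2/1 values match the calibration
# described in the text.
OPTION_VALUES = {
    "factual_plus_context_plus_suggestion": 5,  # best response
    "recommend_pip": 3,       # partial competence: premature escalation
    "cover_for_teammate": 2,  # empathy-without-honesty failure mode
    "throw_under_bus": 1,     # harsh, context-blind failure mode
}

def binary_score(option: str) -> int:
    # Binary scoring collapses every non-best choice to 0,
    # discarding the direction of the failure.
    return 1 if OPTION_VALUES[option] == 5 else 0

# Graded scoring separates the PIP-jumper (3) from the cover-up (2);
# binary scoring rates them identically.
assert binary_score("recommend_pip") == binary_score("cover_for_teammate")
assert OPTION_VALUES["recommend_pip"] > OPTION_VALUES["cover_for_teammate"]
```

The design choice being illustrated: graded values preserve *which way* a respondent failed, which is exactly the information the explainer says binary scoring throws away.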
What the best response shows (and doesn’t)
Picking the value-5 option demonstrates situation-specific workplace-judgment skill — but it does not demonstrate broader SJT skill in the trait sense. Three specific misconceptions worth flagging:
- Picking the right option ≠ being a strong workplace contributor generally. A respondent can pattern-match to one well-known template (productive-honest feedback) without internalizing the underlying principle of context-aware honest communication. Stronger predictors of general SJT skill come from the full 40-scenario assessment, which probes workplace judgment across diverse contexts (multi-stakeholder priority triage, customer escalations, ethical dilemmas, scope-vs-timeline pushback).
- Picking a lower-tier option ≠ being a weak workplace contributor. Real workplace effectiveness includes dimensions the scenario doesn’t measure (technical skill, domain expertise, persistence under pressure, networking and influence-building). A respondent strong on those dimensions but weaker on the specific feedback pattern can still be a competent workplace contributor.
- The best response isn’t context-universal. In some contexts (high-stakes regulated industries with strict performance documentation requirements, organizations with explicit “no informal feedback to managers about peers” policies), more formal escalation is the correct pattern. The scenario’s value-5 framing assumes a typical knowledge-work context where direct factual feedback is appropriate.
How the sample test scores you
In the AIEH 5-scenario Situational Judgment sample, this scenario contributes one of the five datapoints that aggregate into your single sjt_quality score. The W3.2 scoring fix normalizes by item count, so your score is the average of your five scenario values mapped onto a 1–5 scale, then bucketed into low (≤2), mid (≤4), or high (>4) for the directional result.
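The aggregation just described can be sketched in a few lines. This is a minimal illustration under the stated assumptions (per-item values already on the 1–5 scale, bucket thresholds of ≤2, ≤4, and >4); the function names are illustrative, not the actual implementation:

```python
# Sketch of the sample-test aggregation: average the five scenario
# values (already on the 1-5 scale), then bucket the result into a
# directional outcome. Thresholds follow the text: low <= 2,
# mid <= 4, high > 4.
def sjt_quality(values: list[int]) -> float:
    # W3.2-style normalization: divide by item count,
    # not by a fixed total.
    return sum(values) / len(values)

def bucket(score: float) -> str:
    if score <= 2:
        return "low"
    if score <= 4:
        return "mid"
    return "high"

# Example: best response on three items, PIP option on two.
assert sjt_quality([5, 5, 5, 3, 3]) == 4.2
assert bucket(4.2) == "high"
assert bucket(sjt_quality([2, 2, 1, 3, 2])) == "low"
```

Note that normalizing by item count (rather than a fixed denominator) keeps the score on the same 1–5 scale regardless of how many scenarios a respondent completed.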
Data Notice: Sample-test results are directional indicators only. Five-scenario SJT samples are too few to be psychometrically valid; for a verified Skills Passport credential, take the full 40-scenario assessment.
The full 40-scenario assessment expands coverage across more diverse workplace-judgment contexts and produces a calibrated score on the AIEH 300–850 scale via the scoring methodology. For broader treatment of how SJT fits into role-readiness scoring, see the hiring-loop design overview.
Related concepts
- Situational Judgment Test (SJT) format. A selection-research format that presents workplace scenarios with multiple-choice or rated responses; the McDaniel et al. (2001) meta-analysis documented ~0.34 corrected validity for SJTs across the literature, comparable to other validated selection methods. SJTs show smaller adverse-impact exposure than cognitive testing in some studies, making them attractive in bias-conscious selection contexts.
- Behavioral vs situational interview questions. SJTs use the situational-question format (hypothetical future behavior in specific contexts) rather than the behavioral-question format (specific past behavior). Situational and behavioral questions both produce meaningful selection signal; the formats have different strengths and trade-offs (see interview question design for the broader treatment).
- Steel-manning the counterargument. The discipline of presenting the strongest version of the alternative view before refuting it. Strong workplace feedback often involves implicit steel-manning — acknowledging the context that complicates the situation while still delivering the substantive feedback.
- The “no surprises” principle. Senior stakeholders prefer to be told problems clearly and early rather than to discover them themselves later. The scenario’s value-5 response embodies this — the manager isn’t surprised later by either the work-quality issues or the personal-context complication, because the response surfaces both.
For role-specific bundles where SJT is moderately-to-highly weighted, see the UX Designer role page (SJT 0.70 — highest among role bundles to date because cross-functional design judgment is central to the role) and the Security Engineer role page (SJT 0.55, reflecting incident-response and risk-judgment dimensions).
Sources
- McDaniel, M. A., Morgeson, F. P., Finnegan, E. B., Campion, M. A., & Braverman, E. P. (2001). Use of situational judgment tests to predict job performance: A clarification of the literature. Journal of Applied Psychology, 86(4), 730–740.
- Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology. Psychological Bulletin, 124(2), 262–274.
- Whetzel, D. L., & McDaniel, M. A. (2009). Situational judgment tests: An overview of current research. Human Resource Management Review, 19(3), 188–202.