The science of effective candidate selection
Ben Schwencke explains why cognitive ability is the strongest predictor of job performance and how to build it into a practical selection process.
Most organisations believe they hire well. The data suggests otherwise. Research from Harvard Business Review puts the cost of a single bad hire at around five times their annual salary — and that's before accounting for the knock-on effects on team morale, manager time, and customer experience. The uncomfortable truth is that most hiring processes are built on methods that were never designed to predict job performance in the first place.
This guide looks at why that happens, what the research says about the most effective selection tools, and how to start making decisions based on evidence rather than instinct.
Hiring decisions are some of the most consequential choices an organisation makes — yet they often receive less structured attention than decisions of far lower stakes. When a selection process fails, the effects ripple outward: productivity drops, team performance suffers, and the managers responsible for fixing the problem lose weeks of their time doing so.
Team productivity drops by 12% when teams include even one low performer, and good employees are 54% more likely to leave when working alongside poor performers.
The irony is that most bad hires aren't the result of dishonest candidates or careless managers. They happen because the methods used to evaluate candidates simply don't measure what actually drives performance. Fix the method, and you fix most of the problem.
Two methods dominate most hiring processes: CV screening and the interview. Both are deeply familiar, both feel credible, and neither is particularly good at predicting job performance.
CVs tell you what someone has done — their history, their titles, their years of experience. What they can't tell you is whether any of that translates into performance in your role. Research shows that experience and qualifications alone predict job performance with roughly 9% accuracy in most roles. A candidate with a decade of experience isn't necessarily better than one with two years; what matters is their ability to learn, think, and adapt — and CVs don't measure any of that.
Unstructured interviews — the kind where managers ask different questions to different candidates and score based on gut feeling — fare only marginally better, at around 14% predictive accuracy. The problem isn't the conversation itself; it's that without a consistent framework, interviewers end up measuring how likeable or confident someone seems, not how well they'll perform. Bias creeps in easily, and the signal gets lost in the noise.
The core problem: Traditional methods measure what candidates have done, not what they can do. They assess history rather than potential — which is exactly the wrong thing to optimise for in most modern roles.
Decades of occupational psychology research have produced a reasonably clear picture of which selection tools actually predict performance. Here's how the most common methods compare:
| Selection method | Predictive validity | Key limitation |
|---|---|---|
| Cognitive ability tests | ~65% | Needs to be paired with other methods for a complete picture |
| Work sample / simulation tests | ~54% | Time-intensive to design and administer at scale |
| Structured interviews | ~51% | Requires training and consistency to work; doesn't scale easily |
| Personality questionnaires | ~25–40% | Most predictive when combined with cognitive ability data |
| Unstructured interviews | ~14% | Highly susceptible to bias; measures confidence, not competence |
| CV / experience screening | ~9% | Measures history, not potential; misses high performers from non-traditional backgrounds |
The evidence points clearly to cognitive ability as the single most effective predictor of job performance — across roles, industries, and levels of seniority. Cognitive ability tests measure problem-solving, learning speed, and the capacity to process and apply information under pressure. These are the foundational skills that underpin performance in virtually every knowledge-based role.
Used on its own, a cognitive ability test predicts performance with around 65% accuracy — roughly seven times better than a CV alone. Combined with a structured interview or personality questionnaire, that figure climbs further still, with some research suggesting combined approaches can reach up to 85% accuracy. The key is using tools that measure what actually drives performance, rather than what's simply convenient to assess.
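To make "combining tools" concrete, one simple approach is to standardise each assessment score and weight it by the method's relative predictive validity. The sketch below is illustrative only — the weights loosely mirror the validity figures above, and the field names and 0–100 scale are assumptions, not a validated scoring model:

```python
# Sketch: combine assessment scores into one weighted composite.
# Weights are illustrative, loosely proportional to the validity
# figures in the comparison table; they are an assumption, not a
# validated model.

WEIGHTS = {
    "cognitive_ability": 0.65,
    "structured_interview": 0.51,
    "personality": 0.30,
}

def composite_score(scores: dict) -> float:
    """Weighted average of standardised (0-100) assessment scores.

    Only the methods actually administered need to be present;
    weights are renormalised over the scores supplied.
    """
    total_weight = sum(WEIGHTS[k] for k in scores)
    return sum(scores[k] * WEIGHTS[k] for k in scores) / total_weight

candidate = {
    "cognitive_ability": 82.0,
    "structured_interview": 74.0,
    "personality": 68.0,
}
print(round(composite_score(candidate), 1))  # prints 76.3
```

Because the weights are renormalised, the same function works whether a candidate completed two assessments or all three — useful when a process adds tools gradually rather than all at once.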
If your current process leans heavily on CVs and unstructured interviews, you're not alone — most organisations do. But you're also likely making decisions with far less accuracy than you could be. The good news is that switching to more effective methods doesn't require a complete overhaul of your process; it usually means adding one or two structured tools at the right point in the funnel.
Starting with cognitive ability assessments early — before interviews, not after — is the single highest-impact change most hiring teams can make. It filters more accurately, reduces time spent on weak candidates, and levels the playing field for candidates from non-traditional backgrounds.
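As a sketch of what "assessments before interviews" means in practice, the snippet below filters an applicant pool on a cognitive test score before any interview time is committed. The 60-point cut-off and field names are arbitrary illustrations, not recommendations from the research:

```python
# Sketch: screen-first funnel. Candidates sit a cognitive ability
# test first; only those above a cut-off progress to interview.
# The 60-point threshold is illustrative only.

def shortlist(candidates: list, cutoff: float = 60.0) -> list:
    """Return candidates who pass the cognitive screen, best first."""
    passed = [c for c in candidates if c["cognitive_score"] >= cutoff]
    return sorted(passed, key=lambda c: c["cognitive_score"], reverse=True)

applicants = [
    {"name": "A", "cognitive_score": 71.0},
    {"name": "B", "cognitive_score": 55.0},
    {"name": "C", "cognitive_score": 88.0},
]
print([c["name"] for c in shortlist(applicants)])  # prints ['C', 'A']
```

Running the screen at the top of the funnel is what delivers the time savings: interviewers only ever see the candidates most likely to perform, regardless of what their CVs look like.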