How to Assess Candidates: The Proven, Research-Backed Method
How to assess candidates with reliable, science-backed methods proven to improve hiring accuracy and efficiency.
Hiring teams usually introduce candidate assessments because they are overloaded with applicants, lack useful information on who is likely to perform well, and want to shortlist strong candidates as early as possible.
If you are new to assessments and want to stop spending weeks reviewing CVs and running early phone interviews with people who were never right for the role, candidate assessments can help. The key is using the right methods at the right stage, and avoiding common pitfalls. This guide explains how to do that properly.
"Candidate assessment" is a broad term that covers far more selection methods than many people realise. It includes any structured method used to evaluate candidates, such as cognitive ability tests, personality tests, work samples, assessment centres and even interviews.
Each of these methods is designed to measure different attributes. They are not interchangeable, and each provides insight into a different aspect of candidate suitability.
Cognitive ability assessments reflect how quickly someone can understand information, solve problems, and adapt when things change (common formats include numerical, verbal, and logical reasoning). These abilities underpin learning speed and problem-solving across most roles and are difficult to infer reliably from CVs.
Personality assessments measure behavioural preferences and working style. They add useful context around how someone is likely to approach work and interact with others.
Situational judgement tests (SJTs) assess how candidates judge and respond to job-related scenarios. They are designed to capture decision-making tendencies in contexts similar to those encountered on the job.
Work samples and job simulations evaluate how candidates perform on representative tasks. These methods focus on observable behaviour rather than inferred capability.
Interviews are a staple of all hiring processes (and rightly so): they explore experience, communication, motivation, and fit. There are two broad types: unstructured interviews don't follow a strict framework and are closer to a free-flowing conversation, while structured interviews ask all candidates the same questions and assess responses in a more consistent way. We cover the differences in detail in our guide to structured vs unstructured interviews.
Each assessment method measures something different. Understanding what each one is designed to capture is the first step to using them effectively.
Candidate assessment methods differ not just in what they measure, but in the advantages they offer and how well they predict job performance. Some methods are highly scalable and objective, while others provide richer insight but require more time and resource.
The table below compares the most common candidate assessment methods, showing what each measures, how strongly it predicts job performance, and its key strengths.
| Method | What it measures | Predictive accuracy (r) | Key strengths | Limitations |
|---|---|---|---|---|
| Cognitive and ability assessments (GMA) | Learning ability, reasoning, problem-solving | 0.65 | Strongest single predictor of job performance; scalable and objective | Quality depends on test design and validation |
| Structured interviews | Experience, behaviour, communication | 0.58 | Consistent evaluation of experience, motivation, and fit | Time-intensive and harder to scale |
| Unstructured interviews | General impressions and conversation | 0.38–0.58 | Flexible and conversational | Highly inconsistent and prone to bias |
| Situational judgement tests (SJTs) | Judgement in work-related scenarios | ~0.34–0.41 | Role-specific insight without full simulations | Lower predictive power than cognitive ability |
| Work samples and job simulations | Performance on representative job tasks | ~0.33–0.54 | Direct observation of task performance | Time-consuming and difficult to scale |
| Personality assessments (conscientiousness) | Behavioural tendencies and working style | 0.22 | Useful context for team fit and working style | Weak predictor when used alone |
Predictive accuracy values are correlations from meta-analytic research (e.g. Schmidt & Hunter and subsequent studies) and vary by role, assessment design, and implementation quality. A correlation of 1.0 would indicate perfect prediction of job performance, so a value such as 0.65 for cognitive ability makes it a very strong predictor.
For context, CV-based indicators such as years of education or past experience have low predictive accuracy, typically around 0.10–0.35. This contrast helps explain why structured assessments, particularly cognitive ability assessments (0.65), are far more effective for predicting job performance.
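To make these correlation values concrete, squaring r gives the approximate share of job-performance variance a method accounts for. A minimal sketch, using only the validities quoted above:

```python
def variance_explained(r: float) -> float:
    """Share of job-performance variance accounted for by a predictor
    with validity r (r-squared, expressed as a proportion)."""
    return r ** 2

# Validities taken from the comparison table above
validities = {
    "cognitive ability (GMA)": 0.65,
    "structured interviews": 0.58,
    "conscientiousness": 0.22,
    "years of education (CV)": 0.10,
}

for method, r in validities.items():
    print(f"{method}: r = {r:.2f} -> explains {variance_explained(r):.0%} of variance")
# cognitive ability (GMA): r = 0.65 -> explains 42% of variance
# years of education (CV): r = 0.10 -> explains 1% of variance
```

Note that doubling r quadruples the variance explained, which is why the practical gap between structured assessments and CV-based indicators is even larger than the raw r values suggest.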
Across the research, two methods stand out. Cognitive ability assessments show the strongest single relationship with job performance. Structured interviews are the most effective way to assess experience, motivation, and fit in a consistent and comparable way.
Used together, these methods complement each other well and result in strong predictive power.
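One way to see this complementarity is the standard two-predictor multiple-regression formula for combined validity. In the sketch below, the 0.65 and 0.58 validities come from the table above, but the 0.4 intercorrelation between the two methods is an illustrative assumption, not a figure from this article:

```python
import math

def combined_validity(r1: float, r2: float, r12: float) -> float:
    """Multiple correlation R for two predictors with criterion validities
    r1 and r2 and intercorrelation r12 (two-predictor regression formula)."""
    r_squared = (r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2)
    return math.sqrt(r_squared)

# Validities from the table; 0.4 is an assumed, illustrative intercorrelation
r = combined_validity(r1=0.65, r2=0.58, r12=0.4)
print(f"Combined validity of GMA + structured interview: R = {r:.2f}")
# prints R = 0.74 under these assumed inputs
```

Because the two methods are only moderately correlated under this assumption, each adds information the other misses, which is why the combined R exceeds either validity on its own.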
Understanding the strengths of different assessment methods is only part of the picture. How effective an assessment is also depends on when it is used in the hiring process.
Early stages of hiring typically involve large applicant pools and limited time. At this point, methods that are objective, scalable, and predictive add the most value. Cognitive ability assessments, often combined with personality assessments, are well suited here because they allow teams to shortlist candidates based on skills and potential rather than background.
Using cognitive ability tests at the start is a far simpler and more effective way to shortlist candidates than manual CV screening. They are delivered through a candidate assessment platform, so no supervision or manual screening is required. Whether you assess 10 candidates or 10,000, the time investment is effectively the same, which reduces the risk of missing high-potential candidates who look less polished on paper.
Interviews and work samples work best later on, once the applicant pool is small enough to justify the time investment. They're much more effective when you are assessing candidates who already meet the baseline requirements, allowing you to explore motivation and fit rather than discovering gaps in capability within the first few minutes.
A simple rule of thumb: use scalable, objective assessments (such as cognitive ability tests) early to shortlist the applicant pool, and reserve time-intensive methods like structured interviews and work samples for the shortlisted candidates.
Candidate assessments are powerful, but many problems arise from how they are applied rather than from the assessments themselves.
It's not as simple as adding more assessments to get better data and better decisions. There is a balance to strike. Each additional assessment adds friction for candidates (increasing fatigue and drop-off rates), and beyond a certain point the extra information you gain starts to level off. In practice, a small number of well-chosen assessments is usually more effective.
Structured interviews and work samples are valuable, but they are not designed to shortlist large pools. Using them early makes hiring slower and more resource-intensive than necessary.
It is easy to assume that two numerical reasoning tests from different providers will perform in the same way. In practice, assessment quality can vary significantly. Reliability, validity, question design, and whether tests are static or supported by large item banks all affect accuracy and fairness.
The science behind the assessment matters, and choosing a provider with strong psychometric foundations is essential. This is an area we focus on heavily at Test Partnership: our assessment science outlines how our assessments are developed and tested.
Early-career candidates often lack fully developed technical skills simply because they have had limited opportunity to build them. Screening heavily for hard skills at this stage can exclude high-potential candidates who would learn quickly on the job. We discuss this in more detail in our evidence-based early-careers hiring strategy.
Soft skills only add value when they are clearly linked to the role. Measuring generic traits without context can dilute decision-making rather than improve it. Success in certain roles is often underpinned by specific soft skills. For example, customer-facing roles often depend on communication, agreeableness and empathy.
Getting started with candidate assessments doesn't need to be complicated.
It's often wise to start with just one role. Decide what you need to measure to predict success in that role; often this will be a combination of cognitive ability and personality traits. If you are unsure what success in the role looks like in terms of cognitive ability or personality, this is where expert input helps.
TIP: You can discuss your role, requirements, and any existing competency frameworks you may have with one of our business psychologists, who can help curate an appropriate combination of assessments and explain what to look for in the results.
Based on the research and our own experience, cognitive ability assessments are usually most effective as the first screening stage. Using them early allows you to shortlist candidates based on skills and potential, keeping the process focused and saving time for both hiring teams and candidates. You can then use interviews and work samples on your shortlist to add context and make your final decisions.
At Test Partnership, assessments are designed to support this approach. Our ability and personality assessments are scientifically validated, adaptive, and built using large item banks to ensure accuracy, fairness, and a strong candidate experience. Hiring teams can set up assessments quickly, review results clearly, and combine different tests depending on the role.
The next step is to explore specific assessment types in more detail and decide how they fit into your hiring workflow. You can book a free consultation with one of our business psychologists to understand how candidate assessments could fit your hiring needs.