
What cut-off scores should you use for your employment assessments?

Written by
Ben Schwencke

Cut-off scores represent the minimum level of performance an applicant must achieve to be considered for a job or to move on to the next stage of the selection process. Without cut-off scores, assessments themselves serve no purpose, as the utility of pre-employment testing is only realised when you screen out poor performers on that assessment.

Most organisations pick a cut-off score that sounds reasonable - the top third, a 70% pass mark, whatever the platform defaults to - and never revisit it. It's treated as a configuration detail rather than a strategic decision, which is a mistake. Your cut-off score is arguably the single most consequential setting in your entire assessment process. Get it wrong in either direction and you either screen out candidates you needed or render your assessments effectively useless. Here is how to actually decide.

What your cut-off score is actually doing

It is worth being clear about what a cut-off score is actually doing, because it is easy to think of it as a simple quality filter — set it higher, get better candidates. That is true, but it is only half the picture. The cut-off simultaneously controls the quality of candidates who progress and the proportion of candidates who progress, and those two things move in opposite directions. Raise the cut-off and you improve quality but shrink your shortlist. Lower it and you widen your pool but let in more risk. There is no setting that gives you both. Every cut-off decision is making that trade-off, whether you have thought about it explicitly or not.

Your applicant pool size and number of hires should determine the cut-off score

The most common mistake organisations make is deciding on a cut-off score before thinking about how many people are actually going to take the assessment. The logic needs to run the other way.

If you need to hire 10 people and you are expecting 80 applicants, setting a cut-off at the 90th percentile means only around 8 people pass - and that is before you account for drop-off, interview rejections, or candidates who accept other offers. You have already undermined your own process before a single person has applied.

Work backwards from what you need. If you need 20 candidates to reach the interview stage and you are expecting 200 applicants, a cut-off at the 90th percentile gives you exactly what you need with no margin. A cut-off at the 80th percentile gives you 40 - a more comfortable buffer. A cut-off at the 95th percentile gives you 10, which probably is not enough.
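The arithmetic above is easy to sketch. This is a minimal illustration, assuming scores are percentile-ranked within the applicant pool, so a cut-off at the Pth percentile passes roughly (100 − P)% of applicants; the figures mirror the examples in the text.

```python
# Rough pass-count arithmetic for percentile-based cut-offs.
# Assumes percentile-ranked scores: a cut-off at the Pth percentile
# passes roughly (100 - P)% of applicants.

def expected_passes(applicants: int, percentile_cutoff: float) -> int:
    """Approximate number of applicants clearing a percentile cut-off."""
    return round(applicants * (100 - percentile_cutoff) / 100)

applicants = 200
for cutoff in (80, 90, 95):
    print(f"{cutoff}th percentile -> ~{expected_passes(applicants, cutoff)} pass")
# 80th -> ~40, 90th -> ~20, 95th -> ~10
```

Running the same sum for 80 applicants at the 90th percentile gives roughly 8 - the under-resourced scenario described above.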

It is also worth noting that performance on pre-employment assessments typically approximates a normal distribution. The majority of applicants will score around the average, with fewer at either extreme. Organisations sometimes assume their applicant pool will be unusually strong by default and set aggressive cut-offs accordingly. Statistically, this is almost never the case. Plan your cut-offs around a realistic spread, not an optimistic one.

  1. High volume programmes (graduate schemes, apprenticeships, large contact centre intakes) can support higher cut-offs because the sheer number of applicants means you will still have plenty of candidates passing through even at the 80th or 90th percentile.
  2. Mid-volume experienced hire roles should sit more cautiously, typically somewhere between the 50th and 75th percentile depending on how competitive the role is.
  3. Low volume specialist roles require the most care. If you are only expecting 30 or 40 applicants for a niche position, a high cut-off can leave you with nobody to interview.

More cognitively demanding roles require higher cut-off scores

Not every role justifies the same threshold, and setting a high cut-off where the role does not warrant it creates problems without delivering any quality benefit.

A cognitively demanding role - a data analyst, a commercial lawyer, a software engineer - has a genuine ceiling on what someone needs to be capable of to do the job well. A high cut-off on a cognitive ability test makes sense here, because the assessment is measuring something the role actually requires. The cost of a weak hire is significant, and the assessment is doing real filtering work.

For roles where the cognitive or behavioural demands are more moderate, an aggressive cut-off starts screening for things the job does not require. You end up rejecting candidates who would have been perfectly capable in the role, purely because they did not score in the top quartile on a test measuring abilities that are only tangentially relevant to their day-to-day work.

The principle is straightforward: the higher the cut-off, the more confident you need to be that the assessment is measuring something genuinely important for performance in that specific role. If you are not sure that relationship holds, err towards a more moderate threshold.

Higher cut-off scores affect adverse impact

This is worth addressing directly, because it is both a legal consideration and a practical one that affects the quality of your shortlist.

Adverse impact occurs when a selection criterion, including a cut-off score, disproportionately screens out candidates from particular demographic groups, even if that was never the intention. Research consistently shows that certain types of pre-employment assessment produce score differences between demographic groups. When you set a high cut-off on an assessment with meaningful group differences, those disparities become more pronounced. More diverse candidates are screened out, and your shortlist becomes less representative than your applicant pool was.

The higher the cut-off, the greater the adverse impact tends to be. This is not an argument against using cut-offs - it is an argument for paying attention to the adverse impact profile of your assessment before deciding where to set the line.

Some assessments are designed and validated specifically to minimise group score differences while maintaining their predictive validity. If your assessment provider cannot give you adverse impact data, or cannot tell you how pass rates vary across demographic groups at different thresholds, that is important information. The lower the adverse impact of the underlying assessment, the more room you have to set a competitive cut-off without the legal exposure or the diversity cost.
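One common way to quantify this - not named in the article, but widely used in practice - is the impact ratio: a group's pass rate divided by the pass rate of the highest-passing group, with values below 0.8 (the "four-fifths rule" from the US Uniform Guidelines) often treated as a flag for potential adverse impact. A minimal sketch with illustrative, made-up pass rates:

```python
# Impact-ratio check at a given cut-off. The four-fifths rule treats
# a ratio below 0.8 as a common flag for potential adverse impact.
# Group labels and pass rates below are illustrative, not real data.

def impact_ratio(pass_rate_group: float, pass_rate_reference: float) -> float:
    """A group's pass rate relative to the highest-passing group."""
    return pass_rate_group / pass_rate_reference

# Hypothetical pass rates at a 90th-percentile cut-off.
pass_rates = {"group_a": 0.25, "group_b": 0.15}
reference = max(pass_rates.values())

for group, rate in pass_rates.items():
    ratio = impact_ratio(rate, reference)
    status = "flagged" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
# group_b: 0.15 / 0.25 = 0.60, below the 0.8 benchmark
```

Re-running this check at each candidate threshold shows concretely how raising the cut-off widens the disparity the article describes.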

To address the potential adverse impact of high cut-off scores, companies should carefully analyse their pre-employment tests and selection processes to identify and mitigate any unintended biases or disparities.

Answer these three questions before setting a cut-off score

Rather than just picking a number and hoping for the best, use these three questions as a framework to help you find the right range:

  1. How many applicants do I expect, and how many do I need to progress? - Work out the ratio. If you need 30 candidates at interview stage and you expect 150 applicants, you need a pass rate of at least 20%, which puts your cut-off at around the 80th percentile at the most aggressive end. Build in a margin.
  2. How demanding is this role, genuinely? - Be honest about whether a high cut-off is reflecting a real job requirement or just a preference for high scorers. The assessment should be filtering for capabilities the role actually uses.
  3. What does the adverse impact data look like at different thresholds? - Ask your assessment provider for pass rate data broken down by demographic group at the cut-off you are considering. If the disparity is significant, consider whether a slightly lower cut-off achieves a similar quality outcome with meaningfully less adverse impact.

Conclusion and next steps

Getting your cut-off score right is one part of building an assessment process that actually works. At Test Partnership, we offer on-demand support throughout setup, so you can configure your assessments with confidence rather than working it out as you go. If you are thinking about using psychometric assessments in your hiring and want to make sure they are set up properly from the start, we would be happy to have a conversation.

Primary author

Ben Schwencke

Chief psychologist at Test Partnership. MSc in Organisational Psychology with over ten years' experience in psychometric testing.