Candidate Selection: A Definitive Guide
Learn how candidate selection works and how to refine your selection process to build a high-performing workforce.
One of the major blockers preventing organisations from incorporating online testing into their selection processes is concern about fairness and equality.
Put simply, organisations aren’t sure how online testing will affect the fairness of their selection processes, and they avoid it for fear of harming diversity and inclusion initiatives.
Although organisations are right to be concerned, they often fail to recognise the real threat to fairness: the overuse of CV sifting and interviews.
From the research in occupational psychology, we know that CV sifting and interviewing are by far the selection tools most vulnerable to unconscious bias, and diversifying selection processes away from these tools is the best way to improve fairness and equality outcomes.
In this article, I will outline how you can get the most from your online tests by maximising their fairness in employee selection.
One of the major drivers of differences in standardised testing more generally is the availability, quality, and cost of preparation. People from privileged, higher socioeconomic backgrounds simply have greater access to preparation materials than people from deprived backgrounds. This grants some candidates an unfair advantage, puts others at an unfair disadvantage, and interferes with the validity of the assessments.
This is particularly true for students at prestigious universities, who are substantially more likely to come from privileged backgrounds. Students at these universities are afforded every available advantage and opportunity, translating into unfairly improved performance.
Many organisations have concerns about providing practice and preparation materials, worrying that practice will distort the scores and reduce the predictive power of the assessments. For ability and aptitude tests, the opposite is likely to be true, as preparation improves the predictive validity of the assessments.
We know that cognitive ability is effectively the ability to learn and solve problems, and thus smart candidates will benefit disproportionately from preparation.
As a result, preparation makes it easier to identify high-performing candidates, boosting the tests’ predictive validity.
One of the biggest fairness advantages of online tests is the buffer they create against unconscious (or conscious) bias. Because the process is automated, the personal biases of assessors can’t impact selection decisions. However, this assumes that all candidates are held to the same fixed standard, and it becomes moot if hiring managers are allowed to interpret the results however they like. For example, if you allow hiring managers to decide what constitutes an acceptable score, different hiring managers will inevitably react differently, allowing room for bias. This could manifest as candidates from certain backgrounds having an easier time, with more stringent requirements applied to candidates from other backgrounds.
"It is imperative, therefore, that you decide beforehand what the pass mark or cut-score should be, leaving no room for bias."
Ideally, these decisions should be made by human resources or talent acquisition specialists, with no input from hiring managers. This helps standardise selection decisions and avoids the temptation for hiring managers to exert their will over early screening processes. This may be a bitter pill for hiring managers to swallow, especially those who are used to having free rein over the selection process, but the ultimate goal here is to maximise the fairness of online testing, not to appease hiring managers.
Deciding on pass marks and cut-scores can seem tricky, but in practice it is simpler than many expect. The rule of thumb is straightforward: the higher the pass mark, the better the quality of hire.
There isn’t an “optimal” pass mark as such; instead, the higher you can set it, the better the outcomes for the organisation. That being said, you don’t want to screen everyone out and leave yourself without enough candidates for subsequent stages. Consequently, you must work backwards and choose a cut-score that allows you to meet your numbers for the next stage, letting you be highly selective without making things harder for yourself later.
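To make that "work backwards" step concrete, here is a minimal sketch of deriving a cut-score from a target headcount, assuming you have scores from a previous campaign or a norm group to estimate the score distribution; the function name and figures are illustrative rather than part of any particular platform.

```python
import numpy as np

def cut_score_for_target(scores, candidates_needed):
    """Work backwards from the number of candidates required at the next
    stage to the highest cut-score that still lets enough people through.

    scores: test scores from a previous campaign or norm group.
    candidates_needed: how many candidates the next stage requires.
    Note: ties at the cut-score may let a few extra candidates through.
    """
    scores = np.sort(np.asarray(scores))[::-1]  # highest first
    if candidates_needed >= len(scores):
        return scores.min()  # cannot be selective; let everyone through
    # The cut-score is the score of the last candidate you can afford to pass.
    return scores[candidates_needed - 1]

# Illustrative example: 400 applicants expected, 50 interview slots available.
rng = np.random.default_rng(0)
historical_scores = rng.normal(loc=25, scale=5, size=400)
print(round(cut_score_for_target(historical_scores, 50), 1))
```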
Differential item functioning analysis is a statistical test which evaluates actual bias within the questions themselves, rather than just looking at differences between groups. When psychologists conduct an “adverse impact” analysis, they typically look either at differences in overall scores between groups or at differences in pass rates between groups.
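For illustration, here is a minimal sketch of that conventional group-level check: the “four-fifths rule” applied to pass rates. The group labels and numbers are hypothetical.

```python
def adverse_impact_ratio(pass_focal, total_focal, pass_reference, total_reference):
    """Compare pass rates between a focal group and a reference group.

    A ratio below 0.8 is the conventional four-fifths-rule flag for
    potential adverse impact. Note that this flags group-level
    differences in outcomes, not item-level bias.
    """
    focal_rate = pass_focal / total_focal
    reference_rate = pass_reference / total_reference
    return focal_rate / reference_rate

# Hypothetical example: 30 of 60 focal-group candidates pass,
# versus 45 of 60 reference-group candidates.
print(round(adverse_impact_ratio(30, 60, 45, 60), 2))  # 0.67 -> below the 0.8 threshold
```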
The problem with this approach is that differences between groups aren’t the same as bias, and an assessment could well be biased even if there are no differences at the overall test level.
For example, if you compare ability test scores between a group of black PhD candidates and a group of white secondary school dropouts, you would expect to see differences in favour of the PhD group, and failing to do so could indicate bias.
Differential item functioning looks at actual bias by controlling for ability, and then investigating whether or not specific questions are “harder” or “easier” for specific groups.
If you give a test to a group of monolingual English-speaking candidates, and all but one question is in English, that non-English question would be flagged as biased, as it would appear to be “harder” than the data would otherwise suggest.
However, for bilingual candidates, the item would function normally, showing that the item behaves differently depending on the group. This is true question bias, and detecting it makes differential item functioning a far more powerful tool for enhancing fairness in online testing.
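As a sketch of how such an analysis can work, here is a simplified Mantel-Haenszel check for a single item, one common DIF method. It stratifies candidates by total score so that groups of comparable ability are compared on the studied item. In practice you would rely on a dedicated psychometrics package and more robust ability strata; the variable names and group labels here are illustrative.

```python
import numpy as np

def mantel_haenszel_dif(item_correct, group, total_score):
    """Simplified Mantel-Haenszel DIF check for one item.

    item_correct: 1/0 responses to the studied item, per candidate.
    group:        'ref' or 'focal' label per candidate.
    total_score:  total test score, used as the ability stratifier.
    Returns the common odds ratio; values far from 1 suggest the item is
    harder or easier for one group even at the same ability level.
    """
    item_correct = np.asarray(item_correct)
    group = np.asarray(group)
    total_score = np.asarray(total_score)

    num, den = 0.0, 0.0
    for s in np.unique(total_score):            # stratify by ability
        in_stratum = total_score == s
        ref = in_stratum & (group == "ref")
        foc = in_stratum & (group == "focal")
        a = np.sum(item_correct[ref] == 1)      # reference group, correct
        b = np.sum(item_correct[ref] == 0)      # reference group, incorrect
        c = np.sum(item_correct[foc] == 1)      # focal group, correct
        d = np.sum(item_correct[foc] == 0)      # focal group, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else float("nan")
```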
From a test design perspective, the main benefit of differential item functioning analysis is that you can remove inherently biased questions before launch, protecting candidates from the outset. The analysis can also be conducted on an ongoing basis, helping to preserve the fairness of your online tests moving forward. As the assessment is used, the accumulating data can flag issues which may not have been detected initially.
Ultimately, differential item functioning analysis ensures that online assessments remain effective and equitable for all candidates.
Online assessments are an underrated and underutilised tool to improve the fairness of selection processes.
Not only can they act as a powerful buffer against unconscious and implicit bias, but they help diversify the recruitment process away from inherently vulnerable selection tools, including interviews, CV sifting, and academic requirements. To capitalise on these benefits, there are several steps to take, and they require some thought and effort. Given the importance of fairness in online testing, that effort is more than justified, and organisations are well advised to consider online testing an enabler of fairness, diversity, and inclusion, not a blocker.
For more information on how Test Partnership could help improve the fairness, equality, and scalability of your employee selection processes, feel free to book a call with us to discuss your requirements.