The impact of artificial intelligence (AI) has been undeniably transformative, but not always positive. It has been particularly disruptive in academia, where educators are struggling to distinguish AI-generated content from student-written content, raising uncomfortable questions about the future of educational assessment. These concerns are also creeping into the occupational space, particularly in employee selection and assessment. Increasingly, AI is being used to mass-produce CVs, cover letters, application form responses, and perhaps even answers to pre-employment tests, reducing the efficacy of these screening tools.

In this article, we will outline the impact of AI on employee selection and assessment in the workplace. In particular, we will highlight specific areas of vulnerability and offer practical recommendations for mitigating the risk, including our AI-proof assessments.


CVs, application forms, cover letters, and written content

The most pervasive impact of AI has been on written content.

The internet is now flooded with AI-generated written content, much of which is indistinguishable from human-written content. Employee selection and assessment are no different, and savvy candidates are using AI to create huge volumes of applications. Although not every candidate uses AI to make applications, those who do have a significant advantage over those who don’t. AI allows candidates to significantly scale up their applications, potentially by orders of magnitude. Consequently, it would be safe to assume that most applications received by an organisation will be at least curated by AI, if not entirely generated by AI.

The most obvious AI use case for candidates is creating CVs and résumés. Historically, organisations have used CVs as a pre-screening tool, evaluating a candidate's written ability based on the quality of their CV. Additionally, if a CV appeared to be tailored to a specific role, sector, or industry, this was taken as evidence of a particular commitment to that field, scoring points with hiring managers. Now, however, AI can create tailored CVs from scratch at scale, making them highly specific and virtually free from spelling and grammatical errors. This can easily fool hiring managers, convincing them of a candidate's writing skills under false pretences.

Application form responses and cover letters are another logical use case for AI, allowing candidates to quickly overcome this hurdle. Not only do many organisations use application form responses and cover letters as a proxy for writing ability, but many companies outsource marking of these materials to specialised third-party providers.

The use of AI nullifies this selection criterion, removing the association between response quality and writing ability.

Moreover, AI-detection software designed to identify AI-written content is notoriously unreliable, producing large numbers of both false positives and false negatives.
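To see why false positives are such a problem in practice, consider a simple Bayes' rule calculation. All figures below are illustrative assumptions, not measured accuracy rates for any real detection product: even a detector that is right 90% of the time flags many genuinely human-written applications when only a minority of candidates actually use AI.

```python
# Illustrative only: sensitivity, specificity, and the cheating base rate
# are assumed figures, not benchmarks for any real AI-detection tool.
def flag_precision(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Probability that a flagged response really is AI-written (Bayes' rule)."""
    true_flags = sensitivity * base_rate            # AI-written and flagged
    false_flags = (1 - specificity) * (1 - base_rate)  # human-written but flagged
    return true_flags / (true_flags + false_flags)

# Suppose the detector catches 90% of AI text, wrongly flags 10% of human
# text, and 20% of candidates actually use AI.
print(round(flag_precision(0.9, 0.9, 0.2), 2))  # → 0.69
```

Under these assumed numbers, roughly three in ten flagged candidates would be innocent, which is why acting on detector output alone is risky for employers.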


Video interviews

Video interviews are surprisingly vulnerable to AI-based cheating, and many organisations are simply unaware of how candidates are using AI to get ahead.

Unlike face-to-face interviews, video interviews are conducted remotely online, giving candidates easy access to AI tools before, during, and after the interview. Candidates can use AI in a number of different ways, and these methods are often very difficult for interviewers to detect in practice. Indeed, many interview-specific AI tools have been brought to market, specialising in giving candidates an advantage during a remote interview.

Perhaps the most vulnerable video interview format is the one-way, or asynchronous, video interview. These are effectively just pre-recorded answers to specific interview questions, which are uploaded to a video interview platform and evaluated by assessors.

Candidates can input the interview questions into their chosen AI and rapidly generate a script to read from; they could copy and paste the content, take a screenshot, or simply type the question in. Although the candidate still needs to deliver the script convincingly to impress their interviewers, this reduces the efficacy of the interview as a screening tool, as reading clearly out loud is rarely an essential competency for the role.

Two-way, or synchronous, interviews are also vulnerable to AI cheating, albeit less so. Specialised interview AI software exists which can listen to the interviewer's questions and generate an idealised response automatically, allowing candidates to simply read a script in real time. Naturally, this requires more planning and preparation than using a general-purpose AI for script generation, but it remains a viable option for cheating. Once again, success still relies on effective communication skills, but almost any interviewer would take offence at a candidate employing AI in this way.


Traditional online assessments

In our experience, organisations are most concerned about the impact of AI-based cheating on pre-employment tests.

Pre-employment tests are often essential in high-volume recruitment campaigns, as they are extremely powerful predictors of future job performance while remaining one of the few selection tools that scale. Unlike CV sifting and interviews, the time required to assess 10, 100, or 1,000 applicants is largely the same, saving significant time and resources for HR teams. However, this approach hinges on quality data; if candidates are cheating on the assessments, it becomes an exercise in futility.

The online assessments most vulnerable to AI-based cheating are knowledge and skills tests. Any assessment which asks the candidate for a publicly available, known answer is especially exposed, as AI models can rapidly retrieve the answer. Multiple-choice questions are particularly vulnerable: candidates can simply take a screenshot of the entire question and upload it to their chosen AI model. Although publishers do their best to limit this by restricting the candidate's ability to copy, screenshot, or use the clipboard, nothing stops candidates from taking a picture on their phone and finding the answers with relative ease.


Traditional aptitude tests, including verbal, numerical, and inductive reasoning tests, are generally less vulnerable to AI-based cheating than knowledge tests, but they are not immune. Candidates can screenshot aptitude test questions and input them into their chosen AI model, receiving what the AI believes to be the correct answer. As of 2024, even the best language models struggle to outperform the typical human on these tests, achieving low to mediocre scores. Even so, for organisations that use relatively low pass marks, AI-based cheating could still be a viable strategy to pass. Additionally, given how rapidly AI is advancing, language models are expected to meet and eventually exceed the performance of the average candidate, making this type of cheating increasingly effective.


Gamified online assessments

Gamified assessments are uniquely able to nullify AI-based cheating attempts and represent a promising assessment method moving forward.

Gamified assessments are designed to measure the same underlying constructs as traditional online assessments but incorporate game mechanics to make the tasks more complex, enjoyable, and engaging. These game mechanics are impossible to convey using a screenshot alone, making language models ineffective for cheating attempts. This helps protect the integrity of the assessment and is likely to do so for quite some time.


Additionally, game mechanics help shorten administration time by reducing the reading component of an online task. Traditional verbal and numerical reasoning tests both require significant reading, which in turn requires a longer time limit to appraise and answer correctly. Gamified assessments, however, can replace excessive reading with additional game mechanics, maintaining task complexity without needing long time limits. Shorter time limits minimise opportunities to cheat, as candidates have less time between questions to input data into an AI model.

Lastly, gamified assessments are particularly suited to mobile devices and are disproportionately completed on them. Although AI models can be accessed on mobile devices, desktops and laptops are far better suited to AI-based cheating. For example, a desktop setup can have multiple monitors, allowing users to open their chosen AI model alongside their assessment. On a mobile device, by contrast, switching between apps mid-assessment is significantly less practical.


Summary and Conclusions

AI has introduced numerous challenges and opportunities to almost every sector of the workforce. Although there are potential uses for AI in human resources and talent acquisition, the challenges currently seem to outweigh the opportunities. Candidates who use AI in the application process gain a significant advantage, raising important questions for employing organisations. A common theme, however, is the vulnerability of written and spoken information, as the most popular AI tools are large language models. Gamification represents a particularly promising avenue for reducing the verbal component of assessment tools, making it harder to cheat using AI.

For more information on how Test Partnership could help reduce AI-based cheating in employee selection and assessment, book a call with one of our business psychologists and let’s explore this issue.