Should you still be screening CVs now AI is here?
Discover how the introduction of AI has affected one of the most traditional screening methods used: CV screening.
According to recent research, 83% of candidates say they would use AI during a job application if they thought they wouldn't get caught. 88% of students are already using it for university assignments (HEPI, 2025), and those students are your next wave of applicants.
AI use in hiring processes isn't something that's coming - it's already here, and most hiring teams haven't updated their processes to account for it.
Thankfully, not every part of the process is equally at risk. Some methods are relatively AI-resistant, whilst others have been broken by it.
Traditional CV screening relied on two key assumptions: that strong written communication indicated genuine skill, and that well-tailored applications showed authentic interest in the role. AI has shattered both assumptions.
Candidates can now produce dozens of perfectly tailored CVs and cover letters in minutes, each one appearing to demonstrate genuine interest and strong communication skills. Even application form responses, once used to assess cultural fit and thinking processes, can be easily generated to match any competency framework.
And because not every candidate is doing this, those who are have a straightforward advantage over those who aren't, regardless of who is more qualified.
That said, the vulnerability is most acute in high-volume hiring. For experienced hires where application numbers are low, CV sifting is still a reasonable way to decide who to invite to interview. When you're choosing between a small group of credible applicants, a CV still tells you enough to make that call. But once you're dealing with hundreds of applicants for early careers, graduate, or high-volume roles, CV quality stops being a useful signal and the AI vulnerability becomes a real problem.
| Threat level | Why it's vulnerable | Solution |
|---|---|---|
| High | AI can instantly generate tailored CVs and cover letters, severing the link between writing quality and candidate skill | Only use CV screening for low-volume experienced hires |
Phone and video interviews share the same core vulnerability: candidates can prepare AI-generated answers in advance to use during the interview. On a phone call there's nothing stopping a candidate from reading answers from a screen. Video interviews require a bit more subtlety, but hiding notes or a second screen is straightforward enough that most candidates can manage it without detection.
Discover how video interviews are susceptible to AI assistance.
One-way asynchronous video interviews (where the candidate records their responses to questions that appear on the screen) create the perfect environment for AI assistance. Candidates can input questions into AI tools, receive polished responses, and deliver them as scripted answers. While they still need reasonable presentation skills, the content quality no longer reflects their actual thinking or knowledge.
Even live two-way interviews aren't immune. Some candidates now use tools that listen in and suggest answers as they speak. Even Google CEO Sundar Pichai has encouraged his hiring managers to return to in-person interviews specifically to combat this problem.
Want to hear how far it's already gone? Some candidates are even experimenting with deepfake technology, raising serious concerns about identity verification.
Phone and video interviews sit in an awkward middle ground. They're too vulnerable to AI to be a reliable screening tool, too time-consuming to justify as a light-touch step, and less effective than in-person interviews for assessing the interpersonal qualities that actually matter. Their role in a well-designed process is a narrow one: a brief check that the person is broadly who they claim to be, that they can communicate clearly, and that it's worth bringing them in. Anything beyond that is better handled either by upstream assessment or an in-person interview.
| Threat level | Why it's vulnerable | Solution |
|---|---|---|
| Medium-High | Candidates can script answers using AI; real-time AI assistance tools make even live formats vulnerable | Use an AI-resistant screening method to create your shortlist; keep phone and video as a light confirmation before in-person interviews |
Online aptitude tests have historically been one of the most effective early-stage selection tools because they're objective, scalable, and strongly predictive of job performance.
However, traditional assessments that rely on static content - question text, images, and answer options - are particularly vulnerable to AI tools. Candidates can input text or screenshots and the AI tool will return correct answers in seconds.
Editor's note: Since this video was produced (2024), as predicted, the capabilities of AI have advanced significantly. This shift means AI-based cheating is no longer a fringe concern, but a major threat to the integrity of traditional assessments. As such, gamified assessments have become key to limiting these effects.
Modern AI models score in the top percentiles on many standard verbal, numerical, and logical reasoning assessments, and their capabilities will likely continue to improve.
Without robust anti-cheating measures for traditional assessments, you're no longer measuring candidate ability but instead their willingness and ability to use AI tools effectively. This means your assessment data may no longer be reliable.
Some platforms try to fix this by adding strict monitoring features like webcam recording and screen tracking. But these solutions often damage the candidate experience. They increase anxiety, raise privacy concerns, and can lead to higher dropout rates.
Cognitive ability is still the number one predictor of performance in the workplace, so ability tests remain incredibly valuable; the problem is the format, not the concept.
| Threat level | Why it's vulnerable | Solution |
|---|---|---|
| Medium-High | AI can solve most traditional test questions with high accuracy; static formats are easily compromised by screenshot and copy-paste methods | Switch to AI-resistant gamified assessments or implement comprehensive anti-cheating measures |
The most effective response to AI cheating in assessments isn't better detection. It's designing tests that AI can't assist with in the first place. That's the thinking behind Test Partnership's MindmetriQ assessments.
MindmetriQ measures cognitive ability, the same construct that makes traditional aptitude tests valuable, but uses interactive mechanics that require genuine human engagement. Candidates drag, rotate, and respond to elements that move and change in real time. Because the content is dynamic and fast-paced, there's no time to screenshot or copy into an AI tool. The standard cheating methods simply don't have anything to work with.
The benefits go beyond just AI resistance. Each test takes just four to six minutes (compared to 20 minutes for traditional tests) while maintaining the same level of predictive accuracy and reliability. They're optimised for mobile devices, reduce candidate fatigue, and provide a more engaging experience that improves completion rates.
If you want to eliminate AI cheating from your assessment process, Test Partnership's MindmetriQ assessments are the ones to use. They deliver the predictive accuracy you need while avoiding the integrity risks of traditional formats.
| Threat level | How it's AI-resistant | Key benefits |
|---|---|---|
| Low | Dynamic, interactive content makes it extremely difficult to use AI tools; results reflect genuine human ability | Faster completion, higher engagement, mobile-optimised, scientifically validated for job performance prediction |
Face-to-face interviews remove most of these vulnerabilities. Candidates can still rehearse answers and prepare thoroughly, but they can't read from a script or use real-time AI assistance without it being immediately obvious. What you observe is the actual person: how they think on their feet, how they handle unexpected questions, and how they communicate in a real interaction.
For assessing cultural fit, interpersonal skills, and genuine motivation, in-person is still the most effective interview format available and should be a staple of any hiring process if possible.
The main constraint is practical. Face-to-face interviews take more time and coordination, which means they work best at the final stages of a process where the candidate pool has already been reduced to a genuine shortlist. That's why pairing them with strong early-stage assessment matters: the more rigorous your screening upstream, the more valuable your interview time becomes.
| Threat level | How it's AI-resistant | Best used for |
|---|---|---|
| Low | No opportunity to use AI assistance undetected; candidates are assessed in real time, in person | Final-stage assessment of communication, interpersonal skills, and cultural fit |
Assessment centres and supervised work sample tests sit at the rigorous end of the selection toolkit. Candidates complete job-relevant tasks, exercises, or simulations in a controlled environment, under observation. Because everything happens in person and in real time, AI assistance isn't a realistic option.
The data they produce is strong. Work samples in particular are highly predictive of job performance because they measure how candidates actually approach relevant tasks, not just how well they can talk about them. Assessment centres also allow multiple assessors to evaluate the same candidates across different exercises, which can reduce individual bias and improve the reliability of hiring decisions.
However, they are extremely costly. Running an assessment centre requires significant time, coordination, and resources, which makes them most appropriate for final-stage or high-value hiring where that investment is justified.
| Threat level | How it's AI-resistant | Best used for |
|---|---|---|
| Low | Supervised, in-person format means AI assistance isn't a realistic option; tasks are observed in real time | Final-stage or high-value roles where investment is justified and resources are available |
Here's a recap of the vulnerability status of the different recruitment methods discussed:
| Method | Threat level | Why |
|---|---|---|
| CVs and written applications | High (high-volume roles) | AI produces polished, tailored applications regardless of actual candidate ability |
| Phone and video interviews | Medium-high | Candidates can script answers using AI in advance; real-time AI assistance tools make even live formats vulnerable |
| Traditional online assessments | Medium-high | Static formats are solvable in seconds via screenshot and AI input |
| MindmetriQ assessments | Low | Dynamic, interactive content blocks standard AI input methods |
| Face-to-face interviews | Low | No opportunity to use AI assistance undetected in a real-time, in-person setting |
| Assessment centres and work samples | Low | Supervised, in-person tasks observed in real time make AI assistance impractical |
The three AI-resistant methods each serve a different stage of the process. MindmetriQ handles early-stage screening at scale, turning large applicant pools into defensible shortlists quickly and objectively. Face-to-face interviews assess the person directly once ability has already been validated. Assessment centres and work samples provide the most rigorous evaluation available for final-stage or high-value decisions.
Used together, these methods give you a process where AI assistance is practically irrelevant at every stage. Rather than worrying about whether AI detection tools are working, organisations can build a process where AI simply isn't invited.
Demo our AI-resistant MindmetriQ assessments to see how they work and the science behind them.