
How to prevent AI cheating in candidate assessments

Written by Ben Schwencke

Most candidates will use AI to help them in assessments, particularly in unsupervised, early-stage hiring. This undermines the accuracy of assessments, making it harder for employers to gauge a candidate’s genuine abilities.

It's hard to eliminate AI cheating entirely, especially given its ever-increasing sophistication, but there are measures that make it difficult, detectable, and not worth the effort.

We will cover the most popular AI cheat-prevention methods on the market and suggest what an effective anti-cheating assessment looks like.

There are many methods that, when combined, offer strong AI-cheating prevention

Adaptive item banks

Adaptive item banks reduce traditional answer sharing by ensuring candidates don't see the same questions.

Questions are drawn from large pools and adjust to the candidate's demonstrated ability level: candidates are shown easier or harder questions based on their performance so far, which helps home in on their true ability level. Because no two candidates see the same set of questions, test content is far harder to leak online, and this should be a baseline feature of any serious assessment platform. However, adaptive item banks alone don't prevent AI cheating, as candidates can still feed questions into AI tools in real time to receive answers. They therefore need to be combined with methods that specifically target AI cheating.
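In code terms, the selection loop above can be sketched as follows. This is a deliberately minimal illustration, not any platform's actual algorithm: the item bank, the fixed step-size ability update, and all names are assumptions for the example.

```python
# Minimal sketch of adaptive item selection (illustrative only).
def next_item(bank, ability, asked):
    """Pick the unseen item whose difficulty is closest to the ability estimate."""
    candidates = [item for item in bank if item["id"] not in asked]
    return min(candidates, key=lambda item: abs(item["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    """Move the estimate up after a correct answer, down after an incorrect one."""
    return ability + step if correct else ability - step

# A tiny bank of five items spanning easy (-2) to hard (+2).
bank = [{"id": n, "difficulty": d} for n, d in enumerate([-2, -1, 0, 1, 2])]
ability, asked = 0.0, set()
for correct in [True, True, False]:  # simulated candidate responses
    item = next_item(bank, ability, asked)
    asked.add(item["id"])
    ability = update_ability(ability, correct)
```

Real platforms use far more sophisticated estimation (typically item response theory), but the principle is the same: each answer narrows the estimate, and each candidate sees a different path through the bank.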

Time tracking

Time data shows how long each candidate took on each question. This can be used to find unusual or suspicious patterns: excessively short or long responses, or certain timing patterns, may indicate attempts at using AI tools.

This data adds useful context to scores and helps identify cases that may warrant closer inspection, but it cannot be used as standalone proof of cheating.
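A simple version of this check can be sketched as below. The thresholds and field names are assumptions for illustration, not a published detection rule; flagged questions only mark a candidate for human review, never automatic rejection.

```python
from statistics import median

def flag_suspicious_times(candidate_times, cohort_times, fast=0.25, slow=4.0):
    """Flag questions answered far faster or slower than the cohort median."""
    flags = {}
    for question, seconds in candidate_times.items():
        med = median(cohort_times[question])
        if seconds < fast * med:
            flags[question] = "unusually fast"  # e.g. a pasted-in answer
        elif seconds > slow * med:
            flags[question] = "unusually slow"  # e.g. consulting an external tool
    return flags

# Cohort response times in seconds per question.
cohort = {"q1": [30, 40, 50], "q2": [60, 70, 80]}
print(flag_suspicious_times({"q1": 5, "q2": 75}, cohort))
```

Here a 5-second answer to a question with a 40-second median gets flagged, while a 75-second answer against a 70-second median does not.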


Applicant agreement and AI warning

Clear warnings are surprisingly effective at reducing AI use before the assessment begins.

Candidates are asked to confirm they won't use external help and are informed that behaviour may be monitored or verified later. Simple, but worthwhile. Many candidates won't risk using AI if they believe there's a realistic chance of being caught.

Browser switching and mouse tracking

Behaviour tracking helps identify when candidates may be using AI tools during testing.

This feature detects if a candidate’s mouse moves outside the test window, helping to track whether additional tabs or browsers are being used during the assessment.

These signals are most useful when reviewed as patterns rather than isolated events. Consistent mouse-outs on every question are a strong indicator of cheating, whereas a couple of standalone instances are not. As such, they are best used in combination with other methods to create a clearer picture.
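The pattern-over-incident principle can be sketched as a simple aggregation. The 80% threshold and event format are illustrative assumptions; the point is that the unit of analysis is the share of questions affected, not any single event.

```python
# Illustrative review rule: flag consistent per-question mouse-outs,
# not isolated events (threshold is an assumption, not a validated cut-off).
def review_mouse_outs(events, num_questions, threshold=0.8):
    """Return the share of questions with mouse-out events and a review flag."""
    questions_with_outs = {event["question"] for event in events}
    share = len(questions_with_outs) / num_questions
    return {"share": share, "needs_review": share >= threshold}

# Mouse left the test window on every one of five questions.
events = [{"question": q} for q in ("q1", "q2", "q3", "q4", "q5")]
print(review_mouse_outs(events, num_questions=5))
```

A candidate with two mouse-outs across twenty questions would fall well under the threshold and attract no flag, which matches the guidance above.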


Screen recording and webcam monitoring

These methods involve continuous or periodic capture of the desktop and the user's webcam in order to catch possible cheating.

These methods can be effective and deter candidates from attempting to cheat, but they are not recommended. They raise significant privacy concerns, increase candidate drop-off, create accessibility issues, and harm the employer brand - while still being bypassed by any candidate using a phone off-screen. Recording data also requires human or AI review, adding operational overhead for limited marginal gain.

Gamified assessments

Gamified assessments reduce AI cheating because their game design makes them very difficult to use AI tools on.

They're interactive, time-sensitive, and the test screen is constantly changing.

Candidates can't easily pause, paste a question into ChatGPT, and return with a usable answer, because there's no static question to paste. This shifts the approach from detecting cheating to preventing it through design, which is a meaningfully different problem to solve.

This allows you to test the genuine ability of your entire candidate pool, without needing to remove people who may have been strong candidates regardless of the extra assistance they sought.

However, it's important to choose a test publisher whose gamified assessments measure the constructs you actually want to measure. Historically, gamified tests have focused on personality traits rather than cognitive ability. Choosing a test publisher that offers gamified cognitive ability assessments, like Test Partnership, is one of the most effective approaches to combating AI cheating.

Verification assessments

Verification assessments confirm whether a candidate can reproduce their earlier performance.

A shorter follow-up assessment is used later in the process, typically for your finalists. If scores diverge significantly from the original, that's a reliable signal something is off.
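As a sketch, "diverge significantly" can be expressed relative to the test's standard error of measurement (SEM). The 1.5-SEM tolerance below is an illustrative assumption; real cut-offs depend on the reliability of the specific test.

```python
# Sketch of a verification check: flag follow-up scores that fall further
# below the original than measurement error alone would explain.
def scores_diverge(initial, verification, sem, tolerance=1.5):
    """Return True when the drop from initial to verification score
    exceeds the chosen tolerance in SEM units."""
    return (initial - verification) > tolerance * sem

print(scores_diverge(initial=120, verification=95, sem=10))   # large drop
print(scores_diverge(initial=120, verification=112, sem=10))  # within error
```

Only the first case would be flagged: a 25-point drop against a 10-point SEM is hard to attribute to chance, whereas an 8-point drop is ordinary retest variation.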

Even the threat of verification assessments being used can be enough to dissuade candidates from considering cheating.


That said, there are two limitations worth noting. First, verification assessments don't protect the fairness of your initial screening stage - if a large proportion of candidates used AI to reach your finalist pool, the damage is already done. Second, there's an admin cost to running follow-up assessments for every finalist, which adds time to an already busy stage of the process.

As verification assessments shift the cheat-detection downstream, it is important to combine them with a rigorous method during your earlier screening stage.

What a good anti-cheating assessment setup looks like

The strongest approach combines prevention, detection, and verification. At a minimum, your setup should include:

  1. Adaptive item banks
  2. Non-intrusive monitoring such as time tracking and mouse tracking
  3. Applicant agreement and AI warning

If using traditional assessments, you should add verification tests on top of the three features above to provide downstream protection by re-checking scores.

However, well-designed gamified assessments with adaptive item banks would be the ideal assessment choice - reducing the opportunity to cheat rather than simply catching it after the fact.

Many legacy platforms rely heavily on monitoring tools to compensate for weaker assessment design. The more effective approach is to improve the design first, then layer in targeted controls where needed.

Pro Tip

Combining gamified assessments with verification tests is one of the most robust ways to prevent and catch cheating on candidate assessments. The gamified assessment handles the early screen - fast, engaging, and AI-resistant by design - while the verification stage can use a traditional format for variety.

Conclusion and next steps

Monitoring-based approaches catch AI cheating after the fact. That means removing candidates from your pipeline who've already been assessed - some of whom may have been strong hires regardless of the help they sought. Monitoring tells you who cheated. It doesn't tell you how good they actually were.

AI-resistant assessment design solves a different problem. When AI assistance provides no meaningful advantage, every candidate performs to their actual ability, scores reflect genuine capability, and nobody leaves your pipeline because they sought help that didn't work.

If AI-assisted cheating is a concern in your process, the most direct fix is assessment formats that are harder to game by design — not more surveillance.

Explore our MindmetriQ series of gamified assessments, combining adaptive item banks with AI-resistant design to measure numerical, verbal, and logical reasoning.

Primary author

Ben Schwencke

Chief Psychologist at Test Partnership. MSc in Organisational Psychology with over ten years' experience in psychometric testing.