Pipe Puzzle
Candidates must re-arrange the pipes to connect the two end points. Avoid the solid obstacles and make sure you have enough pipes. Typical time to complete: 6 minutes.
g facets: spatial scanning, visual memory, flexibility of closure.
Candidates spin the wheel so that each symbol is next to one of the same shape or the same shading, but not both. Make sure you check every symbol. Typical time to complete: 4 minutes.
g facets: speeded rotation, visualization, serial perceptual integration.
Candidates are asked to analyse verbal analogies and select the best option to complete the logical comparison. Typical time to complete: 6 minutes.
g facets: reading decoding, processing verbal information, cloze reasoning (missing information).
Candidates are presented with a pair of words and must swipe to indicate whether the words are approximate synonyms, antonyms, or neither. Typical time to complete: 4 minutes.
g facets: lexical knowledge, processing verbal information, grammatical sensitivity.
Candidates must drag the highlighted net over the numbers which, when summed, give the largest possible answer. Remember the highest number may be negative. Typical time to complete: 6 minutes.
g facets: quantitative reasoning, working memory capacity, visual processing.
Candidates must check which numbers are falling and collect those which sum to the target number. Be careful to avoid the other falling numbers. Typical time to complete: 6 minutes.
g facets: quantitative reasoning, perceptual speed, memory span.
Gamified assessments have a lot of potential, if they’re developed properly, according to evidence-driven scientific methods and industry-standard guidelines. And that's what we've done with the MindmetriQ™ series of gamified assessments. We've used advanced psychometric methods and calibration data from tens of thousands of test completions to bring rigorous science to the candidate-first world of gamified assessments.
The driving principle behind the MindmetriQ™ series is the same principle behind all popular cognitive ability tests: general cognitive ability (defined by Spearman as 'g') is widely accepted as the single best predictor of job performance1. So accurately measure g and you can predict someone's likely performance in a job. To achieve a robust measure of g, the MindmetriQ™ series measures six established sub-traits of general cognitive ability to build a multi-faceted, stable g-metric. All six gamified assessments have a high g-loading, meaning performance on each closely reflects the underlying variable being measured.
Each of the MindmetriQ gamified assessments has, on its own, validity and reliability properties roughly equivalent to a traditional aptitude test (e.g. a numerical reasoning test, verbal reasoning test, etc.). However, each of our gamified assessments takes between four and seven minutes, so you can easily include a few, or all six, to form a powerful battery of gamified assessments and build a robust measure of g.
MindmetriQ measures multiple constituent parts of g and combines them to report a robust measure of pure g.
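For readers curious about the arithmetic behind a multi-facet composite, here is a minimal sketch of one common approach (standardise each facet, then average the z-scores). The facet names and numbers below are made up for illustration; this is not MindmetriQ's actual scoring algorithm.

```python
from statistics import mean, stdev

def composite_g(facet_scores):
    """Average each candidate's z-scored facet results into one composite.

    facet_scores: dict mapping facet name -> list of raw scores,
    one entry per candidate (same candidate order in every list).
    """
    n_candidates = len(next(iter(facet_scores.values())))
    z = {}
    for facet, scores in facet_scores.items():
        m, s = mean(scores), stdev(scores)
        z[facet] = [(x - m) / s for x in scores]
    # Composite = mean of a candidate's z-scores across all facets
    return [mean(z[f][i] for f in z) for i in range(n_candidates)]

# Hypothetical raw scores for five candidates on three facets
scores = {
    "verbal":  [12, 15, 9, 20, 17],
    "numeric": [30, 33, 25, 40, 31],
    "spatial": [7, 9, 5, 11, 8],
}
composites = composite_g(scores)
```

Because each facet is standardised before averaging, no single facet's raw scale dominates the composite.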
Extensive validation studies, including academic achievement and established measures of cognitive ability such as ICAR, Insights, and composite measures (the Berlin Numeracy Test, the Cognitive Reflection Test, etc.), demonstrate significant correlations between MindmetriQ and cognitive ability, ranging from r = 0.45 to 0.69, p < .001.
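A validity correlation of this kind is just a Pearson r between scores on two measures. For anyone who wants to run the same check on their own data, here is a small self-contained sketch; the six candidate scores below are invented for illustration, not drawn from the studies cited above.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical paired scores: a gamified assessment vs. an established test
game_scores = [55, 62, 47, 70, 66, 58]
test_scores = [21, 25, 18, 29, 27, 22]
r = pearson_r(game_scores, test_scores)
```

In a real validation study you would also compute a p-value (e.g. via a t-test on r with n − 2 degrees of freedom) and use a far larger sample.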
Using Item Response Theory, MindmetriQ is reliable on two measures: person reliability and item reliability. Individually, all games have person reliability between 0.71 and 0.84, and when multiple games are combined this improves even further. These values are equivalent to meeting or exceeding a Cronbach's α of 0.7.
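For reference, the Cronbach's α benchmark mentioned above is computed from item-level score variances. The sketch below shows the classic formula on a tiny made-up dataset (three items, four candidates); it is not MindmetriQ calibration data, and MindmetriQ's own reliability figures come from IRT, not from this formula.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a list of per-item score lists.

    item_scores: one list per item, same candidates in the same order.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))
    """
    k = len(item_scores)
    totals = [sum(vals) for vals in zip(*item_scores)]
    sum_item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical scores: three items answered by four candidates
items = [
    [1, 2, 3, 4],
    [1, 2, 3, 4],
    [2, 2, 3, 5],
]
alpha = cronbach_alpha(items)
```

When items rank candidates consistently, as in this toy example, α approaches 1; values above 0.7 are the conventional threshold for acceptable reliability.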
Protected groups (including gender, ethnicity, age, and sexual orientation) are treated fairly by MindmetriQ. Used in line with our recommendations, MindmetriQ demonstrates small to negligible Cohen's d group differences. All assessments are colour agnostic, device agnostic, and screen-reader compatible.
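Cohen's d is the standard effect-size measure behind these adverse-impact checks: the difference in group means divided by the pooled standard deviation. Here is a minimal illustration with invented scores for two hypothetical groups (by convention, |d| < 0.2 counts as a small or negligible difference).

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical assessment scores for two demographic groups
group_a = [50, 52, 49, 51, 50, 48]
group_b = [49, 51, 50, 52, 48, 50]
d = cohens_d(group_a, group_b)
```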
The MindmetriQ series has been calibrated using live data from over 19,000 people to achieve statistically significant measures. We then correlated their performance with existing measures of cognitive ability. We trialled over 900 questions (items) to create the MindmetriQ series, and each item collected at least 250 responses to achieve reliable calibration statistics.
We collected data on candidate experience and found that the MindmetriQ series (μ = 7.15, σ = 2.20) was considered significantly more engaging than traditional, non-gamified assessments (μ = 6.26, σ = 2.31); t(325) = 7.13, p < .01. The short time to administer each assessment (between 4 and 7 minutes) also helps to reduce candidate drop-out rates.
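The t(325) statistic above comes from comparing ratings given by the same candidates to both formats, i.e. a paired-samples t-test. The sketch below shows that calculation on a small set of invented ratings (eight hypothetical candidates, a 0–10 engagement scale), not the study's actual data.

```python
from statistics import mean, stdev
from math import sqrt

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom.

    x, y: ratings from the same candidates under two conditions.
    t = mean(differences) / (sd(differences) / sqrt(n)), df = n - 1.
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

# Hypothetical engagement ratings from the same eight candidates
gamified    = [8, 7, 9, 6, 8, 7, 9, 8]
traditional = [7, 6, 8, 6, 7, 6, 8, 7]
t_stat, df = paired_t(gamified, traditional)
```

The resulting t statistic would then be compared against the t-distribution with df degrees of freedom to obtain the p-value.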
When asked to rate their experiences of taking both the MindmetriQ assessments and traditional psychometric tests, 81% of candidates (n = 322) found MindmetriQ to be less anxiety-provoking than traditional assessments. It's also easy for us to skin the MindmetriQ series to your company branding for a more memorable candidate experience.
The MindmetriQ™ Series of gamified assessments are based on tasks which require high cognitive loading, with each assessment focusing on a particular element of cognitive ability (for example verbal reasoning, numerical reasoning, etc.).
General cognitive ability has been shown to be the single biggest predictor of job performance1. The MindmetriQ assessments are all scientifically validated measures of general cognitive ability, validated using strict standards of test construction.
When used in accordance with our guidance, the MindmetriQ assessments are a series of powerful and legally-defensible assessments of candidate ability.
As part of test construction, we ran extensive adverse impact studies to ensure that MindmetriQ gamified assessments do not unfairly discriminate against any legally protected group (e.g. gender, age, ethnicity, etc.). Using results from tens of thousands of test completions, we can confidently demonstrate that MindmetriQ does not unfairly discriminate against any protected group.
It’s no secret that traditional ability tests aren’t particularly fun to complete. This is especially true for candidates who experience test anxiety and find testing experiences stressful. For others, boredom is the issue, which increases the likelihood of candidate attrition. MindmetriQ™, however, has been designed to be both more enjoyable and less anxiety-provoking than traditional assessments, providing a more positive candidate experience. This helps ensure that candidates stick with the assessment process, reducing candidate attrition. Similarly, a great candidate experience encourages applicants to speak positively about the recruitment process, encouraging further applications.
Unlike most traditional aptitude tests, MindmetriQ is designed for use on mobile devices. Increasingly, Gen Z are turning away from desktop and laptop devices, which are losing ground to ever-improving mobile technology. Requiring that candidates complete assessments on a laptop or desktop device, therefore, may come as an unwelcome surprise. Moreover, young people from lower socioeconomic status backgrounds may not have immediate access to a laptop or desktop computer, discouraging them from applying. MindmetriQ™, however, is fully mobile-enabled, and is just as easily completed on a mobile device as it is on a laptop or desktop computer.
We have a large suite of off-the-shelf assessments, as well as bespoke options.