Is your recruiting AI fit for purpose?
The EU AI Act classifies AI used in employment as High-Risk by default. GDPR adds a parallel layer: lawful basis, special category data restrictions, data minimisation, and automated decision-making rights. Both apply simultaneously — and both carry significant fines.
EU AI Act fines: up to €35M or 7% of global turnover (prohibited practices); up to €15M or 3% (high-risk obligations); up to €7.5M or 1% (supplying misleading information) · GDPR fines: up to €20M or 4% of global turnover
CV Screening
We use an AI-powered CV screener that ranks candidates based on pattern matching against historical successful hires. The system provides a match score 0–100 that is used to automatically filter out candidates below a threshold before human review.
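A threshold that auto-rejects candidates before any human review is exactly the pattern GDPR Art. 22 targets. The sketch below (the names, threshold, and stage labels are all hypothetical, not taken from any real product) shows one way to keep a score-based triage while ensuring every candidate still reaches a human decision stage.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    match_score: int  # 0-100, produced by a hypothetical screening model

REVIEW_THRESHOLD = 60  # illustrative value only, not a recommendation

def triage(candidate: Candidate) -> str:
    """Route every candidate to a human decision stage.

    Auto-rejecting below-threshold candidates with no human review is the
    pattern that can trigger GDPR Art. 22; here low scorers are queued for
    human review instead of being silently dropped.
    """
    if candidate.match_score >= REVIEW_THRESHOLD:
        return "shortlist_for_human_review"
    return "flag_for_human_review"  # never "auto_reject"
```

For the human review to count as "meaningful" rather than rubber-stamping, reviewers also need the authority and information to overturn the score.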
AI Interviews
Our platform conducts automated video interviews where candidates answer pre-recorded questions. An AI analyses facial expressions, tone of voice, and word choice to generate a personality and suitability score.
Candidate Scoring
We use a predictive analytics tool that combines LinkedIn data, academic records, and assessment results to generate a predicted performance score. Recruiters see this score before reading CVs.
Recruiting Chatbot
Our recruiting chatbot pre-screens candidates via WhatsApp or webchat, asks qualifying questions, and automatically progresses or rejects candidates based on their responses before any human sees the application.
Online Proctoring
We require candidates to complete online assessments monitored by AI proctoring software that tracks eye movements, keystroke patterns, and facial expressions to detect cheating or suspicious behaviour.
Emotion Detection
During video interviews, we use emotion AI software to detect candidates' emotional states and infer traits like stress resilience, enthusiasm, and honesty from micro-expressions and vocal tone.
Reference & Background Check AI
We use an AI tool that automatically scrapes and scores candidates based on online presence, court records, credit history, and employment verification data. The system flags candidates for review or rejection before a human sees the findings.
Job Description & Bias Detection AI
We use AI to generate job descriptions and audit them for biased language. The tool rewrites or scores JDs before publication and may suggest changes to role requirements, seniority levels, or preferred candidate profiles.
Workforce Planning & Headcount AI
We use predictive AI tools to model future headcount needs, identify redundancy risk, or recommend restructuring based on performance, cost, and skills data. Outputs influence decisions about which roles or individuals are retained or displaced.
Internal Mobility & Promotion AI
We use an AI system to identify internal candidates for open roles, flag employees for promotion, or score readiness for advancement. The tool analyses performance data, skills assessments, and career history to generate recommendations.
Banned AI Practices
Prohibited · Art. 5
These uses are absolutely forbidden in the EU regardless of consent or business justification. A single "Yes" here is a critical violation requiring immediate redesign.
Art. 5(1)(a)
Does your AI system use subliminal techniques that operate below a person's consciousness to materially distort their behaviour in ways that may cause harm?
Art. 5(1)(c)
Does your system score, rank, or classify people based on their social behaviour or personal characteristics in a way that leads to detrimental treatment unrelated to the context in which the data was collected?
Art. 5(1)(h)
Does your AI system perform real-time remote biometric identification (e.g. facial recognition) of individuals in publicly accessible spaces in the context of screening or monitoring candidates?
Art. 5(1)(f)
Does your AI system infer the emotions of candidates or workers in the workplace (e.g. during video interviews or ongoing monitoring), other than for medical or safety reasons?
Art. 5(1)(g)
Does your AI system use biometric categorisation to infer sensitive attributes such as political opinions, religious beliefs, race, or sexual orientation of candidates?
Art. 5(1)(b)
Does your AI system exploit the vulnerabilities of specific groups (age, disability, socioeconomic situation) to materially distort a candidate's or worker's behaviour in a way that causes harm?
Employment & Worker Management AI
High Risk · Annex III §4
AI systems used for recruitment, selection, promotion, termination, or task allocation are explicitly listed as High-Risk. If your system falls here, extensive obligations apply.
Annex III §4(a)
Does your AI system make or meaningfully contribute to decisions about advertising, targeting, or filtering of job vacancies to specific individuals?
Annex III §4(a)
Does your AI system assist in screening, filtering, scoring, or ranking of applications, CVs, or candidates during recruitment?
Annex III §4(a)
Does your AI system assess, evaluate, or score candidates during job interviews or assessment processes?
Annex III §4(b)
Does your AI system monitor employee performance, allocate tasks, or make decisions about promotions, pay, or termination?
Compliance Requirements
Art. 8–15 · Art. 26–27
If you operate a High-Risk AI system, these are your mandatory obligations. Non-compliance with any of these constitutes a breach of the Regulation.
Art. 9
Do you have a documented risk management system in place that is continuously reviewed throughout the AI system's lifecycle?
Art. 10
Are your AI training, validation, and testing datasets governed by data governance practices that address potential biases?
Art. 11
Do you maintain comprehensive technical documentation as specified in Annex IV before placing your AI system on the market?
Art. 13
Is your AI system designed to be sufficiently transparent so that deployers can interpret outputs and use the system appropriately?
Art. 14
Does your system allow for effective human oversight, including the ability for humans to override, interrupt, or correct decisions?
Art. 15
Has your AI system been tested for accuracy, robustness, and cybersecurity, including resilience against attempts to manipulate outputs?
Art. 43 / Art. 49
Has your AI system undergone a conformity assessment and been registered in the EU database before deployment?
Art. 27
As a deployer, do you conduct a Fundamental Rights Impact Assessment (FRIA) before deploying the AI system?
Candidate Transparency
Art. 50 · GDPR Art. 22
Even for lower-risk tools, candidates interacting with AI have specific rights to know they're talking to a machine and to receive meaningful information about automated decisions.
Art. 50(1)
Are candidates clearly informed when they are interacting with an AI system rather than a human (e.g. in chatbot screening or AI-conducted interviews)?
GDPR Art. 22 + Art. 13 AI Act
Are candidates provided with meaningful information about the logic, significance, and consequences of automated decision-making that affects them?
Art. 86
Do affected candidates or workers have the right to obtain an explanation for AI-generated decisions that significantly affect them?
Art. 50(4)
If your AI generates synthetic content (e.g. AI-generated job descriptions, candidate-facing emails), is this content appropriately disclosed as AI-generated?
Art. 26(7) (workers' information)
Have you informed worker representatives and affected workers, and consulted them where national law or collective agreements require it, before deploying AI systems that monitor or evaluate workers?
Lawful Basis & Special Category Data
GDPR Art. 6 · Art. 9
Every piece of personal data processed by your AI must have a valid lawful basis under GDPR Art. 6, and special category data (biometrics, health, etc.) requires an additional condition under Art. 9. Recruitment creates particularly acute exposure.
GDPR Art. 6(1)
Have you identified and documented a valid lawful basis (consent, legitimate interests, legal obligation, etc.) for each category of personal data your AI system processes about candidates?
GDPR Art. 9(1)–9(2)
If your AI processes biometric data, health data, racial/ethnic origin, or other special category data, have you identified a valid Art. 9(2) condition (e.g. explicit consent, legal obligation)?
GDPR Art. 5(1)(b) · Art. 5(1)(c)
Is your AI system limited to processing only the minimum personal data necessary for the stated recruitment purpose (data minimisation), and do you avoid repurposing data collected for one role to assess candidates for another without a fresh lawful basis?
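One operational way to enforce data minimisation is a per-role field allow-list applied before any AI scoring. The sketch below is illustrative; the field names are hypothetical, and which fields are genuinely "necessary" is a question for your DPIA, not for code.

```python
# Hypothetical allow-list: only fields needed for the stated role may
# reach the scoring model. Everything else (photo, date of birth,
# social profiles, etc.) is stripped before processing.
ALLOWED_FIELDS = {"name", "work_history", "skills", "qualifications"}

def minimise(application: dict) -> dict:
    """GDPR Art. 5(1)(c) sketch: drop fields outside the allow-list
    so the AI never sees data it has no documented need for."""
    return {k: v for k, v in application.items() if k in ALLOWED_FIELDS}
```

An allow-list is safer than a block-list here: new, unanticipated fields are excluded by default rather than processed by default.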
GDPR Art. 13–14
Do candidates receive a clear privacy notice at the point of data collection that specifically mentions AI processing, profiling, and any automated decision-making?
Automated Decision-Making & Profiling
GDPR Art. 22 · Recital 71
Art. 22 gives candidates the right not to be subject to solely automated decisions with significant effects. This intersects directly with AI screening, scoring, and shortlisting, and is a growing focus of DPA enforcement and litigation in HR.
GDPR Art. 22(1)
Are candidates subject to decisions based solely on automated processing — including AI scoring or screening — that produce significant effects such as rejection or shortlisting, without meaningful human review?
GDPR Art. 22(2)–(3)
If you rely on an Art. 22 exception (contract necessity or explicit consent), have you implemented suitable safeguards including the right to obtain human intervention, to express a view, and to contest the decision?
GDPR Art. 4(4) · Recital 71–72
Does your AI system create profiles of candidates — combining data points about behaviour, performance, location, or characteristics — and have you addressed this profiling activity explicitly in your data governance?
Data Subject Rights
GDPR Art. 15–21
Candidates are data subjects with full GDPR rights — access, erasure, rectification, portability. AI systems that store candidate profiles, scores, or inferences must be built to support these rights operationally, not just in policy.
GDPR Art. 15
Can candidates submit a Subject Access Request (SAR) and receive, within one month, all personal data held about them — including AI-generated scores, inferences, and profile data?
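A common SAR failure mode is exporting only what the candidate submitted and omitting what the AI generated about them. The sketch below (store names and fields are hypothetical) shows an export that covers both raw and derived data.

```python
import json

# Hypothetical per-candidate stores. An Art. 15 response must include not
# just submitted data but also AI-generated scores, inferences and
# profile data held about the candidate.
submitted = {"cand-42": {"email": "a@example.com", "role_applied": "engineer"}}
derived   = {"cand-42": {"match_score": 77, "inferred_seniority": "mid"}}

def export_sar(candidate_id: str) -> str:
    """Assemble a GDPR Art. 15 export covering raw *and* derived data."""
    payload = {
        "candidate_id": candidate_id,
        "data_you_provided": submitted.get(candidate_id, {}),
        "data_we_generated": derived.get(candidate_id, {}),  # often forgotten
    }
    return json.dumps(payload, ensure_ascii=False, indent=2)
```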
GDPR Art. 17
Can candidates request erasure of their data ('right to be forgotten'), and does this erasure propagate through AI training datasets and model outputs where technically feasible?
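Since deleting a record from an already-trained model is rarely feasible, the usual operational pattern is: delete the source data, delete the derived scores, and exclude the ID from every future training run. The sketch below illustrates that pattern with hypothetical in-memory stores.

```python
# Hypothetical stores; in production these would be databases.
profiles = {"cand-42": {"cv_text": "...", "ai_score": 77}}
training_rows = {"cand-42": ["feature_vector_v1"]}
exclusion_list: set[str] = set()  # IDs barred from future training runs

def erase_candidate(candidate_id: str) -> None:
    """'Right to be forgotten' sketch: purge stores and block re-ingestion."""
    profiles.pop(candidate_id, None)
    training_rows.pop(candidate_id, None)
    exclusion_list.add(candidate_id)

def build_training_set() -> list:
    """Assemble training data, honouring erasure requests."""
    return [row for cid, rows in training_rows.items()
            if cid not in exclusion_list for row in rows]
```

The exclusion list is what makes erasure stick: without it, a later re-import from a backup or a vendor sync can silently resurrect the deleted profile.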
GDPR Art. 21
Can candidates exercise their right to object to processing based on legitimate interests — including AI profiling — and does your system have a mechanism to record and act on such objections?
GDPR Art. 5(1)(e)
Do you have and enforce a data retention policy that limits how long candidate data — including AI scores and profiles — is stored, and are unsuccessful applicants' data deleted within a defined period?
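A retention policy only counts if something actually enforces it. The sketch below shows a hypothetical scheduled sweep; the 180-day period is purely illustrative, and the right period is a documented policy decision, not a code default.

```python
from datetime import date, timedelta

RETENTION_DAYS = 180  # illustrative; pick and document your own period

candidates = [
    {"id": "c1", "status": "unsuccessful", "closed_on": date(2024, 1, 10)},
    {"id": "c2", "status": "hired",        "closed_on": date(2024, 1, 10)},
]

def retention_sweep(records: list, today: date) -> list:
    """GDPR Art. 5(1)(e) sketch: drop unsuccessful applicants' records
    (including AI scores and profiles) once the retention period lapses."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [r for r in records
            if not (r["status"] == "unsuccessful" and r["closed_on"] < cutoff)]
```

In practice such a sweep runs as a scheduled job, and its deletions should cascade to the same derived stores (scores, profiles, vendor copies) that a manual erasure request would reach.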
AI recruiting tools almost always involve third-party processors (SaaS vendors, model providers). You remain the controller — responsible for vendor due diligence, DPAs, and international transfer compliance. This is frequently where breaches occur.
GDPR Art. 35
Have you conducted a Data Protection Impact Assessment (DPIA) before deploying your AI recruiting system? (A DPIA is mandatory for systematic profiling with significant effects and for large-scale processing of special category data.)
GDPR Art. 28
Do you have a Data Processing Agreement (DPA) in place with every AI vendor or third-party tool that processes candidate personal data on your behalf?
GDPR Art. 44–49
If your AI vendor processes candidate data outside the EEA (e.g. US-based SaaS, cloud infrastructure), have you verified that an adequate transfer mechanism is in place (e.g. adequacy decision, SCCs, BCRs)?
GDPR Art. 37–39
If your organisation processes personal data on a large scale as a core activity, have you appointed a Data Protection Officer (DPO) and involved them in the AI recruiting system design and review?
This tool is for informational and self-assessment purposes only and does not constitute legal advice. The EU AI Act (Regulation (EU) 2024/1689) and GDPR (Regulation (EU) 2016/679) are complex legal instruments — always engage qualified legal counsel before deploying AI systems in employment contexts. References to Articles are indicative and based on the Act as adopted in June 2024 and GDPR as enforced by EU/EEA Data Protection Authorities.