
AI Hiring Legal Glossary: Bias, GDPR, and Compliance Terms
Deploying AI in your recruiting funnel without understanding its legal vocabulary is the fastest way to convert an efficiency investment into a discrimination liability. The terms in this glossary define the compliance perimeter for every AI-assisted hiring decision — from the moment a resume is parsed to the moment an offer is extended. This resource supports the broader strategy laid out in Talent Acquisition Automation: AI Strategies for Modern Recruiting. Bookmark it, share it with your legal team, and use it as a pre-deployment checklist before any AI screening tool goes live.
Core Terms: Bias and Discrimination
Bias and discrimination law form the primary legal risk layer in AI hiring. These terms define what the EEOC can act on, what plaintiffs can sue over, and what your audit program must detect and document.
AI Bias
AI bias is a systematic, repeatable error in an algorithm’s outputs that produces unfair outcomes across protected characteristics — race, gender, age, disability, or national origin. It is not random noise; it is a structural flaw in the model’s logic or training data.
How It Happens: When a hiring model trains on historical data reflecting past human decisions, it absorbs the demographic patterns of those decisions. If historically your company hired 80% male engineers, an uncorrected model learns to score male-presenting signals higher — not because of explicit instructions, but because the correlation exists in the data.
Why It Matters: AI bias creates direct exposure under Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA). The EEOC has issued technical assistance guidance confirming that employers — not AI vendors — bear primary liability for discriminatory tool outputs. McKinsey research on AI risk identifies bias in automated systems as one of the highest-consequence failure modes in enterprise AI deployment.
Mitigation: Diverse and representative training datasets, pre-deployment bias testing across demographic groups, and ongoing algorithmic audits after deployment. See our guide on Combat AI Hiring Bias: Ethical Strategies for Talent Acquisition for a step-by-step mitigation framework.
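One pre-deployment test worth automating is a proxy scan: checking whether seemingly neutral candidate features correlate strongly with a protected attribute. Below is a minimal sketch, assuming a tabular dataset with a labeled protected-group column; the column names and the 0.3 threshold are illustrative assumptions, not a legal standard.

```python
# Minimal pre-deployment proxy check: flag numeric candidate features
# that correlate strongly with a protected attribute. Column names and
# the 0.3 threshold are illustrative, not regulatory values.
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, protected_col: str,
                        threshold: float = 0.3) -> dict[str, float]:
    """Return features whose |correlation| with the protected attribute
    exceeds the threshold, strongest first."""
    numeric = df.select_dtypes("number").drop(columns=[protected_col])
    corr = numeric.corrwith(df[protected_col]).abs()
    flagged = corr[corr > threshold].sort_values(ascending=False)
    return flagged.to_dict()

# Toy data: 'zip_code_income_index' acts as a proxy here.
candidates = pd.DataFrame({
    "is_protected_group":    [1, 1, 1, 0, 0, 0],
    "years_experience":      [4, 6, 5, 5, 6, 4],
    "zip_code_income_index": [0.2, 0.3, 0.25, 0.8, 0.9, 0.85],
})
print(flag_proxy_features(candidates, "is_protected_group"))
```

A flagged feature is not automatically unlawful, but it is a feature whose job-relatedness you should be prepared to defend before the model ships.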
Disparate Impact
Disparate impact is an unintentional discrimination theory: a facially neutral employment practice that produces a statistically significant difference in selection rates across protected groups. The practice itself need not be motivated by prejudice — only the outcome matters.
The Legal Standard: Under the EEOC’s Uniform Guidelines on Employee Selection Procedures, the four-fifths (80%) rule is the benchmark. If a protected group’s selection rate from an AI screening tool is below 80% of the rate for the group with the highest selection rate, adverse impact is presumed. The employer must then demonstrate the practice is job-related and consistent with business necessity.
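The arithmetic is simple enough to script and rerun on every audit cycle. Here is a minimal four-fifths check; the group names and selection counts are illustrative, not from any real dataset.

```python
# Worked four-fifths (80%) rule check, per the EEOC Uniform Guidelines.
# Applicant and selection counts below are illustrative.
def four_fifths_check(selected: dict[str, int], applied: dict[str, int]):
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    # Impact ratio: each group's selection rate relative to the highest.
    ratios = {g: rate / top for g, rate in rates.items()}
    flagged = [g for g, r in ratios.items() if r < 0.8]
    return rates, ratios, flagged

applied  = {"group_a": 200, "group_b": 150}
selected = {"group_a": 60,  "group_b": 27}   # rates: 0.30 vs 0.18

rates, ratios, flagged = four_fifths_check(selected, applied)
print(ratios)   # group_b: 0.18 / 0.30 = 0.60 -> below the 0.80 benchmark
print(flagged)  # ['group_b'] -> adverse impact is presumed
```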
AI Relevance: AI resume screening, automated assessments, and video interview scoring tools are all “selection procedures” under EEOC guidance. Employers cannot shield themselves from disparate impact claims by attributing outcomes to the vendor’s algorithm. Harvard Business Review has documented multiple cases where AI screening tools produced significant disparate impact before deployment audits caught the error.
Disparate Treatment
Disparate treatment is intentional discrimination — treating an individual differently because of a protected characteristic. In AI hiring, this most often arises when a tool is configured with explicit filters that screen out demographic groups (e.g., graduation year proxies for age, zip code proxies for race) or when a human reviewer applies different scrutiny to AI-flagged candidates based on demographic assumptions.
Unlike disparate impact, disparate treatment requires proof of discriminatory intent. However, intent can be inferred from tool configuration choices and documented communications about the tool’s design goals.
Adverse Impact
Adverse impact is the measurable condition — typically quantified by the four-fifths rule — that triggers legal scrutiny under disparate impact theory. It is the statistical output that demonstrates an AI screening tool is selecting (or rejecting) candidates unevenly across protected groups. Adverse impact is the signal; disparate impact is the legal theory built on that signal.
Practical Note: Adverse impact analysis should be run before deployment and at regular intervals thereafter. Selection rate data by demographic group must be documented and retained. SHRM recommends treating adverse impact reporting as a standing element of any AI tool vendor contract.
Transparency and Explainability
Transparency and explainability are both ethical principles and, increasingly, legal requirements. They define what you must be able to say about how your AI made a decision — and to whom.
Algorithmic Transparency
Algorithmic transparency is the principle that the existence, purpose, and general operating logic of an AI system should be openly disclosed to those it affects. It operates at the organizational and system level: what does this tool do, what data does it use, and who built it?
Transparency does not require revealing proprietary source code. It does require that candidates know an AI tool is being used in their evaluation, what categories of data it processes, and who is accountable for its decisions. New York City Local Law 144, effective January 2023, operationalizes this principle into a compliance mandate: employers using automated employment decision tools must conduct annual bias audits and publish the results.
Explainability (Right to Explanation)
Explainability operates at the individual decision level: why did the AI produce this specific output for this specific candidate? Under GDPR Article 22, candidates subject to purely automated decision-making that produces significant effects have the right to obtain “meaningful information about the logic involved.”
In practice, this means your AI hiring platform must be able to generate a candidate-facing explanation of the primary factors that drove a score or recommendation — not just a raw number. “Your application ranked in the 34th percentile” is not an explanation. “The primary factors evaluated were years of relevant experience and skills match to the job description” begins to meet the standard.
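What a factor-level explanation might look like in code: the sketch below assumes a hypothetical linear scoring model with named weights, which makes contributions trivially decomposable. Real tools with more complex models would need a model-appropriate attribution method (SHAP values, for example) to produce the same kind of output.

```python
# Candidate-facing explanation sketch, assuming a simple linear scoring
# model. Feature names, weights, and labels are hypothetical.
WEIGHTS = {"years_relevant_experience": 0.45,
           "skills_match_score": 0.40,
           "certifications": 0.15}

LABELS = {"years_relevant_experience": "years of relevant experience",
          "skills_match_score": "skills match to the job description",
          "certifications": "relevant certifications"}

def explain_score(features: dict[str, float], top_n: int = 2) -> str:
    # Per-feature contribution = weight * feature value.
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    ranked = sorted(contributions, key=contributions.get, reverse=True)
    factors = " and ".join(LABELS[f] for f in ranked[:top_n])
    return f"The primary factors in this evaluation were {factors}."

print(explain_score({"years_relevant_experience": 6.0,
                     "skills_match_score": 0.72,
                     "certifications": 1.0}))
```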
Deloitte’s responsible AI research frames explainability as a foundational governance control, not a post-hoc reporting feature. Build it into your vendor evaluation criteria before procurement.
Black Box Problem
The black box problem describes AI systems whose internal decision logic is opaque — even to their developers — because the model’s complexity (particularly in deep learning architectures) makes it impossible to trace any individual output back to a specific input or rule. In hiring, a black box tool produces scores or rankings without an auditable chain of reasoning.
Black box tools are legally high-risk. Without an auditable decision chain, employers cannot defend individual decisions, cannot demonstrate job-relatedness, and cannot satisfy GDPR Article 22 explainability requirements. When evaluating AI hiring vendors, require written documentation of explainability architecture as a contract condition.
Data Privacy and Legal Frameworks
Data privacy law creates the compliance perimeter for every data point you collect, store, and process about candidates. These frameworks impose obligations that exist independently of anti-discrimination law.
General Data Protection Regulation (GDPR)
The GDPR is the European Union’s comprehensive data protection regulation, effective May 2018. It applies to any organization — regardless of location — that processes personal data of EU residents. In hiring, every data point about an EU-resident job applicant is covered: resume content, assessment results, interview recordings, behavioral scores, and demographic data.
Key GDPR obligations for AI hiring:
- Lawful basis: Every data processing activity requires a documented lawful basis (consent, legitimate interests, contractual necessity, or legal obligation). Relying on consent for employment decisions is procedurally complex because consent must be freely given — which is difficult to establish when the power imbalance between employer and candidate is significant.
- Data subject rights: Candidates have the right to access their data, correct inaccuracies, request deletion (“right to be forgotten”), and object to processing. Your workflow must have a response mechanism for each right, operable within statutory timeframes (see the sketch after this list).
- Article 22 — Automated decision-making: Purely automated decisions with significant effects require explicit consent, contractual necessity, or authorization under Union or Member State law — plus the right to human review and the right to explanation.
- Accountability: Controllers must document all processing activities (Records of Processing Activities, or ROPA) and be able to demonstrate compliance on demand.
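The rights-response mechanism referenced above is, at minimum, a logged request with a hard deadline. A sketch under simplifying assumptions: GDPR Article 12 requires a response “without undue delay and in any event within one month,” which the 30-day figure below approximates; field names are illustrative.

```python
# Data-subject-rights request log with a statutory-deadline field.
# The 30-day due date is a simplification of GDPR's one-month rule.
from dataclasses import dataclass, field
from datetime import date, timedelta

RIGHTS = {"access", "rectification", "erasure", "objection"}

@dataclass
class RightsRequest:
    candidate_id: str
    right: str
    received: date
    due: date = field(init=False)

    def __post_init__(self):
        if self.right not in RIGHTS:
            raise ValueError(f"Unknown right: {self.right}")
        self.due = self.received + timedelta(days=30)

req = RightsRequest("cand-0042", "erasure", date(2025, 3, 1))
print(req.due)  # 2025-03-31: escalate well before this date
```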
For a detailed compliance workflow, see our satellite on Master GDPR/CCPA with Automated HR Compliance.
California Consumer Privacy Act (CCPA) / CPRA
The CCPA (amended by the California Privacy Rights Act, CPRA) is California’s comprehensive privacy law. It grants California residents — including job applicants — rights over their personal data. Employers covered by CCPA must provide a privacy notice at or before the point of data collection, honor deletion and access requests, and — under CPRA — restrict the use of sensitive personal information.
Key differences from GDPR: CCPA does not require a lawful basis for processing in the same explicit structure as GDPR. It is primarily a disclosure and opt-out framework rather than a consent-first framework. However, CPRA’s “sensitive personal information” category (which includes race, ethnicity, health data, and biometric data) imposes use-limitation requirements that overlap significantly with GDPR special-category data rules.
Data Minimization
Data minimization is the GDPR principle that organizations should collect only the personal data that is adequate, relevant, and limited to what is necessary for the specified purpose. In AI hiring, this principle is routinely violated by tools that harvest social media profiles, infer personality traits from writing style, or collect biometric data during video interviews — none of which is demonstrably necessary for most hiring decisions.
Why it matters beyond compliance: Every data point you collect that is not required for the decision is a data point that increases your breach liability, your deletion-request burden, and your adverse impact surface area. Gartner recommends treating data minimization as a default design constraint in AI tool procurement, not as a post-implementation cleanup activity.
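In engineering terms, a design constraint means minimization happens at intake, not as a cleanup job. A minimal sketch: a collection-time allowlist where the field names are illustrative and anything off the list is discarded before it is ever stored.

```python
# Data minimization as a design constraint: a collection-time allowlist.
# Field names are illustrative; the point is that unlisted fields are
# dropped before storage, not cleaned up afterwards.
ALLOWED_FIELDS = {"name", "email", "work_history", "skills", "education"}

def minimize(raw_application: dict) -> dict:
    """Keep only fields with a documented necessity for the decision."""
    dropped = set(raw_application) - ALLOWED_FIELDS
    if dropped:
        print(f"Discarded at intake (never stored): {sorted(dropped)}")
    return {k: v for k, v in raw_application.items()
            if k in ALLOWED_FIELDS}

application = {"name": "A. Candidate", "email": "a@example.com",
               "skills": ["python"], "social_media_handle": "@a",
               "inferred_personality": "ENTJ"}
print(minimize(application))
```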
Special Category Data
Special category data under GDPR is personal data that warrants heightened protection because of its sensitivity: racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, biometric data processed for unique identification, health data, and data concerning sex life or sexual orientation.
In AI hiring, special category data appears more often than teams expect: video interview tools that analyze facial geometry are processing biometric data; voice analysis tools may infer health conditions; personality assessments may reveal psychological characteristics. Processing any special category data requires explicit consent or another narrowly defined lawful basis, plus a Data Protection Impact Assessment (DPIA).
Data Retention and the Right to Erasure
Data retention refers to how long candidate data is stored after a hiring decision. Under GDPR, data should not be kept longer than necessary for the purpose for which it was collected. For unsuccessful candidates, this typically means a defined retention window — commonly six to twelve months — after which data must be deleted unless a specific legal obligation requires longer retention.
The right to erasure (Article 17 GDPR, “right to be forgotten”) allows candidates to request deletion of their personal data. AI hiring platforms must be technically capable of executing complete deletion — including any derived scores, model inputs, and training data contributions — within the statutory response window.
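Operationally, both obligations reduce to a scheduled retention sweep. The sketch below assumes a twelve-month window and a hypothetical delete_everywhere hook; as the text above notes, real erasure must also reach derived scores, model inputs, and training-data contributions, which this sketch only signals.

```python
# Retention sweep sketch: delete unsuccessful-candidate records older
# than the retention window. The 12-month window and the delete hook
# are assumptions for illustration.
from datetime import date, timedelta

RETENTION = timedelta(days=365)

def delete_everywhere(candidate_id: str) -> None:
    # Placeholder: deletion must span the ATS, derived scores,
    # model inputs, logs, and backups.
    print(f"Erased all data for {candidate_id}")

def sweep(records: list[dict], today: date) -> list[dict]:
    kept = []
    for rec in records:
        expired = today - rec["decision_date"] > RETENTION
        if expired and not rec.get("legal_hold", False):
            delete_everywhere(rec["candidate_id"])
        else:
            kept.append(rec)
    return kept

records = [{"candidate_id": "c1", "decision_date": date(2024, 1, 10)},
           {"candidate_id": "c2", "decision_date": date(2025, 6, 1)}]
print(sweep(records, today=date(2025, 7, 1)))  # c1 erased, c2 kept
```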
AI-Specific Legal Concepts
As AI hiring tools have proliferated, a distinct layer of AI-specific legal vocabulary has emerged — some from existing law applied to new contexts, some from new legislation targeting algorithmic systems specifically.
Automated Individual Decision-Making (AIDM)
Automated individual decision-making is any decision made exclusively by an algorithm, without meaningful human involvement, that produces a legal or similarly significant effect on a person. GDPR Article 22 restricts AIDM in hiring contexts. A fully automated resume rejection — where no human reviews the decision — is a textbook AIDM scenario.
The practical compliance response is a “human-in-the-loop” requirement: a qualified human reviewer must have genuine authority to override AI recommendations before a decision with significant consequences is finalized. A rubber-stamp review process where a human simply ratifies every AI output does not satisfy this requirement.
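One way to make the anti-rubber-stamp requirement concrete in a workflow is to refuse to finalize a decision without a named reviewer and a substantive recorded rationale. A minimal sketch; the field names and the crude length check are illustrative assumptions, not a legal test.

```python
# Human-in-the-loop gate sketch: an AI recommendation becomes a decision
# only after a named reviewer records an independent judgment.
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    ai_outcome: str          # e.g. "reject" or "advance"
    ai_rationale: str

@dataclass
class Decision:
    candidate_id: str
    final_outcome: str
    reviewer: str
    reviewer_rationale: str  # must be substantive, not "agree"

def finalize(rec: Recommendation, reviewer: str,
             final_outcome: str, rationale: str) -> Decision:
    if len(rationale.strip()) < 20:
        # Crude guard against rubber-stamping: demand a real rationale.
        raise ValueError("Reviewer rationale too thin to evidence review")
    return Decision(rec.candidate_id, final_outcome, reviewer, rationale)

rec = Recommendation("cand-07", "reject", "low skills-match score")
decision = finalize(rec, reviewer="j.doe", final_outcome="advance",
                    rationale="Portfolio shows relevant work the parser missed.")
print(decision)
```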
Algorithmic Audit
An algorithmic audit is a structured, documented evaluation of an AI system’s inputs, processing logic, and outputs to assess accuracy, fairness, and compliance. In hiring, audits test for adverse impact across demographic groups, validate that the model is measuring what it claims to measure (construct validity), and verify that outcomes correlate with job performance (criterion validity).
New York City Local Law 144 mandates annual independent bias audits of automated employment decision tools, with results published on the employer’s website. Several other jurisdictions are moving toward similar requirements. Treat audits as a standing operational practice, not a one-time pre-launch activity. Our case study on Boost Diversity 42% with Ethical AI Hiring documents what a compliant audit-and-correction cycle produces in practice.
Construct Validity
Construct validity is the degree to which an AI tool actually measures what it claims to measure. A video interview AI that claims to measure “communication skills” must demonstrate — through empirical validation — that its scores correlate with actual communication skill, not with accent, lighting quality, or camera equipment. Without construct validity evidence, the tool’s job-relatedness defense against discrimination claims collapses.
Criterion Validity
Criterion validity is the degree to which an AI assessment’s scores predict job performance outcomes — typically measured by correlating pre-hire scores with post-hire performance ratings or tenure. Under EEOC Uniform Guidelines, any selection procedure with adverse impact must demonstrate criterion validity to be defensible. Vendors should provide criterion validity studies conducted on populations representative of your candidate pool — not just their general customer base.
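The core of a criterion validity study is a correlation between pre-hire scores and a post-hire criterion measure. A minimal sketch with synthetic numbers; a defensible study additionally requires adequate sample size and a population representative of your candidate pool.

```python
# Criterion validity sketch: correlate pre-hire AI scores with
# post-hire performance ratings. Data below is synthetic.
from statistics import correlation  # Pearson r; Python 3.10+

pre_hire_scores    = [62, 75, 58, 90, 70, 81, 66, 85]
performance_rating = [3.1, 3.8, 2.9, 4.5, 3.4, 4.1, 3.0, 4.3]

r = correlation(pre_hire_scores, performance_rating)
print(f"criterion validity r = {r:.2f}")  # closer to 1.0 = stronger evidence
```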
Data Protection Impact Assessment (DPIA)
A DPIA is a systematic process for identifying and mitigating privacy risks before deploying a new data processing activity. Under GDPR Article 35, a DPIA is mandatory when processing is “likely to result in a high risk to the rights and freedoms of natural persons.” AI hiring tools that process special category data, enable systematic evaluation of candidates, or make automated decisions almost always trigger this threshold.
A DPIA documents: what data is processed, why, the necessity and proportionality of the processing, risks identified, and mitigations implemented. It must be completed before deployment — not after. Our satellite on HR Data Readiness for AI: Essential Pre-Implementation Strategy covers the data governance groundwork that feeds into a compliant DPIA.
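The documentation elements listed above map naturally onto a structured record, which makes "produce the DPIA on demand" a query rather than a scramble. A minimal sketch with illustrative field names; GDPR Article 35 defines the actual required content.

```python
# Minimal structured DPIA record mirroring the elements listed above.
# Field names are illustrative; Article 35 sets the required content.
from dataclasses import dataclass

@dataclass
class DPIA:
    processing_activity: str
    data_categories: list[str]
    purpose: str
    necessity_and_proportionality: str
    risks_identified: list[str]
    mitigations: list[str]
    completed_before_deployment: bool = False  # must be True to go live

dpia = DPIA(
    processing_activity="AI video interview scoring",
    data_categories=["interview recording", "derived biometric features"],
    purpose="Assess role-specific communication skills",
    necessity_and_proportionality="Less intrusive alternatives evaluated",
    risks_identified=["special category data", "automated evaluation"],
    mitigations=["explicit consent flow", "human review of all rejections"],
    completed_before_deployment=True,
)
```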
Related Terms
These terms appear frequently in AI hiring compliance discussions and connect the core concepts above to broader legal and organizational frameworks.
Equal Employment Opportunity Commission (EEOC)
The EEOC is the U.S. federal agency responsible for enforcing federal employment discrimination laws, including Title VII, ADEA, ADA, and GINA. The EEOC has issued technical assistance guidance confirming its authority to investigate and act on AI-related discrimination claims. Employers are the responsible party — not AI vendors — for discriminatory tool outputs.
Data Controller vs. Data Processor
Under GDPR, the data controller is the entity that determines the purposes and means of processing personal data — typically the employer. The data processor is the entity that processes data on behalf of the controller — typically the AI vendor. Controllers bear primary legal responsibility. Processors must operate under a Data Processing Agreement (DPA) that contractually binds them to GDPR-compliant practices. Every AI hiring vendor contract must include a compliant DPA.
Informed Consent
Informed consent in data processing is consent that is freely given, specific, informed, and unambiguous. A candidate must know precisely what data is being collected, how it will be processed, and by whom — and must actively agree, not merely fail to object. In hiring contexts, consent as a lawful basis for processing is procedurally fraught because candidates may feel coerced. GDPR guidance recommends using alternative lawful bases where possible and reserving explicit consent for genuinely optional processing activities.
Accountability Principle
The accountability principle (GDPR Article 5(2)) requires that controllers be able to demonstrate compliance — not just claim it. For AI hiring, this means maintaining documented evidence of: training data sources, bias testing results, DPIA completion, vendor DPAs, candidate rights response logs, and audit outcomes. Documentation that cannot be produced on demand is treated as documentation that does not exist.
Common Misconceptions
These are the compliance misunderstandings that most frequently generate avoidable legal exposure in AI hiring.
- “The vendor is liable for discriminatory outputs, not us.” False. Under U.S. civil rights law and GDPR, the employer is the responsible party for all employment decisions, including those informed or made by a vendor’s AI tool. Vendor indemnification clauses rarely cover discrimination claims and are contractually limited. Employers cannot outsource liability by outsourcing the decision process.
- “We don’t need GDPR compliance because we’re a U.S. company.” False. GDPR applies based on the location of the data subject, not the location of the company. If you recruit EU residents — even for U.S.-based roles — you are covered.
- “If we have a human review every AI recommendation, we’re automatically compliant.” Not necessarily. Rubber-stamp reviews — where humans approve AI decisions without genuine independent evaluation — do not satisfy GDPR Article 22’s human-in-the-loop requirement. The human reviewer must have real authority, real information, and a genuine capacity to override.
- “Bias testing before launch is sufficient.” No. Bias in AI systems can emerge or shift post-deployment as candidate populations change, as the model continues learning, or as job requirements evolve. Ongoing monitoring and periodic re-auditing are required — not a one-time pre-launch check.
- “Anonymizing candidate data removes all privacy obligations.” Only if anonymization is genuine and irreversible. Pseudonymization — replacing names with codes while retaining linkable data — does not remove GDPR obligations. True anonymization is a high technical bar that most HR systems do not achieve.
For the practical DEI dimension of these compliance concepts, see our satellites on AI and DEI Strategy: Benefits, Risks, and Ethical Use and AI Resume Screening: Maximize Accuracy and HR Efficiency.
Build Compliance Into the Workflow, Not Onto It
Every term in this glossary points to the same operational conclusion: compliance in AI hiring is not a legal overlay applied after tools are selected and workflows are built. It is a design constraint that must shape procurement, configuration, documentation, and ongoing monitoring from the start.
The organizations that navigate AI hiring law successfully are not the ones with the best lawyers on retainer after a complaint is filed. They are the ones who ran adverse impact analyses before launch, completed DPIAs before deployment, contractually bound their vendors to explainability and audit cooperation, and maintained documentation sufficient to answer any regulator’s question on day one.
Use the Talent Acquisition Automation ROI: Build Your Business Case framework to make compliance costs visible in your investment model — they belong in the numerator, not in a footnote. And return to the parent pillar, Talent Acquisition Automation: AI Strategies for Modern Recruiting, for the complete architecture of a compliant, high-performance automated recruiting operation.