
Master AI HR Laws: Legal Glossary for Talent Acquisition
AI in talent acquisition operates inside a dense legal framework most HR teams learn only after a complaint surfaces. That is the wrong time to encounter these terms for the first time. This glossary defines the 25 legal, ethical, and regulatory concepts every recruiter, HR director, and executive must understand before deploying any AI hiring tool, because compliance starts with vocabulary, and vocabulary starts here.
For the broader strategic context, see the parent guide: HR AI Strategy: Roadmap for Ethical Talent Acquisition.
Core Legal Terms
Adverse Impact
Adverse impact is a legal standard measuring whether a facially neutral employment practice — including an AI screening tool — produces selection rates that disproportionately exclude members of a protected class. It is the central legal liability mechanism for AI hiring tools.
The operative federal benchmark is the four-fifths rule (also called the 80% rule), codified in the Uniform Guidelines on Employee Selection Procedures (UGESP): if the selection rate for any protected group is less than 80% of the rate for the highest-selected group, adverse impact is presumed to exist. Adverse impact does not require proof of discriminatory intent — statistical disparity is sufficient to shift the legal burden to the employer to demonstrate business necessity.
Why it matters for AI: Automated screening and ranking tools apply uniform rules at scale, which means any embedded bias propagates uniformly — producing measurable adverse impact faster and at larger volumes than manual processes.
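The four-fifths calculation itself is plain arithmetic, which means an HR team can sanity-check a tool's outputs without specialist software. Below is a minimal Python sketch, assuming you can export applicant and selection counts per group from your ATS; the function name and all figures are illustrative, not real audit data:

```python
def four_fifths_check(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio per group: its selection rate divided by the highest
    group's rate. Ratios below 0.8 presume adverse impact under the
    UGESP four-fifths rule.

    counts maps group name -> (applicants, selected).
    """
    rates = {g: sel / apps for g, (apps, sel) in counts.items() if apps > 0}
    highest = max(rates.values())
    return {g: r / highest for g, r in rates.items()}

# Illustrative numbers only, not real audit data.
screening = {"Group A": (200, 80), "Group B": (180, 50)}
for group, ratio in four_fifths_check(screening).items():
    flag = "adverse impact presumed" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} -> {flag}")
```

Note that the check applies to every automated step that narrows the pool, screening, ranking, and interview invitation alike, not only to final hires.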
Disparate Impact vs. Disparate Treatment
These two doctrines define different pathways to employment discrimination liability. Disparate treatment is intentional discrimination — configuring an AI to explicitly exclude candidates of a specific race, gender, or age. Disparate impact is unintentional discrimination — a neutral AI tool that produces statistically disproportionate exclusion of a protected group regardless of intent.
Most AI hiring litigation risk is rooted in disparate impact, because intent is irrelevant under that doctrine. An algorithm that was never designed to discriminate can still produce unlawful outcomes if the data it was trained on reflected discriminatory historical hiring patterns.
Title VII of the Civil Rights Act
Title VII prohibits employment discrimination on the basis of race, color, religion, sex, or national origin. It is the foundational federal statute under which most AI-based hiring discrimination claims in the United States are pursued. The Equal Employment Opportunity Commission (EEOC) has confirmed that Title VII applies to AI-assisted hiring decisions — employers cannot delegate compliance responsibility to an AI vendor.
Age Discrimination in Employment Act (ADEA)
The ADEA prohibits employment discrimination against individuals 40 years of age or older. In the context of AI hiring tools, this is particularly relevant for models that use proxy variables — graduation year, years of experience thresholds, or platform tenure — that correlate with age without explicitly naming it.
Americans with Disabilities Act (ADA)
The ADA prohibits discrimination against qualified individuals with disabilities. AI-based assessments — including video interview analysis, gamified cognitive tests, and automated writing evaluations — may produce discriminatory outcomes for candidates with disabilities if the tool was not validated across that population. The ADA also creates obligations around reasonable accommodation in the assessment process itself.
Data Privacy and Security Terms
Data Privacy
Data privacy is the principle that individuals have the right to control how their personal information is collected, stored, used, and shared. In talent acquisition, data privacy governs every stage of the candidate experience — from resume submission through automated screening, reference checks, and onboarding data collection.
Data privacy is distinct from data security. Privacy defines who has the right to access information and under what conditions. Security provides the technical controls that enforce those rights.
Data Security
Data security encompasses the technical and operational controls that protect personal information from unauthorized access, breach, alteration, or loss. For AI hiring platforms processing thousands of candidate records, data security requirements include encryption at rest and in transit, access controls and role-based permissions, intrusion detection, and documented incident response procedures.
A breach of candidate data triggers notification obligations under dozens of state laws, and it can permanently cost an employer its standing as an employer of choice among candidates who learn of it.
General Data Protection Regulation (GDPR)
GDPR is the European Union’s comprehensive data protection regulation. It applies to any organization processing personal data of individuals located in the EU — regardless of where that organization is headquartered. For U.S. employers using AI to screen EU-based candidates, GDPR compliance is mandatory, not optional.
Key GDPR obligations for AI hiring tools include:
- Lawful basis for processing: Typically legitimate interest or explicit consent for candidate data.
- Transparency: Candidates must be informed that AI is being used and how it affects decisions about them.
- Right of access: Candidates can request all personal data held about them.
- Right to erasure: Candidates can request deletion of their data under specified conditions.
- Article 22 protections: Restrictions on solely automated decisions with significant effects (see below).
Penalties for GDPR violations reach up to €20 million or 4% of global annual turnover — whichever is higher.
California Consumer Privacy Act (CCPA)
The CCPA is California’s primary consumer data protection statute. It grants California residents rights to know what personal information is collected about them, to request deletion, and to opt out of the sale of their personal information. These rights apply to candidate data processed by AI hiring tools.
CCPA applies to any company that meets specified revenue or data-volume thresholds and handles data of California residents — regardless of where the company operates. The California Privacy Rights Act (CPRA) extended and strengthened CCPA obligations beginning in 2023.
Algorithmic Governance Terms
Algorithmic Bias
Algorithmic bias is the systematic production of unfair outcomes by an AI model — outcomes that favor or disadvantage individuals based on characteristics such as race, gender, age, or disability status. It is not a malfunction. It is the predictable output of models trained on historical data that encoded the biases of the humans who generated that data.
In talent acquisition, algorithmic bias manifests when a resume screening model trained on historical hires — predominantly from a non-diverse workforce — learns to down-rank resumes that pattern-match to underrepresented groups. The model is doing exactly what it was trained to do. The training data was the problem. For a detailed mitigation framework, see our guide on bias detection and mitigation strategies for AI resume tools.
Algorithmic Transparency
Algorithmic transparency is the degree to which the logic, inputs, and decision rules of an AI model can be examined, understood, and audited by humans outside the development team. A transparent algorithm is one whose outputs can be traced back to specific features and weightings — making it possible to detect bias, explain decisions to candidates, and satisfy regulatory inquiries.
Many commercial AI hiring tools use proprietary black-box models — meaning the vendor controls the algorithmic logic and does not expose it for independent inspection. This opacity is a compliance risk, because HR teams cannot explain or defend decisions they cannot see.
Explainability
Explainability is the operational capacity to articulate, in plain language, why an AI model produced a specific output — a score, a ranking, a screening decision, or a rejection. It is distinct from algorithmic transparency: a model can be technically transparent (its weights are visible) but practically unexplainable (no one can render those weights into a candidate-facing rationale).
Explainability is increasingly a legal requirement. GDPR requires that candidates subject to automated decisions under Article 22 receive “meaningful information about the logic involved” (Articles 13–15). The EEOC has signaled that AI hiring tools must be capable of providing explanations that allow candidates to understand and contest decisions. Teams that cannot explain why the AI excluded a candidate cannot defend that exclusion.
Automated Decision-Making (ADM)
Automated decision-making refers to decisions about individuals made by algorithms operating without human review at the point of decision. In hiring, ADM occurs when an AI tool screens, ranks, or rejects candidates without a human reviewing and approving each output before it takes effect.
GDPR Article 22 restricts ADM that produces legal or significant effects on individuals — a category that includes hiring decisions. Candidates have the right to request human intervention, to contest the automated decision, and to receive an explanation of the logic applied. Fully automated candidate rejections are the highest-risk use case under current regulation.
Bias Audit
A bias audit is an independent statistical analysis of an AI tool’s outputs — examining selection rates across protected classes to determine whether the system produces adverse impact. It is not a vendor self-assessment. It is an analysis conducted by an independent third party with access to the tool’s actual outputs on real candidate data.
New York City Local Law 144 mandates annual independent bias audits for covered employers using automated employment decision tools. Results must be published on the employer’s website. The law is the first in the United States to make bias auditing a legal requirement — and it is unlikely to be the last. See also our compliance guide on AI resume screening compliance.
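Auditors commonly pair the four-fifths ratio with a statistical significance test, because small applicant pools can produce extreme ratios by chance. A sketch of the standard two-proportion z-test in pure Python; the counts are invented, and a real audit would run on production outputs held by the independent auditor:

```python
from math import erfc, sqrt

def two_proportion_pvalue(sel_a: int, n_a: int, sel_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two groups'
    selection rates, using the pooled two-proportion z-test."""
    p_pool = (sel_a + sel_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (sel_a / n_a - sel_b / n_b) / se
    return erfc(abs(z) / sqrt(2))  # equals 2 * (1 - Phi(|z|))

# Illustrative: 80 of 200 selected vs. 50 of 180 selected.
print(f"p = {two_proportion_pvalue(80, 200, 50, 180):.4f}")
# A small p means the rate gap is unlikely to be sampling noise.
```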
Emerging Regulatory Frameworks
EU AI Act
The EU AI Act is the world’s first comprehensive AI regulatory framework. It classifies AI systems by risk level, with the highest-risk systems subject to the strictest requirements. AI tools used in employment decisions — including resume screening, candidate ranking, skills matching, and performance prediction — are classified as high-risk systems under the Act.
High-risk classification triggers mandatory conformity assessments before deployment, obligations to register the system in an EU database, ongoing monitoring and incident reporting requirements, transparency obligations to users, and human oversight requirements. The Act entered into force in August 2024 with phased applicability.
NYC Local Law 144
New York City Local Law 144, effective since July 2023, requires employers and employment agencies using automated employment decision tools (AEDTs) to conduct annual independent bias audits and to provide advance written notice to candidates before any AEDT is used in evaluating them. The law applies to any position based in New York City.
An AEDT is defined broadly to include any computational process derived from machine learning, statistical modeling, data analytics, or AI that is used to substantially assist or replace discretionary decision-making in hiring or promotion.
Illinois Artificial Intelligence Video Interview Act (AIVIA)
The Illinois AIVIA, in effect since January 2020, requires employers using AI to analyze video interviews to notify candidates that AI will be used, to explain how the AI works and what characteristics it evaluates, to obtain candidate consent, and to limit sharing of the video data. It was the first U.S. law to regulate AI in video-based hiring assessment.
Right to Explanation
The right to explanation is a candidate’s legal entitlement — codified in GDPR and emerging in U.S. state law — to receive a meaningful, human-intelligible account of how an automated system reached a decision that affected them. For HR teams, this creates a practical operational requirement: AI tools must be configured to generate candidate-facing rationale for screening and ranking outputs, not just internal scoring logs that candidates cannot access or interpret.
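What “candidate-facing rationale” can look like in practice is a thin rendering layer that turns whatever per-feature contributions a tool exposes into plain language. A hypothetical sketch: the feature names, the weights, and the assumption that your tool exposes contributions at all are illustrative:

```python
def candidate_rationale(contributions: dict[str, float], top_n: int = 3) -> str:
    """Render per-feature score contributions (assumed to come from the
    tool's explainability layer) into a candidate-readable sentence."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]),
                    reverse=True)[:top_n]
    parts = [
        f"{name} ({'raised' if w >= 0 else 'lowered'} the score by {abs(w):.2f})"
        for name, w in ranked
    ]
    return "The factors that most influenced this decision: " + "; ".join(parts) + "."

# Illustrative contributions only.
print(candidate_rationale({
    "years of relevant experience": 0.42,
    "certification match": 0.18,
    "employment gap length": -0.25,
}))
```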
Fairness and Equity Terms
Protected Class
A protected class is a group of individuals sharing a characteristic that is legally protected from employment discrimination under federal or state law. Under federal law, protected characteristics include race, color, national origin, sex, religion, age (40+), disability, and genetic information. State laws frequently add additional protected characteristics — including sexual orientation, gender identity, marital status, and military status.
Any AI hiring tool must be validated to ensure it does not produce adverse impact against any protected class. The categories relevant to a given tool depend on the jurisdiction, the role, and the candidate population.
Consent
In the context of AI hiring tools and data privacy law, consent is a candidate’s freely given, specific, informed, and unambiguous agreement to the collection and processing of their personal data. Under GDPR, consent must be granular — a blanket “by applying you agree to everything” clause does not satisfy the standard. Candidates must be able to withdraw consent without detriment.
AI-specific statutes extend these obligations beyond data processing: Illinois AIVIA requires affirmative consent to be evaluated by the specific AI tool being deployed, while NYC Local Law 144 requires advance written notice before an AEDT is used.
Audit Trail
An audit trail is a chronological record of AI system outputs, decision logic, data inputs, and human review actions — created and maintained to enable post-hoc regulatory inspection, internal compliance review, and candidate rights fulfillment. Regulators investigating an AI hiring complaint will request the audit trail first. If it does not exist, the employer’s compliance posture collapses immediately.
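A minimal audit trail requires no special infrastructure; an append-only log of every automated output and every human action on it is the floor. A sketch assuming a JSON Lines file as the store (the field names are illustrative, not a standard schema):

```python
import json
from datetime import datetime, timezone

def log_decision(path: str, candidate_id: str, tool: str,
                 output: dict, reviewer: str | None, action: str) -> None:
    """Append one decision record to an append-only JSON Lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "tool": tool,                # which AI system produced the output
        "output": output,            # score, rank, or screening result
        "human_reviewer": reviewer,  # None means no human acted: that is ADM
        "action": action,            # e.g. "approved", "overridden"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("audit.jsonl", "cand-0042", "resume-screener-v3",
             {"score": 0.71, "decision": "advance"}, "j.doe", "approved")
```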
Human-in-the-Loop (HITL)
Human-in-the-loop describes an AI system design in which a human reviewer is present at each consequential decision point — approving, overriding, or modifying the AI’s output before it takes effect. HITL is the primary regulatory mitigation for automated decision-making risk. It is also the design pattern that makes AI hiring tools defensible: no candidate is screened out, ranked, or rejected by the algorithm alone.
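As a design pattern, HITL reduces to one invariant: no AI output takes effect until a named human has acted on it. A simplified sketch of that gate (the statuses, fields, and names are assumptions for illustration, not any vendor's API):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    candidate_id: str
    ai_decision: str           # e.g. "reject" or "advance"
    rationale: str             # candidate-facing explanation
    status: str = "pending"    # has no effect until a human acts
    reviewer: str | None = None

def human_review(rec: Recommendation, reviewer: str, approve: bool,
                 override: str | None = None) -> Recommendation:
    """The only path to an effective decision: the human approves the
    AI's output or replaces it with their own."""
    rec.reviewer = reviewer
    rec.status = "approved" if approve else "overridden"
    if not approve and override:
        rec.ai_decision = override
    return rec

rec = Recommendation("cand-0042", "reject", "Score 0.31: missing required skill")
final = human_review(rec, reviewer="j.doe", approve=False, override="advance")
print(final.status, final.ai_decision)  # overridden advance
```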
For practical implementation of HITL in your AI readiness program, see the AI readiness assessment for recruitment teams.
Related Terms
Fairness Metric
A fairness metric is a mathematical definition of what “fair” means for a given AI model — and there is no universally accepted definition. Common fairness metrics include demographic parity (equal selection rates across groups), equalized odds (equal true positive and false positive rates), and individual fairness (similar candidates receive similar treatment). The choice of fairness metric is itself a value judgment that affects which groups benefit and which are disadvantaged — making it a compliance and governance decision, not solely a technical one.
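The divergence between metrics is easiest to see computed side by side. A toy sketch comparing demographic parity (selection rate per group) with the true-positive-rate half of equalized odds, on invented data; a model can look fair on one metric and unfair on the other:

```python
def rate(flags: list[bool]) -> float:
    return sum(flags) / len(flags) if flags else 0.0

def fairness_report(selected: list[bool], qualified: list[bool],
                    group: list[str]) -> None:
    """Per group: selection rate (demographic parity) and selection rate
    among qualified candidates (the TPR component of equalized odds)."""
    for g in sorted(set(group)):
        sel = [s for s, gg in zip(selected, group) if gg == g]
        sel_q = [s for s, q, gg in zip(selected, qualified, group)
                 if gg == g and q]
        print(f"{g}: selection rate {rate(sel):.2f}, "
              f"rate among qualified {rate(sel_q):.2f}")

# Invented toy data: which metric looks "fair" depends on the base rates.
selected  = [True, True, False, False, True, False]
qualified = [True, True, True, False, True, True]
group     = ["A", "A", "A", "B", "B", "B"]
fairness_report(selected, qualified, group)
```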
Proxy Variable
A proxy variable is a data feature that correlates with a protected characteristic without explicitly naming it. Common examples in resume data include graduation year (a proxy for age), ZIP code (a proxy for race or national origin), and gap years (a proxy for disability or caregiving, which correlates with gender). AI models trained on proxy-laden data can produce discriminatory outputs even when protected characteristics are explicitly excluded from the model inputs.
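One pre-deployment check is to correlate each candidate feature against a protected attribute held only for audit purposes; a strong correlation flags the feature as a likely proxy. A pure-Python sketch with invented values:

```python
from statistics import mean, pstdev

def pearson(x: list[float], y: list[float]) -> float:
    """Pearson correlation between two equal-length numeric series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Invented values: does graduation year act as a proxy for age 40+?
grad_year = [1998.0, 2001.0, 2015.0, 2019.0, 1995.0, 2020.0]
over_40   = [1.0, 1.0, 0.0, 0.0, 1.0, 0.0]  # audit-only attribute
print(f"correlation: {pearson(grad_year, over_40):.2f}")
# A large absolute correlation means the model can recover age from this
# feature even when age itself is excluded from the inputs.
```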
Model Drift
Model drift occurs when an AI model’s performance or outputs change over time as the real-world data the model encounters diverges from the training data. In hiring, model drift can cause a tool that passed a bias audit at deployment to develop adverse impact over time as applicant demographics, job market conditions, or organizational hiring patterns shift. This is why bias audits must be conducted periodically — not just at initial deployment.
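Drift monitoring can be as simple as re-running the impact-ratio computation on rolling windows and triggering a re-audit when the floor drops. A sketch with invented quarterly counts of (applicants, selected) per group:

```python
def impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate
    (the same computation as in the Adverse Impact sketch above)."""
    rates = {g: s / n for g, (n, s) in counts.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Invented counts: (applicants, selected) per group, per quarter.
quarters = {
    "2024-Q1": {"Group A": (500, 150), "Group B": (450, 125)},
    "2024-Q2": {"Group A": (520, 160), "Group B": (480, 110)},
}
for quarter, counts in quarters.items():
    worst = min(impact_ratios(counts).values())
    status = "trigger re-audit" if worst < 0.8 else "ok"
    print(f"{quarter}: lowest impact ratio {worst:.2f} -> {status}")
```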
How These Terms Connect to Practice
Legal vocabulary without operational context produces lawyers, not compliance programs. The terms above are most useful when mapped to specific decisions in your AI procurement and deployment workflow:
- Before procurement: Require vendors to disclose bias audit results, fairness metrics used, and explainability mechanisms. Ask whether the tool’s outputs will be classifiable as ADM under GDPR Article 22.
- During implementation: Configure HITL checkpoints at every automated screening or ranking step. Establish audit trail logging from day one. Define the consent process for candidates in each jurisdiction.
- Ongoing operations: Schedule bias audits at minimum annually — more frequently if candidate demographics or hiring volumes shift. Maintain documentation of every ADM override and its rationale.
- Incident response: If a candidate files a discrimination complaint, the audit trail and explainability documentation are the first line of defense. Build them before you need them.
For the bias-specific mitigation playbook, see how other HR teams have implemented AI parsing to reduce unconscious bias and boost diversity.
Jeff’s Take
Most HR teams treat compliance as a post-purchase checklist. That is the wrong sequence. Before you buy any AI hiring tool, you need to know whether it produces adverse impact on your candidate population — not whether the vendor claims it doesn’t. Vendor attestations are not bias audits. Independent statistical analysis of your actual hiring data is. The teams that build the vocabulary first are the ones who avoid the regulatory exposure that is now producing real enforcement actions — not hypothetical ones.
In Practice
The gap we see most often is not malice — it is vocabulary mismatch. An HR director approves an AI screening platform because the vendor demo looked fair. The legal team asks six months later whether the tool was audited for disparate impact. The HR director doesn’t know what that means. The legal team can’t evaluate the tool because the vendor’s documentation uses product-marketing language instead of legal terminology. This glossary exists to close that gap before procurement, not after a complaint.
What We’ve Seen
The two compliance failures that surface most reliably are both definitional: teams that conflate data privacy with data security — treating them as the same problem with the same solution — and teams that do not understand the four-fifths rule until a hiring manager asks why the AI rejected every candidate from a specific ZIP code. Both are preventable. Both start with knowing the terms before you need them in a legal context.
Frequently Asked Questions
What is algorithmic bias in hiring?
Algorithmic bias in hiring occurs when an AI system produces systematically unfair outcomes — screening out protected-class candidates at disproportionate rates — because the model was trained on historical data that encoded past human bias. Biased AI outputs can constitute unlawful employment discrimination under Title VII of the Civil Rights Act, regardless of the employer’s intent.
What is adverse impact and how is it measured?
Adverse impact is a legal standard that measures whether a neutral employment practice disproportionately harms a protected group. The federal four-fifths rule is the primary benchmark: if the selection rate for any protected group is less than 80% of the rate for the highest-selected group, adverse impact is presumed. For AI tools, this calculation applies to automated screening, ranking, and filtering steps — not just final hire decisions.
Does GDPR apply to U.S.-based companies using AI for hiring?
Yes. GDPR applies whenever a company processes personal data of individuals located in the European Union, regardless of where the company is headquartered. U.S. employers using AI to screen EU-based candidates must comply with GDPR’s consent, transparency, and automated decision-making requirements — or face fines of up to 4% of global annual turnover.
What does the CCPA require for AI-driven candidate data?
The CCPA requires organizations to disclose what personal information they collect from California residents, including resume and assessment data, and to provide mechanisms for candidates to access, delete, or opt out of the sale of their information. These obligations apply even if the employer is not based in California, as long as California residents’ data is processed.
What is explainability in AI hiring tools?
Explainability means the ability to articulate, in plain language, why an AI model produced a specific output — a ranking, a shortlist decision, or a rejection. If your team cannot explain why the AI excluded a candidate, you cannot defend that decision in a discrimination inquiry. GDPR Article 22 requires this explanation as a candidate right, not a vendor feature.
What is the EU AI Act and how does it affect HR technology?
The EU AI Act classifies AI tools used in employment — including resume screening, candidate ranking, and performance evaluation — as high-risk systems. This classification triggers mandatory conformity assessments, transparency obligations, and human oversight requirements before deployment in the EU market. It is the most consequential AI regulatory development affecting HR technology since GDPR.
What is NYC Local Law 144 and who must comply?
New York City Local Law 144 requires employers using automated employment decision tools to conduct annual independent bias audits and to notify candidates before any such tool is used in evaluating them. It applies to employers deploying AI screening tools for positions based in New York City and has been in effect since July 2023.
What is the difference between data privacy and data security?
Data privacy governs who has the right to access or use personal information and under what conditions. Data security refers to the technical controls — encryption, access management, breach detection — that protect data from unauthorized access or loss. In AI-driven hiring, both are required: privacy frameworks define the rules; security controls enforce them. Conflating the two leads to compliance gaps in both areas.
What is disparate treatment vs. disparate impact in AI hiring?
Disparate treatment is intentional discrimination — an AI configured to filter out candidates of a specific race or gender. Disparate impact is unintentional discrimination — a facially neutral AI tool that produces statistically disproportionate exclusion of a protected group. Most AI hiring litigation risk comes from disparate impact, because intent is irrelevant to liability under that doctrine.
What is a bias audit for AI hiring tools?
A bias audit is an independent statistical analysis of an AI hiring tool’s outputs to determine whether the system produces adverse impact against any protected class. It must be conducted by an independent third party — not the vendor — using real candidate data. New York City Local Law 144 mandates annual independent bias audits for covered employers. Audit results must be published publicly.
What does automated decision-making mean under GDPR Article 22?
GDPR Article 22 restricts decisions made solely by automated processing that produce legal or significant effects on individuals. In hiring, a fully automated screening or rejection that operates without any human review triggers Article 22 protections — giving candidates the right to request human intervention, contest the decision, and obtain an explanation of the logic applied. Fully automated rejections carry the highest legal exposure under current EU regulation.
What does ‘right to explanation’ mean in hiring AI?
The right to explanation is a candidate’s entitlement — codified in GDPR and emerging in U.S. state law — to receive a meaningful account of how an automated system reached a decision that affected them. For HR teams, this means AI tools must be configured to generate candidate-facing rationale, not just internal scoring outputs that candidates cannot access or interpret.
Build the Vocabulary Before You Build the Stack
The terms in this glossary are not academic. They are the vocabulary regulators use when they investigate, the vocabulary plaintiffs’ attorneys use when they file, and the vocabulary HR leaders need when they are asked to defend a hiring decision that a machine made. Build fluency now.
For the strategic framework that puts these concepts into an actionable deployment sequence, return to the parent guide: HR AI Strategy: Roadmap for Ethical Talent Acquisition. For the business case that justifies this investment in compliance infrastructure, see the strategic business case for AI in recruiting and the KPIs for measuring AI talent acquisition success.