AI Regulation in HR: Laws, Compliance, and Bias Audits

Published On: August 17, 2025

AI hiring tools are no longer operating in a regulatory vacuum. The EU AI Act is in force, NYC Local Law 144 is being enforced, and the EEOC has made clear that existing anti-discrimination statutes apply to algorithmic decisions. HR leaders navigating this landscape face three critical questions: which framework applies to them, what it actually requires, and how the regimes compare for teams that hire across borders. This post answers those questions directly. For the broader context on building an AI-powered hiring operation on a sound analytics foundation, see our Recruitment Marketing Analytics: Your Complete Guide to AI and Automation.

Framework Comparison at a Glance

The EU AI Act and the US regulatory patchwork share the same concern — algorithmic bias in hiring — but impose obligations through completely different mechanisms. The EU mandates pre-deployment controls. The US largely enforces post-harm through litigation and agency guidance. Here is how the two regimes stack up across the dimensions that matter most to HR operations:

| Dimension | EU AI Act (High-Risk) | US Federal (EEOC Guidance) | NYC Local Law 144 |
|---|---|---|---|
| Trigger | AI tool used in employment decisions for EU residents | Any employment decision affecting US-protected class members | Automated decision tool used to hire/promote in NYC |
| Timing | Pre-deployment conformity assessment required | Post-harm enforcement via EEOC charge or litigation | Annual audit required; enforcement ongoing since July 2023 |
| Bias Audit | Required as part of risk management system | Not mandated; recommended as disparate impact defense | Mandatory independent annual bias audit; public summary required |
| Candidate Disclosure | Transparency obligations; explanation of logic required on request | No explicit mandate; expected under general fairness principles | Required notice to candidates before tool is used |
| Human Override | Mandatory meaningful human oversight for all high-risk systems | No statutory requirement; important for legal defensibility | Not specified; best practice aligned with audit obligations |
| Data Governance | High-quality training data, documentation, and audit logs required | Record retention under EEOC regulations (29 CFR Part 1602) | Audit data retention tied to annual audit cycle |
| Penalty Exposure | Up to €35M or 7% of global annual turnover (top tier; most high-risk violations capped at €15M or 3%) | Back pay, compensatory damages, attorney fees via litigation | Civil penalties per violation; related claims possible under the NYC Human Rights Law |
| Geographic Scope | Any employer using AI on EU residents, including non-EU companies | US employers, US-protected class members | Any employer using an AEDT to hire/promote in NYC |

EU AI Act: The Highest Compliance Bar in the World

The EU AI Act is the most comprehensive AI governance framework currently in force, and HR teams using any AI tool to screen, rank, or evaluate candidates for EU-based roles are squarely in scope.

Why HR Tools Are ‘High-Risk’ by Default

The EU AI Act classifies AI systems used in employment, worker management, and access to self-employment as high-risk in Annex III. This is not a gray area — resume screeners, ATS ranking algorithms, video interview analyzers, and predictive suitability scores all qualify. High-risk status triggers the Act’s most demanding obligations before a tool is ever switched on.

Required under high-risk classification:

  • Risk management system: A documented, ongoing process for identifying and mitigating foreseeable risks throughout the system’s lifecycle.
  • Data governance: Training, validation, and testing data must be documented; known biases must be identified and addressed.
  • Technical documentation: Sufficient detail for competent authorities to assess compliance — not something most HR software buyers have ever requested from vendors.
  • Human oversight: The system must be designed so competent HR staff can monitor outputs, detect anomalies, and override or halt the system. Rubber-stamping AI recommendations does not meet this standard.
  • Transparency to deployers: The vendor must provide information sufficient for the deployer (your HR team) to understand the system’s capabilities, limitations, and risks.
  • Transparency to affected persons: Individuals subject to AI decisions have the right to a meaningful explanation of the logic involved.

Who Bears the Compliance Burden: Vendor or Employer?

Both. The EU AI Act distinguishes between providers (vendors who develop and place AI systems on the market) and deployers (employers who use those systems). Vendors must deliver conformity-assessed, documented systems. Employers must implement proper oversight, follow vendor instructions, monitor system performance, and ensure candidate notification. Buying a certified AI tool does not transfer all compliance responsibility to the vendor. If your HR team uses the tool in ways that generate discriminatory outcomes, the deployer shares liability.

Gartner analysis confirms that many HR leaders underestimate the deployer obligations in the EU AI Act, assuming vendor certification resolves their compliance exposure. It does not.

Extraterritorial Reach

A US company hiring a candidate in Berlin using an AI resume screener is subject to the EU AI Act. The regulation applies based on where the affected person is located, not where the employer is incorporated. Global employers cannot route around EU obligations by processing data outside the EU.

US Federal Regulation: Broad Statute, Narrow Mandate

The United States has not enacted a single comprehensive AI law at the federal level. Instead, existing anti-discrimination statutes — Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA) — have been extended by EEOC guidance to cover algorithmic hiring decisions.

What the EEOC Has Actually Said

The EEOC’s guidance makes three points that HR teams must internalize:

  1. Disparate impact applies to algorithms. If an AI tool produces a statistically significant difference in selection rates between protected groups — even with no discriminatory intent — the employer faces disparate impact liability under Title VII.
  2. Employers cannot outsource liability to vendors. Using a third-party AI tool does not shield an employer from discrimination claims. The employer is responsible for the tool’s outcomes.
  3. Reasonable accommodation obligations extend to AI. If an AI screening tool disadvantages applicants with disabilities, the employer may be required to offer an alternative assessment process as a reasonable accommodation under the ADA.

The EEOC’s enforcement posture matters because the underlying risk is well documented: Harvard Business Review research has traced the mechanisms through which AI hiring systems can replicate and amplify historical workforce biases, particularly when trained on incumbency data that reflects prior discriminatory hiring patterns.

The Missing Federal Mandate: What This Means in Practice

The absence of a federal AI-specific hiring law means US employers face a reactive compliance environment. Liability materializes after a discriminatory outcome triggers an EEOC charge or litigation — not through pre-deployment review. For HR teams, this creates a dangerous false sense of security: no regulator will review your AI stack before you deploy it, but that does not mean you are protected when something goes wrong.

Proactive bias auditing — the kind NYC mandates — is the most effective way to surface problems before they become legal exposure. The fact that federal law does not require it does not mean it is optional for risk-conscious HR leaders.

NYC Local Law 144: The Most Prescriptive US Mandate

New York City’s Local Law 144 is the most specific and currently enforced AI hiring regulation in the United States. Any employer or employment agency using an automated employment decision tool (AEDT) to screen candidates or employees for positions in New York City must comply.

What NYC Law 144 Requires

Three concrete obligations apply:

  1. Annual independent bias audit. The audit must be conducted by an independent auditor — not the vendor, not the employer’s internal team — within one year before the tool is used. The audit must calculate selection rate differentials across race/ethnicity and sex categories.
  2. Published audit summary. Employers must publish a summary of the bias audit results — including the scoring and summary data — on a publicly accessible webpage. The summary must remain available until a new audit is published.
  3. Candidate notice. Candidates must be notified that an AEDT will be used in their assessment, and told what job qualifications or characteristics the tool evaluates. This notice must be provided at least ten business days before the tool is applied.
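
The audit’s core computation — selection rates per demographic category and each category’s impact ratio against the highest-rate category — can be sketched in a few lines. The data below is hypothetical, and the function illustrates the arithmetic only; it is not a compliant audit (Local Law 144 also requires intersectional categories and an independent auditor):

```python
def impact_ratios(outcomes):
    """Selection rate per category and its ratio to the highest-rate
    category, the figures an LL144-style audit summary reports.
    `outcomes` maps category -> (selected, total_applicants)."""
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {cat: (rate, rate / top) for cat, rate in rates.items()}

# Hypothetical audit data: category -> (selected, applicants)
data = {
    "male":   (120, 400),  # 30% selection rate
    "female": (90, 400),   # 22.5% selection rate
}
for cat, (rate, ratio) in impact_ratios(data).items():
    print(f"{cat}: rate={rate:.3f}, impact_ratio={ratio:.2f}")
```

In this hypothetical, the female category’s impact ratio is 0.225 / 0.30 = 0.75 — the kind of differential an audit summary must surface.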

Scope: What Counts as an AEDT?

Any computational process — machine learning, statistical modeling, data analytics, or AI — that issues a simplified output used to screen candidates for a specific position qualifies. This captures ATS scoring algorithms, resume ranking tools, chatbot screening flows, and video interview analyzers. If the tool produces a score, rank, or recommendation that a recruiter uses to make a hiring or promotion decision in NYC, it is an AEDT.

Enforcement and Penalties

The NYC Department of Consumer and Worker Protection (DCWP) enforces Local Law 144. Civil penalties apply per violation, and enforcement actions are public. Beyond municipal penalties, Local Law 144 creates a framework that plaintiffs’ attorneys can use to establish that an employer knew — through the audit process — about disparate impact in their tools and deployed them anyway.

For a practical look at how automated screening can be structured to minimize bias exposure, see our guide on automated candidate screening best practices.

US State-Level Landscape: A Growing Patchwork

Beyond NYC, state-level AI hiring regulation is accelerating. HR teams operating nationally face a compliance stack that varies by jurisdiction:

  • Illinois (AIVIA): Under the Artificial Intelligence Video Interview Act, employers using AI to analyze video interviews must notify candidates before the interview, explain how the AI works, obtain consent, and limit data sharing.
  • Maryland: Prohibits employers from using facial recognition technology in job interviews without candidate consent.
  • California: Algorithmic accountability obligations are expanding under the CCPA/CPRA and regulations from the California Civil Rights Department; automated decision-making technology rules continue to develop.
  • Colorado and Washington: Colorado’s AI Act (SB 24-205, enacted 2024) imposes duties on developers and deployers of high-risk AI systems, including in employment; Washington’s automated decision legislation continues to advance.

SHRM research documents growing employer confusion about which state laws apply to multi-state hiring operations — a complexity that will only increase as more states enact AI-specific employment regulations.

The Algorithmic Bias Problem: Why Regulation Exists

Every regulatory framework in this space is responding to the same documented failure mode: AI hiring tools trained on historical data replicate and amplify the biases embedded in that data.

The mechanism is straightforward. If a company’s historical high-performer dataset is predominantly male or predominantly white due to prior discriminatory hiring, an AI trained to identify candidates who resemble those high-performers will systematically downrank female or minority candidates — not because the algorithm was designed to discriminate, but because the training data encoded past discrimination as a success signal.

McKinsey Global Institute research on AI systems highlights that biased training data is one of the primary sources of AI failure in high-stakes decision contexts. At hiring scale — thousands of candidates screened per role — even a small algorithmic bias in selection rates produces significant disparate impact across a hiring cycle.

The EEOC’s four-fifths rule provides a concrete benchmark: if the selection rate for any protected group is less than four-fifths (80%) of the rate for the group with the highest selection rate, adverse impact is indicated. Bias audits test against this threshold. For insight into how leading organizations are using AI tools specifically to counteract bias, see our case study on AI bias tools that improved diversity hiring outcomes.
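
The four-fifths check itself is simple arithmetic. The sketch below flags any group whose selection rate falls below 80% of the highest group’s rate; it is illustrative only — real adverse impact analysis also weighs statistical significance and sample size:

```python
def adverse_impact_indicated(selection_rates, threshold=0.8):
    """EEOC four-fifths rule: adverse impact is indicated when a
    group's selection rate is below `threshold` (80%) of the
    highest group's rate. `selection_rates` maps group -> rate."""
    top = max(selection_rates.values())
    return {g: rate / top < threshold for g, rate in selection_rates.items()}

# Hypothetical rates: 0.21 / 0.30 = 0.70, below the 0.80 threshold
flags = adverse_impact_indicated({"group_a": 0.30, "group_b": 0.21})
```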

Data Privacy: The Compliance Layer Under Every AI Regulation

AI hiring compliance does not exist in isolation. Every AI tool that processes candidate data also triggers data privacy obligations that compound the regulatory burden.

GDPR and EU AI Act: A Compound Obligation

EU candidates’ personal data used to train or operate AI hiring tools is subject to GDPR. This requires a lawful basis for processing (typically legitimate interest or contractual necessity), data minimization, purpose limitation, and candidate rights to access, correction, and erasure. The EU AI Act adds its own data governance requirements on top — documenting training data provenance and known biases. Satisfying both simultaneously requires coordination between HR, legal, and the AI vendor’s data team.

US Privacy: CCPA and State Equivalents

California’s CCPA, as amended by the CPRA, gives California residents rights over personal information used in hiring, including the right to know what data is collected, the right to opt out of certain uses, and the right to deletion. Several other US states have enacted similar frameworks. For a complete treatment of data privacy obligations in recruitment, see our guide to data privacy compliance in recruitment marketing.

Decision Matrix: Which Framework Applies to Your Team?

Use this to scope your compliance obligations based on where you hire and how you hire:

| Your Situation | Frameworks That Apply | Most Urgent Action |
|---|---|---|
| US-only employer, no NYC hiring | EEOC guidance + applicable state law | Conduct voluntary bias audit; document human override procedures |
| US employer hiring in NYC | EEOC guidance + NYC Local Law 144 | Commission independent bias audit; publish summary; implement candidate notice |
| EU employer or hiring EU residents | EU AI Act (high-risk) + GDPR | Obtain vendor conformity documentation; build human oversight procedures; establish candidate transparency process |
| Global employer (EU + US) | EU AI Act + GDPR + EEOC + NYC Local Law 144 (if NYC hiring) | Full AI tool inventory; jurisdiction-mapped compliance plan; vendor documentation review |
| US employer, Illinois or Maryland hiring | EEOC guidance + state-specific AI law | Verify video interview AI disclosure and consent compliance; review facial recognition policy |
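
The matrix above can be expressed as a rough scoping helper. This is a deliberate simplification for illustration — the function name and framework labels are shorthand, and actual scoping requires counsel review:

```python
def applicable_frameworks(hires_in_eu, hires_in_nyc, hires_in_us, states=()):
    """Rough first-pass scoping of applicable regimes, mirroring the
    decision matrix. Illustrative only -- not legal advice."""
    frameworks = []
    if hires_in_eu:
        frameworks += ["EU AI Act (high-risk)", "GDPR"]
    if hires_in_us:
        frameworks.append("EEOC guidance (Title VII / ADEA / ADA)")
    if hires_in_nyc:
        frameworks.append("NYC Local Law 144")
    if "IL" in states:
        frameworks.append("Illinois AIVIA")
    if "MD" in states:
        frameworks.append("Maryland facial-recognition consent law")
    return frameworks

# A global employer hiring in the EU, NYC, and Illinois:
print(applicable_frameworks(True, True, True, states=("IL",)))
```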

Choose the EU AI Act Approach If… / Choose the EEOC + State Approach If…

Adopt EU AI Act-level controls if: you hire EU residents anywhere in the world; you want the most defensible compliance posture globally; you are building a long-term AI hiring stack and want to minimize retrofit costs as US regulation converges toward EU standards; or your legal team has determined that the EU AI Act’s penalty exposure (up to 7% of global annual turnover) represents unacceptable risk.

Operate under EEOC + state framework if: you hire exclusively in the US, have no EU operations, and no NYC hiring — in which case federal guidance and applicable state law define your minimum obligation. Be aware this is a lower floor, not a safe harbor. Voluntary bias auditing remains the most effective risk reduction action even without a mandate.

The Practical Compliance Stack: What to Do Now

Regulatory complexity resolves into four concrete actions for HR operations teams:

  1. Inventory every AI tool in your hiring workflow. ATS ranking logic, resume screening filters, chatbot qualification flows, video interview analyzers, and predictive fit scores all count. Many HR teams do not know how many algorithmic decision points exist in their stack. This is where to start.
  2. Request vendor compliance documentation. For EU-facing tools: ask for conformity assessment documentation, training data governance records, and audit logs. For all tools: ask what bias testing the vendor has conducted and what the results showed. Vendors who cannot produce this documentation are a compliance liability.
  3. Commission an independent bias audit for any NYC-facing tool. Do not wait for enforcement. The audit process itself generates institutional knowledge about which tools are producing disparate impact — knowledge you need regardless of jurisdiction.
  4. Build human override documentation into every AI-assisted hiring stage. Document who reviews AI outputs, what criteria override an AI recommendation, and how those decisions are recorded. This documentation is your primary defense in both EEOC investigations and EU regulatory inquiries.
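
Step 4’s override documentation can be as simple as a structured record per reviewed decision. The field names below are illustrative, not drawn from any specific ATS — adapt them to your systems and retention policy:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class OverrideRecord:
    """One reviewed AI recommendation. Field names are illustrative;
    map them to your own ATS and retention schedule."""
    candidate_id: str
    tool_name: str
    ai_recommendation: str   # e.g. "reject", "advance"
    reviewer: str
    final_decision: str
    override: bool           # did the reviewer depart from the AI output?
    rationale: str           # criteria the reviewer applied
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

rec = OverrideRecord("cand-0042", "resume-ranker", "reject",
                     "j.doe", "advance", True,
                     "Relevant experience listed under a non-standard title")
print(asdict(rec))
```

Persisting records like this — whatever the storage — gives you the who/what/why trail that both EEOC investigations and EU regulatory inquiries will ask for.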

Ethical AI in recruitment involves more than compliance — it requires actively examining how AI changes recruiter judgment and candidate experience. Our detailed guide on ethical AI risks in recruitment covers the full operational picture. And when you are ready to assess the total value your AI hiring stack is actually delivering, our guide to measuring AI ROI in talent acquisition provides the framework.

AI regulation in HR is not a future problem. It is a present operational constraint that shapes what tools you can legally use, how you must deploy them, and what documentation you need when something goes wrong. The employers who treat compliance as a design input — not an afterthought — will build hiring operations that are both more legally defensible and more equitable. That alignment between compliance and quality is not a coincidence; it is the point.