
Post: AI Regulation in HR: The 2026 Compliance Framework Every HR Leader Needs
AI hiring regulations have moved from advisory to mandatory in multiple jurisdictions. HR teams that treat compliance as a legal department problem are already behind — the operational response requires automation infrastructure, not just policy documents.
Key Takeaways
- NYC Local Law 144, the EU AI Act (high-risk HR provisions apply from 2026), and the Colorado AI Act create overlapping compliance obligations
- Bias audit requirements now apply to any automated tool that ranks, screens, or scores candidates
- Transparency mandates require notifying candidates when AI is used in hiring — with specific disclosure formats
- Make.com OpsCare™ automates the log capture and reporting infrastructure that compliance requires
- HR teams that built automation infrastructure first are completing bias audits in days, not weeks
The Regulatory Landscape in 2026: What Has Actually Changed
Three frameworks now create binding obligations for HR teams using automated hiring tools. Understanding the overlap — and the gaps — determines where your compliance risk is concentrated.
New York City’s Local Law 144 was the first in the U.S. to mandate third-party bias audits for automated employment decision tools (AEDTs). It applies to any tool that substantially assists in hiring decisions for positions in NYC. Annual audits by a qualified third party are required, with audit summaries published publicly. The requirement has been in effect since July 2023 and enforcement actions began in 2024.
The EU AI Act categorizes employment AI as high-risk. High-risk AI systems must meet requirements for transparency (candidates must be informed), human oversight (humans must be able to intervene), accuracy and robustness standards, and data governance documentation. The high-risk obligations covering employment systems apply to new deployments from August 2026. HR compliance automation requires a workflow foundation before regulatory response procedures can be layered on top: OpsMap™ documents the workflow dependencies that compliance auditors will examine, and OpsMesh™ connects the regulatory reporting data flows.
Colorado’s AI Act (SB 205) applies to developers and deployers of high-risk AI systems in employment contexts, requiring impact assessments and disclosure of AI use to applicants. Maryland, Illinois, and Washington state have similar frameworks at various stages of enactment.
The Three Compliance Obligations That Require Operational Response
Bias Audit Requirements: What HR Operations Must Build
A bias audit is not a one-time assessment. It is an ongoing data collection and analysis operation that requires infrastructure. The audit evaluates whether your AI hiring tool produces statistically different outcomes across demographic groups — race, sex, and other protected categories.
The operational requirement is a pipeline that: captures algorithmic scoring data for every candidate, stores it with demographic fields where lawfully collected, runs periodic statistical analysis comparing selection rates across demographic groups, and generates the formatted output a third-party auditor can review. Make.com OpsCare™ automates this pipeline. Without it, the data collection alone requires 20-40 hours per audit cycle per tool.
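The statistical analysis step of that pipeline can be sketched as a selection-rate comparison across demographic groups, similar in spirit to the impact-ratio calculation NYC's rules reference. This is an illustrative sketch, not Make.com's implementation; the record schema and field names (`group`, `selected`) are assumptions for the example.

```python
from collections import defaultdict

def impact_ratios(records):
    """Compute each group's selection rate and its impact ratio
    relative to the highest-selected group.

    records: iterable of dicts with 'group' and 'selected' (bool) keys,
    one per scored candidate. Returns {group: (rate, impact_ratio)}.
    Assumes at least one group has a nonzero selection rate.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["selected"]:
            selected[r["group"]] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    top = max(rates.values())  # highest selection rate is the baseline
    return {g: (rates[g], rates[g] / top) for g in rates}

# Illustrative logs: group A selected at 50%, group B at 25%,
# so group B's impact ratio relative to A is 0.5.
logs = (
    [{"group": "A", "selected": True}] * 5
    + [{"group": "A", "selected": False}] * 5
    + [{"group": "B", "selected": True}] * 2
    + [{"group": "B", "selected": False}] * 6
)
```

An auditor-facing report would run this over the continuously captured scoring logs rather than a hand-assembled spreadsheet, which is where the time savings come from.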
Candidate Notification Requirements: What HR Communications Must Include
Multiple jurisdictions now require that candidates be notified when AI is used in hiring decisions. The specifics vary — NYC requires a career page disclosure, the EU AI Act requires individual notification — but the operational requirement is consistent: every application workflow that uses an AEDT needs a disclosure step.
Make.com automates the disclosure delivery: when a candidate applies through a workflow that includes an AI screening step, the automation triggers a disclosure notification at the appropriate point in the process. This eliminates manual tracking of which applications triggered AI tools.
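The disclosure gate can be sketched as a check over the steps in an application workflow. The step schema, the `aedt` flag, and the notification fields below are assumptions for illustration, not Make.com's API, and real disclosure text must follow each jurisdiction's required format.

```python
def needs_ai_disclosure(workflow_steps):
    """True if any step in the application workflow is an automated
    screening step (AEDT), which triggers a disclosure notification."""
    return any(step.get("aedt", False) for step in workflow_steps)

def disclosure_for(candidate_email, workflow_steps):
    """Return the notification to send, or None if no AEDT is involved."""
    if not needs_ai_disclosure(workflow_steps):
        return None
    return {
        "to": candidate_email,
        "subject": "Notice of automated tool use in your application",
    }

# Illustrative workflow: one AI screening step among manual steps.
workflow = [
    {"name": "application_form"},
    {"name": "resume_screen", "aedt": True},  # AI screening step
    {"name": "recruiter_review"},
]
```

Because the check runs per application, no one has to manually track which requisitions route through AI tooling.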
Human Oversight Requirements: What Workflow Architecture Must Include
Both the EU AI Act and emerging U.S. frameworks require that high-risk AI systems have effective human oversight mechanisms — the ability for a human to intervene, override, or review AI outputs. This is an architecture requirement, not just a policy statement.
The operational implementation: every AI screening output must route to a human review step before a final adverse action (rejection) is taken. The workflow must document that the review occurred, by whom, and when. Make.com builds this review routing automatically, with escalation triggers if review does not occur within the required window.
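The review-and-escalation loop can be sketched as a task queue with an SLA deadline. The 48-hour window, field names, and task shape below are assumptions for illustration, not a regulatory requirement or Make.com's schema.

```python
from datetime import datetime, timedelta

REVIEW_SLA = timedelta(hours=48)  # illustrative review window

def route_for_review(candidate_id, ai_output, now):
    """Queue an AI screening output for human review. The adverse
    action (rejection) stays blocked until a reviewer signs off."""
    return {
        "candidate_id": candidate_id,
        "ai_output": ai_output,
        "created": now,
        "deadline": now + REVIEW_SLA,
        "reviewed_by": None,   # filled in when a human reviews
        "reviewed_at": None,
    }

def overdue(tasks, now):
    """Unreviewed tasks past their deadline; these trigger escalation."""
    return [t for t in tasks
            if t["reviewed_by"] is None and now > t["deadline"]]

# Illustrative queue: one AI rejection awaiting human review.
t0 = datetime(2026, 1, 5, 9, 0)
queue = [route_for_review("cand-001", "reject", t0)]
```

Recording `reviewed_by` and `reviewed_at` on the task itself is what produces the documentation trail the oversight requirement asks for.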
The Comparison: Manual Compliance vs. Automated Compliance
| Compliance Task | Manual Approach | Automated (Make.com) | Time Difference |
|---|---|---|---|
| Bias audit data collection | 20-40 hrs per audit cycle | Continuous background capture | -95% |
| Candidate AI disclosure delivery | Requires manual tracking | Triggered automatically per application | -100% manual effort |
| Human review routing | Email-based ad hoc | Automated queue with SLA enforcement | -80% |
| Audit report generation | 2-3 weeks quarterly | On-demand from live data | -90% |
What HR Teams Get Wrong About AI Compliance
The most common mistake is treating AI compliance as a legal matter to be handled through policy and vendor agreements. A vendor's compliance certification does not transfer liability to the vendor: under every framework currently in force, the enterprise deploying the AI tool is the regulated party.
The second most common mistake is building compliance infrastructure after selecting and deploying AI tools, rather than building it first. The architecture decisions that make compliance manageable — centralized decision logging, automated disclosure workflows, documented human review checkpoints — are dramatically easier to build before deployment than to retrofit after.
Build the OpsBuild™ compliance workflow infrastructure first. Select and deploy AI tools into that infrastructure second. This sequence is what separates HR teams that complete bias audits in 3 days from teams that treat each audit as a fire drill.
Expert Take
The framing that troubles me most is “we’ll handle compliance when regulators start enforcing.” NYC started enforcement actions in 2024. The EU AI Act high-risk provisions are active. The teams I see struggling with compliance are not struggling because the regulations are unclear — they’re struggling because they deployed AI without building the data capture infrastructure first. You cannot audit data that was never logged. Build the logging before you build the model.
Frequently Asked Questions
Does Local Law 144 apply to companies outside NYC?
Yes, if the company employs workers in NYC or considers applicants for positions based in NYC. A company headquartered in another state that has NYC-based roles — including remote roles where the employee works in NYC — is subject to Local Law 144 requirements if it uses AEDTs in its hiring process.
What counts as an “automated employment decision tool” for regulatory purposes?
Under NYC Local Law 144, an AEDT is any computational process that substantially assists or replaces discretionary decision-making in employment decisions. Resume screening tools, ranking algorithms, and video interview analysis tools have all been identified as AEDTs in regulatory guidance. If a tool scores, ranks, or filters candidates algorithmically, treat it as an AEDT for compliance purposes.
How often do bias audits need to be conducted?
NYC Local Law 144 requires annual audits. The EU AI Act requires ongoing monitoring rather than annual snapshots, with documentation maintained continuously. Best practice is quarterly disparity analysis with automated Make.com pipelines generating the data continuously, feeding into annual formal audits.