
How to Build EU AI Act Compliance Into Your HR Automation Stack: A Step-by-Step Guide
EU AI Act compliance for HR teams is not a legal review exercise — it’s an infrastructure build. The regulation requires documentation, oversight mechanisms, and monitoring systems that don’t exist in most HR stacks today. This guide walks through the exact steps to build compliant HR automation, using Make.com™ as the orchestration layer.
Before You Start
Gather these inputs before building anything:
- A complete inventory of every AI tool in your HR stack — resume parsers, interview analysis tools, performance scoring systems, scheduling AI, productivity monitoring tools
- The vendor documentation for each tool, including any EU AI Act conformity documentation or CE marking they hold
- Your current data processing agreements with each AI vendor
- A list of the EU member states where you operate (determines which national authorities apply)
- Access to your ATS, HRIS, and Make.com™ account
For the complete list of requirements you’re building toward, see 11 EU AI Act Requirements Every HR Leader Must Know in 2026. For the EEOC parallel requirements in US operations, see 9 EEOC AI Compliance Requirements HR Teams Must Meet in 2026.
Step 1: Classify Your HR AI Tools as High-Risk or Not
The EU AI Act’s most demanding compliance obligations apply to “high-risk” AI systems. For HR, the high-risk classification covers AI used in: recruitment and selection, employment-related decisions, access to self-employment, task allocation, performance monitoring and evaluation, and promotion or termination decisions.
Work through your AI tool inventory against this list. Be conservative — if a tool influences any of these decision types, classify it as high-risk. Under-classifying creates non-compliance exposure; over-classifying creates extra work, not liability.
Document your classification decision for each tool, including your reasoning. This documentation is itself a compliance artifact — it shows you conducted a structured assessment, not a casual determination.
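The conservative classification rule and its documentation requirement can be sketched as a small function that produces one record per tool. A minimal sketch, assuming nothing beyond the guide itself: the category names mirror the high-risk uses listed above, and the tool name and reasoning are hypothetical examples.

```python
# Illustrative sketch of the Step 1 classification rule. Category names
# mirror the guide's high-risk list; the tool entry is hypothetical.
HIGH_RISK_USES = {
    "recruitment_selection", "employment_decisions", "self_employment_access",
    "task_allocation", "performance_monitoring", "promotion_termination",
}

def classify(tool_name, influenced_decisions, reasoning):
    """Return a documented classification record for one AI tool."""
    # Conservative rule: any overlap with a high-risk use -> high-risk.
    high_risk = any(use in HIGH_RISK_USES for use in influenced_decisions)
    return {
        "tool": tool_name,
        "classification": "high-risk" if high_risk else "not high-risk",
        "influenced_decisions": sorted(influenced_decisions),
        "reasoning": reasoning,  # stored as a compliance artifact
    }

record = classify(
    "ResumeParser X",  # hypothetical tool
    {"recruitment_selection"},
    "Filters applications before human review; influences shortlist.",
)
```

The point of returning a full record, reasoning included, is that the record itself is the compliance artifact the step calls for.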
Step 2: Audit Your Vendor Documentation
For each high-risk AI tool, collect and verify:
- Technical documentation — the vendor’s documentation of the system’s design, training data, performance characteristics, and limitations
- Conformity assessment — evidence of EU AI Act conformity review, CE marking (for systems requiring it), or third-party assessment
- EU database registration — verify the vendor’s system is registered in the EU AI Act database for high-risk systems
- Data processing agreement — confirm your existing DPA covers EU AI Act data governance requirements, not just GDPR
Create a vendor documentation tracker — a spreadsheet or Airtable record with one row per tool, columns for each documentation type, and status/date fields. This tracker is your ongoing compliance evidence that you’ve performed and maintained vendor due diligence.
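The tracker row described above can be sketched as a typed record. This is a hypothetical schema, not a prescribed format; the field names simply follow the four documentation types listed in this step.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical schema for one row of the Step 2 vendor documentation
# tracker. One instance per high-risk tool.
@dataclass
class VendorDocRow:
    tool: str
    technical_doc_status: str = "missing"    # missing | requested | on_file
    conformity_status: str = "missing"       # CE marking / assessment evidence
    eu_db_registration: str = "unverified"   # unverified | verified
    dpa_covers_ai_act: bool = False          # DPA updated beyond GDPR scope
    last_reviewed: Optional[date] = None

row = VendorDocRow(tool="InterviewAI Y",  # hypothetical vendor
                   technical_doc_status="on_file",
                   last_reviewed=date(2026, 1, 15))
```

The status and date fields are what make the tracker ongoing evidence: a row that was never re-reviewed is itself a visible gap.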
Step 3: Map Every AI-Influenced Decision Point
For each high-risk AI tool, map every decision point where AI output influences a human decision. This is your OpsMap™ for compliance — a complete picture of where AI touches your HR processes before you build any automation.
For a resume parser: decision points include initial screen pass/fail, shortlist determination, and interview invitation. For an interview analysis tool: decision points include interview score weighting, comparison ranking, and advancement to offer. For a performance management AI: decision points include performance rating inputs, promotion eligibility scoring, and PIP initiation criteria.
Document each decision point: the AI tool involved, the human roles who review AI output, the current documentation practice, and the gap between current practice and EU AI Act requirements. This gap analysis drives your build plan.
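The gap analysis at the end of this step is mechanical once the map exists: filter for every decision point where current practice falls short of the required practice. A sketch with hypothetical decision points and field names:

```python
# Hypothetical decision-point map for Step 3; the fields follow the
# guide's list (tool, decision, reviewer role, current practice, gap).
decision_points = [
    {"tool": "ResumeParser X", "decision": "initial screen pass/fail",
     "human_reviewer_role": "recruiter",
     "current_practice": "informal", "required_practice": "documented review"},
    {"tool": "ResumeParser X", "decision": "shortlist determination",
     "human_reviewer_role": "hiring manager",
     "current_practice": "documented review",
     "required_practice": "documented review"},
]

# The gap analysis that drives the build plan: every point where
# current practice does not meet the required practice.
gaps = [p for p in decision_points
        if p["current_practice"] != p["required_practice"]]
```

Each entry in `gaps` becomes one enforcement build in Step 4.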
Step 4: Build Human Oversight Enforcement Into Each Decision Point
For every AI-influenced decision point identified in Step 3, build an enforcement mechanism that requires documented human review before the decision is recorded. This is the core compliance build — and it’s where Make.com™ does the work.
The pattern for each decision point:
- AI tool produces output (score, recommendation, flag)
- Make.com™ scenario receives the output via webhook or API
- Scenario routes to a human review task in your ATS or project management tool
- Human review task requires a structured input: reviewer identity, review date, decision made, and whether the decision aligned with or overrode the AI recommendation
- Completed review record is logged with the candidate/employee record
- Scenario advances only after review completion is confirmed
Build this pattern once for each decision point. The enforcement is structural — the workflow cannot advance without the review. Individual humans don’t need to remember the compliance requirement; the system enforces it.
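The structural enforcement described above can be sketched in a few lines: the decision record simply cannot be written without a complete, structured review attached. In practice the routing happens inside a Make.com™ scenario; the function and field names here are illustrative.

```python
# Minimal sketch of the Step 4 gate. The required fields mirror the
# structured review input the guide lists: reviewer identity, review
# date, decision made, and whether the AI recommendation was overridden.
REQUIRED_REVIEW_FIELDS = {"reviewer", "review_date", "decision", "overrode_ai"}

def record_decision(ai_output, review):
    """Write a decision record only if the human review is complete."""
    missing = REQUIRED_REVIEW_FIELDS - set(review)
    if missing:
        # Structural enforcement: refuse to advance without the review.
        raise ValueError(f"review incomplete, missing: {sorted(missing)}")
    return {"ai_output": ai_output, "review": review, "status": "recorded"}

ok = record_decision(
    {"score": 0.82, "recommendation": "advance"},   # AI tool output
    {"reviewer": "j.doe", "review_date": "2026-02-01",
     "decision": "advance", "overrode_ai": False},  # structured review
)
```

The design choice worth noting: the gate raises rather than warns, so no individual has to remember the requirement, which is exactly the property the step describes.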
Step 5: Build Candidate and Worker Disclosure Workflows
Disclosure must be systematic and documented. Build two workflows:
Candidate disclosure workflow: Embed AI disclosure language in your application form at the point where AI tools are first used. The application system logs the disclosure presentation with a timestamp and the candidate’s application ID. The log entry writes to the candidate record in your ATS via Make.com™. No manual step required; every applicant receives documented disclosure.
Worker disclosure workflow: When a new AI tool is deployed in workforce management, a Make.com™ scenario generates the disclosure notice from the approved template, delivers it to each affected employee via your HRIS notification system, and logs confirmation of delivery. Non-confirmations after 48 hours trigger a follow-up. Completed confirmations are stored in the employee record.
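The 48-hour follow-up rule in the worker workflow reduces to a single time comparison. A sketch, assuming only the rule as stated; timestamps are illustrative.

```python
from datetime import datetime, timedelta

# Sketch of the Step 5 follow-up rule: a delivered disclosure that is
# still unconfirmed after 48 hours triggers a follow-up notice.
FOLLOW_UP_AFTER = timedelta(hours=48)

def needs_follow_up(sent_at, confirmed, now):
    """True when a follow-up should fire for this employee's disclosure."""
    return (not confirmed) and (now - sent_at) >= FOLLOW_UP_AFTER

now = datetime(2026, 3, 4, 9, 0)
overdue = needs_follow_up(datetime(2026, 3, 1, 9, 0), False, now)
recent = needs_follow_up(datetime(2026, 3, 3, 12, 0), False, now)
```

In a Make.com™ scenario this comparison would run on a schedule against the stored delivery log rather than in application code.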
For the document templates these workflows use, see 13 HR Document Workflows You Should Automate in 2026 for the full document automation framework.
Step 6: Implement Adverse Impact Monitoring
Adverse impact analysis serves double duty: it satisfies EU AI Act accuracy and robustness monitoring requirements AND EEOC adverse impact analysis requirements. Build it once for both.
The data collection setup: Ensure your ATS logs demographic data (where legally collected and appropriate for your jurisdiction) alongside stage progression data. Every AI-influenced decision point from Step 3 needs a corresponding data capture in your ATS — the AI output, the human decision, and the candidate/employee record it applies to.
The analysis automation: A monthly Make.com™ scenario pulls the prior month’s decision data from your ATS, calculates selection rates by demographic group at each AI-influenced stage, and generates a structured report. The report routes to the compliance owner for review. The review action is logged. The report and log are retained in your compliance record system.
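The selection-rate calculation in the monthly report can be sketched directly. The counts below are hypothetical, and the four-fifths (80%) ratio shown as a flag threshold is the common EEOC benchmark for potential adverse impact, offered here as an assumption, not an EU AI Act rule.

```python
# Sketch of the Step 6 monthly analysis: selection rates by group at
# one AI-influenced stage, plus an impact-ratio flag. Counts are made up.
def selection_rates(stage_data):
    """stage_data: {group: {"advanced": int, "total": int}}"""
    return {g: d["advanced"] / d["total"] for g, d in stage_data.items()}

def impact_ratios(rates):
    """Each group's rate relative to the highest-rate group."""
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

data = {"group_a": {"advanced": 40, "total": 100},
        "group_b": {"advanced": 24, "total": 100}}
rates = selection_rates(data)
ratios = impact_ratios(rates)
flags = [g for g, r in ratios.items() if r < 0.8]  # four-fifths benchmark
```

Running this per stage, per month, and routing the result to the compliance owner is what the Make.com™ scenario automates.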
Step 7: Set Up Record-Keeping and Retention
The EU AI Act requires automated logging of AI system operation. Your Make.com™ scenario execution logs are the primary record — they capture inputs, outputs, routing decisions, and timestamps for every compliance workflow run. The gap in most implementations: Make.com™’s own log retention is limited. You need to export logs to a longer-retention system.
The log export setup: A daily Make.com™ scenario exports the prior day’s compliance-related scenario execution logs to a Google Sheet or Airtable base with unlimited retention. Each log entry includes: scenario name, execution timestamp, trigger data, outputs, any errors. This becomes your queryable compliance log for audit and investigation purposes.
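The daily export is essentially a flattening job: nested execution data becomes one row per run in the retention sheet. A sketch assuming the five fields listed above; the log entry and the fetch step that would precede it (a call to Make.com™'s API) are illustrative.

```python
import csv
import json
from io import StringIO

# Sketch of the Step 7 daily export: execution logs flattened into CSV
# rows for a long-retention sheet. Field names follow the guide's list.
FIELDS = ["scenario", "executed_at", "trigger_data", "outputs", "errors"]

def export_logs(logs, out):
    """Write log entries as CSV rows; nested data is JSON-encoded."""
    writer = csv.DictWriter(out, fieldnames=FIELDS)
    writer.writeheader()
    for entry in logs:
        row = dict(entry)
        row["trigger_data"] = json.dumps(entry["trigger_data"])
        row["outputs"] = json.dumps(entry["outputs"])
        writer.writerow(row)

buf = StringIO()
export_logs([{"scenario": "oversight_gate",          # hypothetical entry
              "executed_at": "2026-03-03T10:00:00Z",
              "trigger_data": {"candidate_id": 123},
              "outputs": {"status": "recorded"},
              "errors": ""}], buf)
```

JSON-encoding the nested fields keeps the sheet queryable without losing the original structure.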
Set your retention period based on the longer of: EU AI Act requirements, EEOC record retention requirements, and any applicable national law retention requirements for your EU member states of operation.
Step 8: Build the Post-Market Monitoring Process
Post-market monitoring is an ongoing operational requirement, not a one-time setup. Build three recurring processes:
Monthly: Adverse impact analysis report (from Step 6 automation). Compliance workflow exception review — any scenarios that errored or were bypassed. Log completeness check.
Quarterly: Accuracy spot-check — pull 30 random AI tool outputs from the quarter, compare to actual outcomes where available, assess for drift. Vendor documentation currency check — any vendor updates to their technical documentation or conformity status.
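The quarterly spot-check's sampling step is worth pinning down so the draw is reproducible for auditors. A sketch under one assumption of mine: seeding the random draw, which is not stated in the guide but makes the sample re-derivable.

```python
import random

# Sketch of the Step 8 quarterly spot-check draw: a fixed-size random
# sample of the quarter's AI outputs for manual outcome comparison.
def draw_spot_check_sample(quarter_outputs, n=30, seed=None):
    """Sample without replacement; a recorded seed makes it auditable."""
    rng = random.Random(seed)
    k = min(n, len(quarter_outputs))
    return rng.sample(quarter_outputs, k)

# Hypothetical quarter: 500 logged AI outputs, sample of 30.
sample = draw_spot_check_sample([{"id": i} for i in range(500)], seed=2026)
```

Recording the seed alongside the review record lets a later audit reproduce exactly which 30 outputs were checked.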
Annual: Full compliance review — adverse impact trends, vendor re-assessment, regulatory guidance updates, workflow effectiveness evaluation, and documentation of findings and any remediation actions taken.
Each review produces a structured record stored in your compliance documentation system. The record demonstrates that monitoring happened — not just that a process existed.
How to Know It Worked
Your EU AI Act compliance infrastructure is functioning when:
- Every high-risk AI tool has current technical documentation on file, reviewed within the last 12 months
- Every AI-influenced decision in your ATS and HRIS has a corresponding human review record
- Every candidate and worker affected by AI tools has a documented disclosure record
- Monthly adverse impact reports are running, reviewed, and filed
- Your Make.com™ compliance logs are exporting daily to long-retention storage
- Annual review is scheduled, with an assigned owner and documented completion
Common Mistakes
Treating disclosure as a one-time form update. Disclosure is an ongoing process tied to every hiring cycle and every new AI tool deployment. Build the workflow; don’t just update a form.
Assuming vendor CE marking covers your compliance. Vendor conformity covers the system the vendor built. Your deployment configuration, your data inputs, and your oversight processes are your responsibility. CE marking is a starting point, not the finish line.
Building monitoring without assigned ownership. Monitoring processes that aren’t owned by a specific person with a specific calendar commitment degrade within 90 days. Assign the compliance owner before go-live.
Logging without exporting. Make.com™ execution logs are not a compliance archive. Export to a retention system with appropriate access controls and retention enforcement from day one.