
Post: AI Recruitment Myths HR Leaders Keep Believing — And Why They’re Costly
The myths around AI in recruitment have two origins: vendors overselling capabilities, and critics overclaiming harms. Both distort decision-making. HR leaders navigating this space need a clearer map — not reassurance, not alarm, but evidence.
Key Takeaways
- AI does not eliminate bias — it systematizes it. The data you train on determines the bias you get.
- “AI screening is objective” is the most dangerous myth in HR technology today.
- "Automation first" means fixing your process and its data before layering AI on top.
- The fairness benefits of AI in hiring are real but conditional — they require active oversight, not passive deployment.
- Make.com workflows can enforce consistent process without AI — often that is enough.
Is AI Hiring Actually More Objective Than Human Hiring?
Only if your historical hiring data reflects the outcomes you want to replicate. If your top performers over the last decade skew toward a particular demographic, AI trained on those outcomes will replicate that skew. The objectivity argument assumes your training data is neutral. It is never neutral. This is why our diversity and inclusion work emphasizes explainable AI — you need to be able to audit what the system is actually selecting for.
Expert Take
The myth I find most damaging is not the bias myth — it is the myth that AI reduces recruiter workload across the board. In my experience, AI screening tools reduce workload for high-volume, low-complexity roles and dramatically increase it for complex roles where the AI makes confident wrong calls that someone has to catch and correct. HR teams that deployed AI screening without auditing false negative rates found themselves explaining to candidates why qualified people were rejected — and explaining to legal why the rejection patterns looked the way they did. Audit before you automate.
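The false-negative audit described above can be sketched as a simple comparison between the tool's rejections and a human-reviewed sample. This is a minimal illustration, not any vendor's API; the field names (`qualified`, `ai_rejected`) and the records are hypothetical.

```python
# Sketch: estimate a screening tool's false negative rate against a
# human-reviewed sample. "qualified" labels come from manual review;
# "ai_rejected" flags come from the tool. Data below is illustrative.

def false_negative_rate(records: list[dict]) -> float:
    """Share of human-qualified candidates the tool rejected."""
    qualified = [r for r in records if r["qualified"]]
    rejected = [r for r in qualified if r["ai_rejected"]]
    return len(rejected) / len(qualified)

fnr = false_negative_rate([
    {"qualified": True, "ai_rejected": False},
    {"qualified": True, "ai_rejected": True},
    {"qualified": False, "ai_rejected": True},
])
# 1 of 2 human-qualified candidates was rejected by the tool -> 0.5
```

Running this audit on a sample of manually reviewed applications, per role type, is what surfaces the "confident wrong calls" before candidates and legal do.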
Does AI Improve Diversity Outcomes in Hiring?
In controlled conditions, yes. In production, inconsistently. The gap between controlled and production performance is where the myths live. Structured AI screening with diverse training data, regular audits, and explicit fairness constraints can meaningfully reduce human bias at the screening stage. Unstructured AI screening applied to historically biased data can make existing disparities worse and harder to detect.
What to Do Differently
Before deploying any AI screening tool, audit your last three years of hiring data for demographic patterns. If the patterns you find are ones you would not defend publicly, do not train an AI on that data. Build your screening process in Make.com first — consistent, rule-based, auditable — and add AI only where human judgment is genuinely the bottleneck, not where you want to move faster.
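A "rule-based, auditable" screen of the kind you would build in a Make.com filter can be expressed as a plain function where every rejection carries a named, job-related reason. This is a hedged sketch: the criteria and field names (`years_experience`, `has_certification`) are hypothetical examples, not a recommended rubric.

```python
# Sketch of a rule-based, auditable screening pass with no AI involved.
# Field names and thresholds are illustrative placeholders.

def rule_based_screen(candidate: dict, min_years: int = 3) -> tuple[bool, list[str]]:
    """Apply explicit, job-related rules; return (passed, rejection_reasons).

    Every rejection carries a named reason, so the process can be audited."""
    reasons = []
    if candidate.get("years_experience", 0) < min_years:
        reasons.append(f"fewer than {min_years} years of relevant experience")
    if not candidate.get("has_certification", False):
        reasons.append("missing required certification")
    return (len(reasons) == 0, reasons)

passed, reasons = rule_based_screen(
    {"years_experience": 5, "has_certification": True}
)
# passes: both explicit rules are satisfied, so reasons is empty
```

The design point is the reasons list: a rule-based screen that logs why each candidate was rejected gives you the audit trail that an opaque AI score does not.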
Frequently Asked Questions
What is the biggest legal risk of AI hiring tools?
Disparate impact — when an AI screening tool disproportionately screens out protected class candidates at a rate that cannot be justified by job-related criteria. This is increasingly the focus of EEOC guidance.
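A common first screen for disparate impact is the EEOC's "four-fifths" (80%) rule of thumb: a group whose selection rate falls below 80% of the highest group's rate is flagged for review. A minimal sketch, using illustrative counts rather than real data:

```python
# Sketch of the four-fifths (80%) rule of thumb for adverse impact.
# A group's selection rate below 80% of the top group's rate is a flag,
# not a legal conclusion. Counts below are illustrative only.

def selection_rates(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """groups maps name -> (selected, applied); returns selection rates."""
    return {g: sel / applied for g, (sel, applied) in groups.items()}

def four_fifths_flags(groups: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """True for any group whose rate is below 80% of the highest rate."""
    rates = selection_rates(groups)
    top = max(rates.values())
    return {g: (r / top) < 0.8 for g, r in rates.items()}

flags = four_fifths_flags({"group_a": (50, 100), "group_b": (30, 100)})
# group_b: 0.30 vs top rate 0.50 -> ratio 0.60, below 0.8 -> flagged
```

A flag under the four-fifths rule does not prove disparate impact; it tells you which selection rates need a job-relatedness justification.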
How do we audit an AI hiring tool for bias?
Run your existing qualified candidate pool through the tool’s screening criteria and measure pass rates by demographic group. Any statistically significant disparity warrants investigation before deployment.
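One standard way to test whether a pass-rate gap between two groups is statistically significant is a two-proportion z-test. The sketch below uses only the standard library and illustrative counts; for small samples, an exact test (e.g. Fisher's) is more appropriate than this normal approximation.

```python
import math

# Sketch: two-proportion z-test on screening pass rates for two groups.
# Counts are illustrative. Normal approximation; use an exact test for
# small samples.

def two_proportion_z(pass_a: int, n_a: int, pass_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for the pass-rate gap."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    pooled = (pass_a + pass_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

z, p = two_proportion_z(160, 200, 120, 200)  # 80% vs 60% pass rates
```

With these illustrative counts the gap is highly significant, which is exactly the kind of result that warrants investigation before deployment; a non-significant result on a small sample is not a clean bill of health.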