60% Faster Reference Checks with Make.com™ and Keap: How a Staffing Firm Eliminated Phone Tag
Reference checks sit at one of the most consequential handoffs in the entire hiring pipeline: the gap between a candidate you want and an offer you can confidently extend. Done manually, that gap stretches into days of voicemail, inconsistent notes, and recruiter time that should be spent on relationships. Done with a properly architected automation sequence connecting Make.com™ to Keap, that gap compresses to hours — with better data on the other side. This case study documents exactly how that transformation happened, and the workflow decisions that made it repeatable. For context on where reference checks fit in the broader recruiting sequence, see our complete guide to recruiting automation with Make.com™ and Keap.
Engagement Snapshot
| Item | Detail |
|---|---|
| Organization | TalentEdge — 45-person recruiting firm, 12 active recruiters |
| Starting Condition | 100% manual reference process: phone calls, email follow-up, handwritten or scattered notes, no structured data in Keap |
| Key Constraint | Reference checks were one of nine automation opportunities identified during an OpsMap™ audit; implementation had to fit within an OpsSprint™ engagement without disrupting active placements |
| Approach | Keap stage-change trigger → Make.com™ candidate intake → iterator-based referee survey dispatch → timed follow-up logic → structured data write-back to Keap → pipeline advancement trigger |
| Reference Cycle Time | 5–7 business days → under 48 hours (average) |
| Recruiter Touches per Check | 6–7 manual contacts → 1 exception alert only |
| Broader Program Outcome | $312,000 in annualized savings across all nine automation workflows; 207% ROI in 12 months |
Context and Baseline: What Was Actually Happening
Before automation, TalentEdge’s reference check process had no workflow — it had a habit. When a candidate cleared final interviews, a recruiter sent a personal email asking for referee contact details, waited, followed up, eventually received contacts, called or emailed each referee individually, and then transcribed whatever feedback emerged into personal notes or a shared document that may or may not have made it into Keap.
The recruiter was touching each reference check six to seven times before it closed. Across 12 recruiters running multiple active candidates, that volume compounded into a substantial weekly capacity drain. SHRM data consistently identifies time-to-hire as a top competitive differentiator in tight talent markets, and Gartner research confirms that process delays between interview completion and offer extension are among the highest-risk dropout windows for strong candidates. TalentEdge was sitting inside that risk window for five to seven business days on every hire.
The data problem was equally significant. Because each recruiter conducted references differently — some by phone, some by email, some with written questions, some conversational — the resulting feedback couldn’t be compared across candidates or analyzed over time. There was no way to identify patterns in what top hires’ references said versus candidates who underperformed. The process produced effort but not intelligence.
McKinsey Global Institute research on process automation consistently identifies unstructured, repetitive coordination tasks — exactly what manual reference checking is — as among the highest-ROI targets for workflow automation. The TalentEdge baseline confirmed that assessment precisely.
Approach: Designing the Logic Before Building the Scenario
The OpsMap™ audit identified reference checks as one of nine automation opportunities. The design principle applied here was the same one governing every workflow in the broader program: map the deterministic steps first, identify every decision point and edge case, then build. Attempting to build while still discovering edge cases produces scenarios that work in demos and fail in production.
Four logical blocks were defined before a single Make.com™ module was configured:
- Candidate Intake Block — Triggered by a Keap stage change, this block sends the candidate a personalized message requesting referee names, titles, companies, and contact details through a structured intake form. Responses write back to dedicated Keap custom fields via webhook.
- Referee Dispatch Block — An iterator in Make.com™ loops through each referee record individually, generating a personalized survey link pre-populated with candidate name and position applied for, and dispatching it via email. Each referee receives a unique submission instance tracked separately.
- Response Monitoring Block — A scheduled Make.com™ scenario checks for outstanding referee responses at 48 and 96 hours. If no submission is recorded, a reminder is dispatched automatically. After the second reminder without response, the recruiter receives a Keap task — the only moment human intervention is required.
- Data Consolidation Block — As each referee submits, Make.com™ parses the structured response fields and writes them into the candidate’s Keap contact record as tagged custom field data. Once all referees respond, Make.com™ triggers the next Keap pipeline stage and notifies the hiring manager.
The question set itself was standardized across all candidates: eight structured questions covering work quality, dependability, collaboration, role-specific competencies, and one open-ended overall assessment prompt. Standardization was non-negotiable — without it, the data consolidation block produces fields that can’t be compared across candidates.
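The dispatch and monitoring blocks reduce to deterministic logic that Make.com™ modules express visually. A minimal Python sketch of that same logic, with illustrative names (the `Referee` record, the survey URL, and the action strings are assumptions for illustration, not Make.com™ or Keap identifiers):

```python
from dataclasses import dataclass
from uuid import uuid4

@dataclass
class Referee:
    name: str
    email: str
    survey_link: str = ""
    submitted: bool = False
    hours_since_dispatch: float = 0.0
    reminders_sent: int = 0

def dispatch_surveys(candidate: str, position: str, referees: list) -> None:
    """Referee Dispatch Block: one unique, pre-populated link per referee."""
    for ref in referees:  # mirrors the Make.com iterator module
        token = uuid4().hex  # unique submission instance, tracked separately
        ref.survey_link = (
            f"https://survey.example.com/ref/{token}"
            f"?candidate={candidate}&position={position}"
        )
        # In production this step is an email-send module, not an assignment.

def monitoring_action(ref: Referee) -> str:
    """Response Monitoring Block: reminders at 48h and 96h, then escalate."""
    if ref.submitted:
        return "none"
    if ref.reminders_sent >= 2:
        return "create_recruiter_task"  # the single human touchpoint
    if ref.hours_since_dispatch >= 96 and ref.reminders_sent == 1:
        return "send_reminder"
    if ref.hours_since_dispatch >= 48 and ref.reminders_sent == 0:
        return "send_reminder"
    return "none"
```

The point of the sketch is the determinism: every branch returns exactly one action, which is what makes the scheduled scenario safe to re-run without double-sending reminders.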
Implementation: Build Decisions That Determined the Outcome
The implementation happened inside a single OpsSprint™ engagement, with clear boundaries between what was built, tested, and handed off.
Keap Configuration
Custom fields were created in Keap for each referee slot (up to three per candidate) and for each of the eight survey response dimensions. A dedicated Keap pipeline stage — “References In Progress” — was established between “Final Interview Complete” and “Offer Ready” to give the workflow a clean trigger and a clear completion target. Webhook-driven Keap automation fired the initial Make.com™ trigger the moment a recruiter moved a candidate into the new stage.
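The trigger condition itself is a single filter: fire only when a candidate enters the new stage. Sketched in Python with an illustrative payload shape (Keap's real webhook body differs, but the filtering decision is the same):

```python
def should_start_reference_flow(payload: dict) -> bool:
    """Start the candidate intake scenario only on entry to the
    'References In Progress' stage. Event and key names here are
    assumptions for illustration, not Keap's actual webhook schema."""
    return (
        payload.get("event") == "pipeline.stage_changed"
        and payload.get("new_stage") == "References In Progress"
    )
```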
Make.com™ Scenario Architecture
Three separate Make.com™ scenarios handled the four logical blocks, keeping each scenario’s scope narrow enough to debug independently. Combining intake, dispatch, monitoring, and consolidation into a single scenario would have created a fragile, difficult-to-maintain flow. Scenario separation also allowed each block to be updated without risking the others.
Error handling was built into every scenario from the start — not added as an afterthought. A dedicated error route on the referee dispatch block caught cases where a referee email address was malformed or the survey platform returned a non-200 response. Instead of silently failing, these errors created a Keap task alerting the recruiter to the specific referee record that needed attention. This approach aligns with best practices documented in our guide to common Make.com™ and Keap integration errors.
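The error route is plain control flow: both failure modes resolve to the same visible outcome, a Keap task naming the specific referee record. A sketch, where `send_survey` is a hypothetical stand-in for the survey platform's HTTP call (not a real Make.com™ or Keap API):

```python
import re

# Deliberately simple address check; the real route also catches
# delivery failures reported back by the survey platform.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def dispatch_with_error_route(referee_email: str, send_survey) -> dict:
    """Malformed addresses and non-200 responses become a recruiter
    task instead of a silent failure."""
    if not EMAIL_RE.match(referee_email):
        return {"action": "create_keap_task",
                "reason": f"malformed email: {referee_email!r}"}
    status = send_survey(referee_email)  # returns an HTTP status code
    if status != 200:
        return {"action": "create_keap_task",
                "reason": f"survey platform returned {status}"}
    return {"action": "dispatched"}
```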
Survey Design Constraints
The survey was kept deliberately short — eight questions, estimated four-minute completion — to maximize referee response rates. Each question mapped to a specific Keap custom field by name, eliminating any ambiguity in the data write-back. The survey included explicit field validation to prevent blank or malformed submissions from creating null values in Keap.
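The question-to-field mapping and the blank-answer guard can be sketched together. The mapping below is abridged to three of the eight questions, and the field names are illustrative; in the live build they were locked against the actual Keap custom-field schema:

```python
# Illustrative mapping, abridged to three of the eight questions.
QUESTION_TO_FIELD = {
    "q1_work_quality": "Ref_WorkQuality",
    "q2_dependability": "Ref_Dependability",
    "q3_collaboration": "Ref_Collaboration",
}

def validate_submission(responses: dict) -> dict:
    """Reject blank answers before they can write null values into Keap,
    then rename survey keys to their mapped Keap custom fields."""
    missing = [q for q in QUESTION_TO_FIELD if not responses.get(q)]
    if missing:
        raise ValueError(f"blank answers would create Keap nulls: {missing}")
    return {QUESTION_TO_FIELD[q]: responses[q] for q in QUESTION_TO_FIELD}
```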
Testing Protocol
The workflow was tested against five distinct scenarios before going live: standard three-referee completion, partial completion with one non-responder triggering reminders, malformed email address triggering error route, candidate providing only two referees, and a referee submitting on a mobile device with session interruption. All five scenarios passed before any live candidate records were processed.
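Several of these scenarios exercise the same pass criterion: the pipeline advances only when every provided referee has responded, whether the candidate supplied two or three. That completion gate, sketched with illustrative names:

```python
def consolidation_action(submitted: list) -> str:
    """Data Consolidation Block gate: advance the Keap stage and notify
    the hiring manager only once every provided referee has responded.
    Action names are illustrative, not Keap identifiers."""
    if submitted and all(submitted):
        return "advance_stage_and_notify"
    return "wait"

# Data-driven checks mirroring the completion-logic test cases
# (the error-route and mobile cases exercise other scenarios):
assert consolidation_action([True, True, True]) == "advance_stage_and_notify"
assert consolidation_action([True, True]) == "advance_stage_and_notify"  # two-referee candidate
assert consolidation_action([True, False, True]) == "wait"  # one non-responder
```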
Results: Before and After
After the workflow went live across TalentEdge’s 12-recruiter team, the operational numbers shifted measurably within the first 30 days.
| Metric | Before | After |
|---|---|---|
| Average reference cycle time | 5–7 business days | <48 hours |
| Recruiter touches per reference check | 6–7 | 1 (exception only) |
| Structured data in Keap post-reference | None | 8 scored fields per referee |
| Feedback consistency across candidates | None (unstructured) | 100% standardized |
| Recruiter capacity recovered per week | — | Estimated 3–4 hrs/recruiter |
The reference check workflow contributed to TalentEdge’s broader program outcome: $312,000 in annualized savings and a 207% ROI across all nine automation workflows identified in the OpsMap™ audit. Reference checks were one input into that total — not the entire result — but their elimination as a recruiter time-sink was among the fastest wins to materialize.
Harvard Business Review research on hiring process quality reinforces a secondary benefit: standardized reference data improves offer confidence and reduces the probability of a mis-hire. Forbes composite data places the direct cost of an unfilled position at a minimum of $4,129, a cost that compounds when a bad hire also requires eventual backfill. Parseur’s manual data entry research adds another dimension: at an estimated $28,500 per employee per year lost to manual data handling, eliminating note transcription across a 12-person team is not a trivial efficiency gain.
For a detailed look at how interview scheduling automation — the stage immediately preceding references — compounds these time savings, see the automated interview reminder workflow case study.
Lessons Learned: What We Would Do Differently
Transparency is a condition of this case study format. Three decisions generated friction that a cleaner build would have avoided.
1. Custom Field Naming Was Agreed Too Late
The survey question set was finalized before Keap’s custom field schema was locked, which created a naming mismatch that required manual remapping during testing. In future builds, Keap field names and survey question identifiers are agreed simultaneously — not sequentially.
2. Mobile Survey Compatibility Was Tested Last
Session interruption on mobile during form submission was not in the initial test protocol. It surfaced during live testing when a referee reported being unable to complete the survey on a phone. The fix was straightforward — enabling resume-on-session-restore in the survey platform — but it delayed go-live by two days. Mobile testing is now first, not last.
3. Recruiter Communication About the New Process Was Underestimated
The technical workflow was clean on launch day. Recruiter adoption had friction. Three of the twelve recruiters continued manually emailing candidates for referee contacts out of habit, creating duplicate intake submissions that populated Keap fields incorrectly. A fifteen-minute team walkthrough before launch would have prevented it. Process change management is part of the build — not a post-launch add-on.
What This Means for Your Reference Check Process
If your team is still running reference checks by phone, the problem isn’t that you haven’t found the right vendor or the right survey tool. The problem is that you haven’t mapped the handoffs and built the routing logic to make those handoffs automatic. The call itself is not where recruiter time goes — the scheduling, follow-up, and transcription around it are.
A Make.com™ and Keap integration routes those handoffs deterministically, produces structured data that phone calls never will, and returns recruiter capacity to the conversations that actually require human judgment. The workflow documented here isn’t proprietary — the architecture is repeatable for any recruiting firm with Keap as its CRM and Make.com™ as its automation layer.
To understand how this workflow sits inside a complete pipeline — covering candidate intake, interview scheduling, reference checks, and offer advancement — review the full recruiting automation sequence in the parent pillar. For teams ready to identify which processes to automate first across their entire operation, an OpsMap™ audit is the right starting point — the same engagement that surfaced reference checks as one of TalentEdge’s nine highest-ROI opportunities.
See also: seven essential Keap and Make.com™ recruiting integrations and how to measure Keap and Make.com™ automation ROI once your workflows are live.




