Build an Automated Candidate Feedback Loop Using Make.com

Published: August 12, 2025


Case Snapshot

Who: Sarah, HR Director at a regional healthcare organization
Problem: Manual candidate feedback process took 3–5 days and was skipped entirely for rejected candidates 40% of the time
Constraints: No ATS upgrade budget; existing tools limited to Google Workspace and email
Approach: Make.com™ scenario triggered by Google Forms submission, routed by decision outcome, with Google Sheets audit log
Outcome: Feedback cycle cut from 3–5 days to under 4 hours; 6 hours/week reclaimed; 100% of candidates received a response
Build Time: One business day, no code required

This case study is one component of the broader playbook detailed in Recruiting Automation with Make.com™: 10 Campaigns for Strategic Talent Acquisition. If you’re evaluating where to start with recruiting automation, candidate feedback is the workflow that delivers the fastest visible ROI—because its failure is visible to the people you’re trying to hire.


Context and Baseline: What Was Breaking

Sarah’s team was operating a manual feedback process that most HR teams would recognize. After each interview panel, the hiring manager filled out a paper or Word-document evaluation, emailed it to HR, and Sarah’s team was supposed to consolidate that input and send a response to the candidate. In practice, that chain had three failure points.

First, hiring managers submitted evaluations late—often two to four days after the interview. Second, the consolidation step added another half-day of HR time. Third, for candidates who were rejected, the feedback email frequently got deprioritized in favor of advancing the candidates who were moving forward. SHRM research consistently identifies candidate communication as one of the top drivers of employer brand perception, and Sarah’s team was failing at it systematically—not through negligence, but through process design.

The operational cost was measurable. Sarah estimated she and her team spent approximately 12 hours per week across the full recruiting cycle on manual coordination tasks, with feedback management accounting for roughly 6 of those hours. According to Parseur’s Manual Data Entry Report, organizations lose an average of $28,500 per employee per year to manual data handling inefficiencies—a figure that scales directly with team size and process complexity.

The harder cost was invisible: candidates who didn’t hear back left reviews on employer platforms, told their networks, and in at least two documented instances, turned down future offers from the organization citing poor communication during a prior application. In a regional healthcare market with limited candidate supply, that reputational drag was an active recruiting liability.


Approach: Design Before You Build

The core insight driving the solution was simple: the feedback failure was not a communication problem. It was a process sequencing problem. Feedback was delayed because it depended on a human initiating action at every step—evaluation submission, consolidation, and email send. Remove the human initiation requirement at each of those steps, and the delay collapses.

The chosen architecture used three components:

  • Google Forms as the structured evaluation input tool (already in Sarah’s tech stack)
  • Make.com™ as the automation layer connecting form submission to email send
  • Google Sheets as the audit log and reporting layer

The deliberate choice to build around existing tools—rather than purchasing a new ATS add-on—kept the project within budget and reduced adoption friction. Hiring managers already used Google Forms for other HR processes, so the learning curve was zero.

Before any automation was built, the team redesigned the evaluation form. This is where most teams underinvest. The form needed to capture: candidate name, candidate email, role applied for, interview date, structured feedback by competency, and a constrained Decision field with three values: Advance, Hold, and Reject. That Decision field is the variable the router in Make.com™ reads. If it’s a free-text field, the router can’t filter reliably. Constrained dropdowns are non-negotiable.
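Make.com™ is a no-code tool, but the form-design principle is easiest to see in code. The sketch below models the evaluation form as a typed record; every field name is illustrative, not taken from Sarah's actual form. The point is the enum: a constrained dropdown guarantees the Decision value always parses, where free text would not.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Decision(Enum):
    # The three constrained dropdown values the router filters on
    ADVANCE = "Advance"
    HOLD = "Hold"
    REJECT = "Reject"

@dataclass
class Evaluation:
    # Illustrative field names standing in for the Google Forms questions
    candidate_name: str
    candidate_email: str
    role: str
    interview_date: date
    competency_feedback: dict
    decision: Decision

# A dropdown value always parses cleanly; free text like
# "probably reject?" would raise ValueError here.
parsed = Decision("Reject")
```

This is exactly why constrained dropdowns are non-negotiable: the router downstream is doing the equivalent of this parse on every submission.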


Implementation: The Make.com™ Scenario Step by Step

Step 1 — Form Trigger

The Make.com™ scenario opens with a Google Forms “Watch Responses” module connected to the evaluation form. Each time a hiring manager submits a completed evaluation, the scenario picks it up on its next scheduled check; with the polling interval set to a few minutes, the trigger is effectively near-real-time. This single step eliminates the manual handoff from hiring manager to HR that was responsible for most of the delay.

Step 2 — Google Sheets Log

The second module writes a new row to a Google Sheet. Every field from the form—candidate name, email, role, decision, feedback content, and submission timestamp—gets logged before any email is sent. This sequencing matters: the log captures the record even if a downstream module fails. The sheet becomes a living database of every feedback decision made, queryable for trend analysis.
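The log-before-send sequencing can be sketched in a few lines of Python. Here `audit_log` stands in for the Google Sheets module and `send_email` for the Gmail module; both names are illustrative, not Make.com™ API calls.

```python
def process_submission(submission, audit_log, send_email):
    """Mirror the scenario's module order: log first, then send.

    The audit row is written before any email attempt, so a
    downstream failure never erases the record.
    """
    audit_log.append(submission)   # the record exists before any send
    try:
        send_email(submission)
    except Exception as exc:
        # A failed send still leaves a logged row to recover from
        return ("send_failed", exc)
    return ("sent", None)
```

If you invert the order (send, then log), a failed log write leaves a sent email with no record of it, which is the audit gap this sequencing prevents.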

McKinsey research on data-driven talent decisions consistently shows that organizations with structured feedback data identify hiring pattern problems significantly faster than those relying on anecdote. The Sheets log is what makes that data accessible without a separate reporting tool.

Step 3 — Router with Decision Filters

A Router module reads the Decision field value from the form submission and directs the flow to one of three paths:

  • Advance path: Fires a warm, forward-looking email that confirms the next step, timeline, and point of contact.
  • Hold path: Fires a neutral-tone message that acknowledges the candidate, communicates that the search is ongoing, and sets an expectation for follow-up timing.
  • Reject path: Fires a professional, respectful message that thanks the candidate, closes the loop on this role, and—optionally—invites them to stay connected for future opportunities.

Each path is independent. Editing the Reject template does not affect the Advance path. This modularity makes ongoing refinement low-risk. For additional context on structuring differentiated candidate communications, see the companion satellite on how to automate follow-ups to boost recruiting outcomes.
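The Router module's behavior amounts to a dictionary lookup on the Decision field. The sketch below is a Python analogue, with invented path names; note how an unrecognized value fails loudly, which is the code equivalent of what a constrained dropdown prevents.

```python
EMAIL_PATHS = {
    "Advance": "advance_email",   # warm, forward-looking
    "Hold": "hold_email",         # neutral, sets a follow-up expectation
    "Reject": "reject_email",     # respectful closure
}

def route(decision):
    """Select the email path from the Decision field, like the Router module."""
    try:
        return EMAIL_PATHS[decision]
    except KeyError:
        # A free-text value would slip past every router filter silently;
        # surfacing it as an error is the safer failure mode.
        raise ValueError(f"unmapped decision: {decision!r}")
```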

Step 4 — Email Module per Path

Each router path terminates in an email module (Gmail, in Sarah’s case). The email body uses dynamic fields pulled from the form submission: {{candidate.firstName}}, {{role}}, {{interviewDate}}. The result is a message that reads as individually composed, not templated—even though it was generated and sent without a human touching it.
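Make.com™ substitutes `{{field}}` tokens at send time; the same idea can be sketched with Python's `string.Template`. The template copy below is invented for illustration, not Sarah's actual reject-path wording.

```python
from string import Template

# Invented reject-path copy; $placeholders play the role of {{fields}}
reject_template = Template(
    "Hi $first_name,\n\n"
    "Thank you for interviewing for the $role position on $interview_date. "
    "We won't be moving forward for this role, but we'd welcome staying "
    "in touch about future openings.\n"
)

email_body = reject_template.substitute(
    first_name="Dana", role="ICU Nurse", interview_date="August 4"
)
```

Because every value comes from the structured form submission, the substitution can never produce a half-filled message: a missing field fails the send rather than emailing a candidate a template with blanks in it.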

Asana’s Anatomy of Work research finds that workers spend a significant portion of their week on work about work—status updates, coordination messages, and manual follow-ups—rather than skilled work. Automating the feedback email is the textbook case: it’s a structured, repetitive communication task that automation executes more consistently than any human under cognitive load.

Step 5 — Error Handling and Fallback Alert

The final element most teams skip: an error handler on every module. If any step fails, a Slack or email alert fires to Sarah’s inbox with the failed execution ID and candidate details. This means the automation’s failure mode is a notification to a human—not a silent gap where a candidate receives nothing. For a deeper look at building resilient scenario architecture, see the guide on building robust Make.com™ scenarios for HR excellence.
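The error-handler pattern is a try/except with a notification in the except branch. In this sketch, `alert` stands in for the Slack or email module and the field names are illustrative.

```python
def run_with_fallback(step, submission, alert):
    """Wrap a scenario step so its failure becomes a notification
    to a human, never a silent gap."""
    try:
        return step(submission)
    except Exception as exc:
        alert(
            subject=f"Feedback scenario failed: execution {submission['execution_id']}",
            body=f"Candidate {submission['candidate_email']}: {exc}",
        )
        return None
```

The alert carries the execution ID and candidate details, which is all a human needs for same-day recovery: look up the failed run, see which candidate was affected, and send the message manually.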


Results: Before and After

Feedback delivery time: 3–5 business days → under 4 hours
Candidates receiving a response: ~60% (rejections frequently skipped) → 100%
HR hours on feedback management: ~6 hrs/week → under 30 min/week (exception handling only)
Feedback data available for analysis: none (paper/email, unstructured) → 100% captured in Sheets, queryable
Build investment: one business day

The 6 hours per week Sarah’s team reclaimed mapped directly onto higher-value work: sourcing strategy, panel calibration, and hiring manager coaching. The reclaimed capacity is consistent with Gartner’s research showing that HR teams operating with structured automation redirect freed time toward strategic activities at significantly higher rates than those managing tasks manually.


Lessons Learned

1. The Form Is the Foundation

Every routing and personalization failure traces back to ambiguous form design. Constrained fields—dropdowns, not free text—are mandatory for any field the router reads. Teams that rush the form design spend more time debugging automation logic than teams that spend the extra hour getting field structure right before touching Make.com™.

2. Start with Three Paths, Not One

The temptation is to build a single generic feedback email and route everything to it. Don’t. The advance, hold, and reject paths require fundamentally different tones. A candidate advancing to a final round needs forward momentum in their message. A rejected candidate needs closure and respect. A single template serves neither well, and candidates notice the difference. For more on tailoring candidate touchpoints by stage, see how to automate candidate feedback for better hiring data.

3. Log Before You Send

The Google Sheets module must sit before the email modules in the scenario sequence. If the email send fails, the record still exists. If you log after sending and the Sheets module fails, you have a sent email with no record of it—an audit gap. Sequence matters.

4. Error Handling Is Not Optional

Silent automation failures are worse than manual process failures because they’re invisible. Every scenario needs a fallback that alerts a human when something breaks. The alert doesn’t need to be sophisticated—a single email to Sarah’s inbox with the execution ID is enough to enable same-day recovery.

5. The Data Compounds

Within 60 days of deployment, Sarah’s team was using the Sheets log to identify which roles generated the most “Hold” decisions—a reliable signal that job specs were unclear and attracting mismatched candidates. The automation started as a communication tool and became a process diagnostic instrument. That compounding value wasn’t anticipated at build time, which is an argument for always including the logging module even when you don’t have an immediate use for the data.


What We Would Do Differently

The one gap in Sarah’s initial build: no ATS write-back. The feedback decision and timestamp live in Google Sheets, but not in the candidate record inside the ATS. That creates a reconciliation step—when a recruiter pulls up a candidate’s profile months later, the feedback history isn’t there. A second iteration would add an HTTP or native ATS module to write the decision and feedback date back to the candidate record at the same time the email fires. The build complexity is modest; the operational benefit is significant. For teams connecting Make.com™ to their ATS, the guide on Make.com™ CRM integration for recruiters covers that connection architecture in detail.

A second improvement: the email templates were written by Sarah under time pressure during the build. They’re functional, but they were never tested with candidates. A/B testing subject lines and body copy, even informally (by asking a few recent candidates which message felt more human), would sharpen the templates over time.


How This Connects to the Broader Recruiting Automation Stack

The candidate feedback loop doesn’t operate in isolation. It’s one stage in a pipeline that begins with sourcing and ends with onboarding. The scenario structure built here—form trigger, router, email send, Sheets log—is the same structure used in Make.com™ pre-screening automation and in workflows for automating job offers. Once your team is comfortable with the pattern at the feedback stage, replicating it for adjacent workflows takes hours, not days.

For the full architecture of how these scenarios connect across the talent acquisition lifecycle, the parent pillar on Recruiting Automation with Make.com™ maps all ten campaigns and the integration points between them. If you’re ready to move beyond single-workflow automation and build a connected recruiting stack, the case study on how to personalize the candidate journey with Make.com™ shows what that looks like end to end. For the interview stage specifically, automated interview scheduling with Make.com™ completes the loop between scheduling and feedback delivery.