
How to Automate Personalized Candidate Outreach with Make.com and ChatGPT
Generic recruiting outreach fails because candidates can detect a template in two seconds — and top talent opts out immediately. The fix is not to hire more recruiters to write individual emails. The fix is to build a workflow where Make.com™ handles all data movement and trigger logic while ChatGPT writes a genuinely individualized message from the candidate’s actual background. This how-to guide builds that workflow from scratch. It is the applied implementation layer of our broader approach to smart AI workflows for HR and recruiting with Make.com™ — deterministic automation first, AI at the content-creation step, never reversed.
By the end of this guide you will have a live Make.com™ scenario that reads candidate records, passes structured data to ChatGPT via the OpenAI API, generates a personalized outreach draft, and routes it to your email client or ATS for recruiter review before send. Estimated build time: two to three hours for a first-time builder; under ninety minutes if you have built Make.com™ scenarios before.
Before You Start
Complete every prerequisite before opening Make.com™. Skipping setup steps is the leading cause of abandoned automation builds.
- Make.com™ account: Any paid plan works. Free plans have operation limits that will throttle batch runs above 20 candidates.
- OpenAI API key: Create one at platform.openai.com. You need API access; a ChatGPT subscription at chatgpt.com is a separate consumer product and does not include an API key.
- Candidate data source: This guide uses Google Sheets as the default. If your ATS has a Make.com™ native module or REST API, use that instead and adapt the trigger step.
- Minimum required data fields per candidate: Full name, current or most recent job title, one specific skill or technology, and the role you are recruiting for. More fields produce better output; fewer fields produce generic output.
- Email client or ATS access: Gmail, Outlook, or your ATS API endpoint for receiving the generated draft.
- Time: Two to three hours for initial build and pilot test. Budget an additional hour for prompt refinement after your first batch.
- Risk awareness: AI-generated content can contain factual errors. A human-review gate before send is not optional — it is a structural requirement of this workflow.
Step 1 — Audit and Structure Your Candidate Data Source
The quality of ChatGPT’s output is a direct function of the quality of data you supply. Before building a single module, audit your candidate data.
Open your candidate source — ATS export, Google Sheet, or CRM record — and identify which fields are consistently populated. Create a dedicated sheet or view with only the fields you will use in the prompt. A clean, minimal structure outperforms a bloated export with half-empty columns.
Recommended column structure for your Google Sheet:
- Column A: Candidate full name
- Column B: Current or most recent job title
- Column C: Specific skill, technology, or notable achievement (one sentence maximum)
- Column D: Role you are recruiting for
- Column E: Recruiter highlight (optional — one sentence the recruiter adds manually per candidate; this single field produces the highest-quality personalization output)
- Column F: Status (leave blank; the scenario will write “Sent for Review” here after processing)
Remove any columns containing protected-class information (age, gender, national origin, disability status). These fields must never enter your prompt. For a full compliance framework, review our guide on ethical AI workflows for HR and recruiting.
Once your sheet is clean, add a filter view that shows only rows where Column F is blank. Your scenario will process only unprocessed rows.
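The blank-status filter is simple enough to sanity-check in plain Python before you build it in Make.com™. A minimal sketch, treating sheet rows as a list of dicts (the column names are assumptions matching the layout above):

```python
def unprocessed(rows):
    """Return only rows whose Status cell (Column F) is blank."""
    return [r for r in rows if not r.get("Status", "").strip()]

rows = [
    {"Name": "Ada Lovelace", "Status": ""},
    {"Name": "Alan Turing", "Status": "Sent for Review"},
]
queue = unprocessed(rows)  # only the unprocessed candidate remains
```

This is exactly the rule the filter view encodes: any non-blank value in Column F removes the row from the queue.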
Step 2 — Create Your Make.com™ Scenario and Connect the Data Source
Open Make.com™ and create a new scenario. Name it something specific: “Candidate Outreach — [Role Name] — [Date].” Generic scenario names become impossible to manage at scale.
Add the trigger module:
- Click the first module circle and search for “Google Sheets.”
- Select Search Rows (not Watch Rows — Watch Rows fires on new additions only; Search Rows lets you process an existing list on demand).
- Connect your Google account and select your candidate sheet.
- Set the filter to Column F = empty (unprocessed candidates only).
- Set Maximum number of returned rows to 10 for your first test run. Increase after validation.
Add an Iterator module immediately after the Google Sheets module. The Iterator breaks the array of rows into individual items so subsequent modules process one candidate at a time. This is non-negotiable — without it, your OpenAI module receives all rows simultaneously and produces a single blended output.
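The Iterator's job is easiest to see side by side in code. This illustrative Python sketch (candidate names are made up) contrasts the two payload shapes:

```python
candidates = [
    {"name": "Ada Lovelace", "title": "Analyst"},
    {"name": "Alan Turing", "title": "Researcher"},
]

# Without an Iterator: one bundled prompt, so the model writes
# a single blended reply that addresses everyone at once.
blended_prompt = "Write outreach for: " + "; ".join(c["name"] for c in candidates)

# With the Iterator: one prompt per row, one individualized reply each.
per_candidate_prompts = [
    f"Write outreach for {c['name']}, currently a {c['title']}."
    for c in candidates
]
```

One call per candidate costs more operations but is the only shape that produces one personalized email per person.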
Step 3 — Add the OpenAI API Module and Write Your Master Prompt
This is the highest-leverage step. A precisely engineered prompt produces consistent, on-brand outreach. A lazy prompt produces output indistinguishable from a template.
Add the OpenAI module:
- Click the next module circle and search for “OpenAI.”
- Select Create a Chat Completion (GPT-4o or GPT-4 Turbo recommended). Avoid the older Create a Completion option, which targets legacy non-chat models.
- Connect your OpenAI API key.
- Set temperature to 0.75 — high enough for natural variation, low enough for consistency.
System prompt (paste this and customize the bracketed fields):
You are a senior recruiter at [Company Name]. Your tone is professional, warm, and direct. You write concise outreach emails of 100–130 words. Never use hollow phrases like “I hope this message finds you well,” “I came across your profile,” or “exciting opportunity.” Lead with a specific observation about the candidate. End with one clear, low-friction call to action. Do not use bullet points. Write in plain prose.
User prompt (map Make.com™ variables into the curly-brace fields):
Write a personalized recruiter outreach email to {{Candidate Name}}, who is currently a {{Current Job Title}}. A notable aspect of their background is: {{Skill or Achievement}}. We are recruiting for a {{Role Name}} position. {{If Recruiter Highlight is not empty: Additional context from the recruiter: {{Recruiter Highlight}}.}} Generate only the email body — no subject line, no salutation, no sign-off.
Map each curly-brace placeholder to the corresponding column variable from your Iterator module output; Make.com™'s variable picker makes this a drag-and-drop action. Note that the conditional wrapped around Recruiter Highlight is pseudocode, not Make.com™ syntax: build it with the inline if() function in the text field so the extra sentence is dropped whenever the column is empty.
Set Max Tokens to 250 to cap runaway output. This is a hard cutoff, not a soft target: if the model runs past it, the draft is truncated mid-sentence, so keep the word-count instruction in the system prompt as the primary length control and treat the token limit as a safety net.
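If you want to prototype Step 3 outside Make.com™ first, the same assembly logic looks like this in Python. The field names, company name, and sample candidate are illustrative assumptions, and the commented-out OpenAI client call sketches the request shape rather than a production implementation:

```python
SYSTEM_PROMPT = (
    "You are a senior recruiter at Acme Corp. Your tone is professional, "
    "warm, and direct. You write concise outreach emails of 100-130 words."
)

def build_user_prompt(c):
    """Assemble the Step 3 user prompt from structured candidate fields."""
    prompt = (
        f"Write a personalized recruiter outreach email to {c['name']}, "
        f"who is currently a {c['title']}. A notable aspect of their "
        f"background is: {c['skill']}. We are recruiting for a "
        f"{c['role']} position."
    )
    if c.get("highlight"):  # optional Column E: include only when populated
        prompt += f" Additional context from the recruiter: {c['highlight']}."
    return prompt + (" Generate only the email body: no subject line, "
                     "no salutation, no sign-off.")

candidate = {
    "name": "Ada Lovelace",
    "title": "Staff Analyst",
    "skill": "shipped a production forecasting model",
    "role": "Senior Data Engineer",
    "highlight": "",
}
user_prompt = build_user_prompt(candidate)

# The real call (assumes `pip install openai` and OPENAI_API_KEY set):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     temperature=0.75,
#     max_tokens=250,
#     messages=[
#         {"role": "system", "content": SYSTEM_PROMPT},
#         {"role": "user", "content": user_prompt},
#     ],
# )
# draft = resp.choices[0].message.content
```

The conditional on the highlight field mirrors the behavior you build with Make.com™'s inline if() function: the sentence simply never appears for candidates without one.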
Step 4 — Map the Generated Message to Your Email or ATS Module
After the OpenAI module, add your delivery module. Choose based on your team’s workflow:
Option A — Gmail Draft (recommended for initial deployment):
- Add a Gmail module: Create a Draft.
- Map To to the recruiter’s email address (not the candidate’s — drafts go to the recruiter for review).
- Map Subject to a template with the review flag hard-coded: “[REVIEW BEFORE SEND] Outreach Draft — {{Candidate Name}} — {{Role Name}}.”
- Map Body to the OpenAI module’s output text variable.
Option B — ATS note or candidate record: Use your ATS’s native Make.com™ module or an HTTP module pointed at your ATS REST API. Map the OpenAI output to the note or draft message field in the candidate record. This keeps the draft inside your existing recruiting workflow.
After the delivery module, add a Google Sheets Update a Row module that writes “Sent for Review” to Column F for the processed candidate. This prevents the same candidate from being processed twice on the next scenario run.
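This write-back is what makes the scenario safe to re-run. A minimal in-memory sketch of the idempotency rule (names and the make_draft stand-in are illustrative):

```python
def process_batch(rows, make_draft):
    """Draft outreach for unprocessed rows, then mark each as handled."""
    for row in rows:
        if row.get("Status"):            # already handled on an earlier run
            continue
        make_draft(row)                  # stand-in for the Gmail/ATS module
        row["Status"] = "Sent for Review"    # the Column F write-back

rows = [
    {"Name": "Ada Lovelace", "Status": ""},
    {"Name": "Alan Turing", "Status": "Sent for Review"},
]
drafts = []
process_batch(rows, drafts.append)  # drafts the one unprocessed candidate
process_batch(rows, drafts.append)  # second run drafts nobody: no duplicates
```

Without the status write, every scenario run would re-draft the same candidates, and a retry after a partial failure could double-contact people.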
Step 5 — Add a Human-Review Gate Before Send
Do not skip this step. Based on our experience, automated outreach without a review gate consistently produces at least one critical error per 20–30 messages during initial deployment — wrong company name, misattributed skill, or hallucinated credential. One of those errors reaching a candidate destroys the relationship before it starts.
The Gmail draft model in Step 4 is itself a review gate — the recruiter sees and sends each draft manually. If you are using an ATS delivery model, build an explicit approval step:
- After the ATS update module, add an explicit approval step. Make.com™ has no single built-in approval module, so the common pattern is to route a Slack notification to the recruiter’s channel with the draft text and approve/reject buttons wired back to a Make.com™ webhook.
- Only on approval does the scenario trigger the actual send action.
This gate stays active until you have reviewed 50 consecutive outputs with zero critical errors. Then move to a 10% spot-check model — not zero review.
Step 6 — Test, Spot-Check, and Scale
Run your first batch with the Google Sheets trigger limited to 10 rows. Review every output before approving any send. Score each message on three criteria: (1) factual accuracy — does it correctly reference the candidate’s data? (2) tone — does it match your system prompt instructions? (3) specificity — does it read as written for this individual, or does it feel like a template with a name inserted?
After reviewing 10 messages, refine your prompt based on the lowest-scoring criterion. Common fixes:
- Output too generic: Add more specific data to Column C or require the recruiter to complete the Recruiter Highlight field.
- Tone inconsistent: Tighten the system prompt with one or two example sentences that demonstrate the exact voice you want.
- Messages too long: Lower Max Tokens to 200 and add a word count constraint to the system prompt.
- Opening lines repetitive: Rotate the user prompt instruction between “lead with the candidate’s skill” and “lead with what makes this role relevant to their background.”
Once quality is consistent, increase batch size to 50, then 100. Track recruiter response rates on outreach sent through this workflow versus your previous template baseline. This metric is your proof of concept.
How to Know It Worked
Your workflow is functioning correctly when all of the following are true:
- The scenario runs without errors and Column F updates to “Sent for Review” for every processed row.
- Each draft email references data specific to that candidate — not generic claims that could apply to anyone.
- Recruiters report the drafts require minimal editing before send — target under 60 seconds of edits per message.
- Candidate reply rates on outreach sent through this workflow are measurably higher than your template baseline over a 30-day sample.
- No candidate has received an email containing a factual error about their background.
McKinsey research consistently finds that personalization at scale — delivering the right message to the right person at the right time — is among the highest-ROI applications of AI in business operations. This workflow is the recruiting-specific implementation of that principle.
Common Mistakes and Troubleshooting
Mistake: Using Watch Rows instead of Search Rows as the trigger. Watch Rows only fires when new rows are added. If you are processing an existing list, the scenario never triggers. Use Search Rows with a blank-status filter.
Mistake: Skipping the Iterator module. Without an Iterator, the OpenAI module receives all candidate rows as one bundle and produces a single blended message. Every message after the first is either ignored or garbled. The Iterator is mandatory.
Mistake: Passing raw resume text to the OpenAI module. Unstructured resume text produces inconsistent output and inflates token usage. Parse resume data into structured fields first. Our guide on AI resume analysis with Make.com automation covers the parsing step in detail.
Mistake: Setting temperature to 0. A temperature of 0 produces deterministic, near-identical output for similar inputs. Your “personalized” messages will all have the same structure and similar phrasing. Set temperature between 0.65 and 0.85.
Mistake: Removing the human-review gate too early. The scenario runs faster without it. The quality risk is not worth the time saved until you have a proven prompt with a clean error history.
API error: 429 (rate limit): Add a Make.com™ Sleep module set to 2–3 seconds between the Iterator and the OpenAI module. This paces API calls and prevents rate limit errors on large batch runs.
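If you ever prototype this flow in code instead of Make.com™, the equivalent of the Sleep-module pacing is retry with exponential backoff. A minimal sketch, with a placeholder RateLimitError standing in for the 429 your HTTP or OpenAI client actually raises:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429 error your HTTP or OpenAI client raises."""

def call_with_backoff(call_api, max_retries=5, base_delay=2.0):
    """Retry call_api with exponential backoff plus jitter on rate limits."""
    for attempt in range(max_retries):
        try:
            return call_api()
        except RateLimitError:
            # 2 s, 4 s, 8 s ... plus a little jitter so parallel runs desync
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    raise RuntimeError("still rate limited after retries")
```

Backoff only waits when the API actually pushes back, whereas the fixed Sleep module slows every call; for large batches the fixed delay is simpler and usually sufficient.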
Next Steps: Connect Outreach to the Full Recruiting Automation Chain
Personalized outreach is one node in a complete recruiting automation chain. Once this workflow is stable, connect it upstream to your AI candidate screening workflow so only qualified candidates enter the outreach queue automatically. Connect it downstream to automated scheduling so candidates who reply can book an interview without recruiter intervention — a direct path to reducing time-to-hire with AI recruitment automation.
For the business case — including hard ROI numbers on what this class of automation delivers — see our analysis of the ROI of Make.com™ AI workflows for HR. And once outreach is sent, close the loop with a structured AI candidate feedback loop so every touchpoint improves the next one.
The sequence is non-negotiable: deterministic automation handles the data spine, AI fires at the content-creation step, and humans review before it reaches a candidate. Build it in that order and the results follow.