How to Build Intelligent HR Communications with ChatGPT and Make.com

Published On: August 9, 2025

Most HR automation projects stall because teams add AI to a broken process and expect it to fix the underlying chaos. Intelligent HR communications work the opposite way. The core principle of smart AI workflows for HR and recruiting with Make.com holds here exactly: deterministic automation handles routing, filtering, and data retrieval first, and ChatGPT fires only at the moment language generation is actually needed. This guide walks you through that build sequence, step by step.

Done correctly, this approach produces communications that are faster, more accurate, and more personalized than anything a manually managed email queue can deliver — without removing human judgment from the decisions that require it.


Before You Start

Gather these prerequisites before opening Make.com™ for the first time.

  • Make.com™ account with a plan that supports HTTP modules and data stores (Core plan minimum; check current plan features at Make.com).
  • OpenAI API key with access to the GPT-4 or GPT-4o model endpoint.
  • HRIS API credentials or a webhook-enabled export from your HR system of record. Know which fields you need: employee name, role, department, location, start date, manager name.
  • Policy source documents in a queryable format — a Google Doc, Notion page, or SharePoint document that Make.com™ can read via API. Plain text is fine; formatted PDFs require an extraction step.
  • Communication channel credentials: Slack bot token, Microsoft Teams webhook, or SMTP credentials for email delivery, depending on where your employees receive HR communications.
  • Time budget: Allow two to four hours for a single-use-case pilot (policy FAQ). Allow one to two days plus a full testing week for a multi-stage onboarding communication sequence.
  • Risk assessment: Identify which communication types are high-stakes (termination notices, benefits election changes, disciplinary correspondence). These require a human-review gate in every scenario — no exceptions.

Step 1 — Map the Communication Use Case Before Building Anything

Define the exact communication type you are automating before creating a single module. Vague scope produces vague scenarios.

For each use case, document:

  • Trigger: What event starts this communication? (New hire record created in HRIS, inbound Slack message, form submission, calendar date reached.)
  • Audience: Who receives the output? (New employee, current employee, manager, HR team member.)
  • Data needed: Which fields from which systems must be present in the prompt context?
  • Output format: Email body, Slack message, Teams card, or document draft?
  • Stakes level: Low (policy FAQ, onboarding day-one welcome), medium (benefits enrollment reminder, performance review notification), high (disciplinary notice, termination communication).

Start with one low-stakes, high-volume use case. Policy FAQ response — where employees ask routine questions about PTO, benefits, or holiday schedules — is the ideal pilot. It is high frequency, low risk, and easy to measure. McKinsey Global Institute research indicates that knowledge workers spend roughly 20% of their workweek searching for information and answering routine questions; automating this category alone creates meaningful capacity recovery.


Step 2 — Build the Deterministic Spine First

The automation scaffold must work completely and reliably before ChatGPT enters the scenario. Build and test each of these steps independently before connecting them.

2a. Configure the Trigger Module

Set up the trigger that initiates the scenario. For a policy FAQ responder, this is typically a Slack event watcher (message posted in a designated HR channel), a webhook from your ticketing system, or a form-submission trigger. Confirm the trigger fires correctly on test inputs before proceeding. Make.com™ shows you the raw payload in the scenario editor — verify that all expected fields are present.

2b. Add a Filter for Intent Classification

Not every inbound message needs the full AI pipeline. Add a filter module immediately after the trigger to exclude messages that match simple keyword patterns (e.g., “where is the bathroom,” out-of-office replies, bot echoes). This keeps API costs low and scenario run counts accurate. For more complex intent classification, a lightweight preliminary ChatGPT call — asking only “is this a policy question, a personal HR request, or something else?” — can route traffic before the main prompt executes.
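The keyword pre-filter can be sketched as a single function. This is an illustrative assumption, not Make.com's filter syntax: the skip patterns and the bot-author check stand in for whatever conditions your scenario actually filters on.

```python
# Hypothetical pre-filter sketch: drop messages that should never reach the
# AI pipeline (bot echoes, out-of-office replies, trivial chatter).
# SKIP_PATTERNS is an illustrative list, not a complete ruleset.
SKIP_PATTERNS = [
    "out of office",
    "automatic reply",
    "where is the bathroom",
]

def should_process(message: str, author_is_bot: bool) -> bool:
    """Return True only if the message deserves a full pipeline run."""
    if author_is_bot:
        return False  # ignore bot echoes to avoid feedback loops
    text = message.lower()
    return not any(pattern in text for pattern in SKIP_PATTERNS)
```

In Make.com this logic lives in the filter condition between the trigger and the next module; the function form just makes the decision rule explicit and easy to test.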

2c. Fetch Relevant Data from Your HRIS

Use an HTTP module or a native HRIS connector to retrieve the employee’s record. Map only the fields the prompt actually needs. Passing excess PII into a prompt is a compliance and auditability risk — pull name, role, department, and location, then stop. Asana’s Anatomy of Work research documents that context-switching and information retrieval consume significant chunks of the workday; automating this data fetch step removes it entirely from the human workflow.
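The minimal-PII mapping step can be sketched as follows, assuming a generic HRIS JSON record; the field names are placeholders for your HRIS's actual schema.

```python
# Keep only the fields the prompt needs; everything else in the HRIS
# record (salary, SSN, benefits elections, ...) never enters the prompt.
PROMPT_FIELDS = ("name", "role", "department", "location")

def extract_prompt_context(hris_record: dict) -> dict:
    """Map the HRIS record down to the prompt-safe subset of fields."""
    return {field: hris_record.get(field, "") for field in PROMPT_FIELDS}
```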

2d. Retrieve Policy Context

Fetch the relevant policy excerpt from your document repository. If you have a small policy library, a static text module with pre-formatted policy blocks works fine for a pilot. For larger libraries, use a search step to retrieve the most relevant section based on the employee’s question keywords. This retrieved text becomes the grounding context in the ChatGPT prompt — the model answers only from what you provide, not from its training data.
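For a small library, the keyword-based retrieval step can be as simple as word overlap. This is a minimal sketch of the grounding idea, not a production search; a larger library would use a proper search step or embedding store.

```python
# Minimal keyword-overlap retrieval: return the policy section sharing
# the most words with the employee's question. The retrieved text is
# the only policy content the model will ever see.
def retrieve_policy(question: str, policies: dict) -> str:
    q_words = set(question.lower().split())
    best_key = max(
        policies,
        key=lambda k: len(q_words & set(policies[k].lower().split())),
    )
    return policies[best_key]
```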

2e. Test the Spine Without ChatGPT

Run the scenario end-to-end with a dummy output module replacing the ChatGPT step. Confirm that every data field populates correctly for multiple test inputs, including edge cases: a part-time employee, an employee in a different country, a question that has no matching policy document. Fix all data-fetch failures before adding the AI layer. Parseur’s Manual Data Entry Report found that manual data handling errors cost organizations substantially per employee per year — catching structural errors at this stage prevents them from propagating through AI-generated outputs.


Step 3 — Design the Prompt Template

A well-structured prompt is the single most important determinant of output quality. Build one prompt template per communication type.

System Role Block

Define who ChatGPT is in this context:

You are the HR communications assistant for [Company Name]. 
Your tone is professional, warm, and clear. 
You answer employee questions using only the policy information provided. 
If the answer is not in the provided policy text, respond: 
"I don't have that information — please contact your HR business partner directly."
Do not speculate. Do not add information not present in the context below.

Dynamic Context Block

Inject the employee data and policy excerpt retrieved in Step 2:

Employee: {{employee_name}}, {{employee_role}}, {{employee_department}}, {{employee_location}}
Manager: {{manager_name}}

Policy context:
{{policy_excerpt}}

User Message Block

Pass the employee’s original question verbatim:

Employee question: {{inbound_message_text}}

Output Format Instruction

End every prompt with an explicit format directive:

Respond in 3–5 sentences. Use plain language. 
Do not use bullet points. 
End with: "If you have additional questions, reply to this message or contact HR directly."
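Taken together, the blocks above can be sketched as a single message-assembly step. The function below is an illustration of the structure, with Python placeholders standing in for the Make.com variables ({{employee_name}} and so on); the wording is condensed from the templates above.

```python
# Assemble the system role, dynamic context, and user message blocks
# into the chat-format message list the ChatGPT module expects.
def build_messages(company: str, employee: dict,
                   policy_excerpt: str, question: str) -> list:
    system = (
        f"You are the HR communications assistant for {company}. "
        "Your tone is professional, warm, and clear. "
        "You answer employee questions using only the policy information "
        "provided. If the answer is not in the provided policy text, say "
        "you don't have that information. Do not speculate."
    )
    context = (
        f"Employee: {employee['name']}, {employee['role']}, "
        f"{employee['department']}, {employee['location']}\n\n"
        f"Policy context:\n{policy_excerpt}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"{context}\n\nEmployee question: {question}"},
    ]
```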

Test each template against at least ten real or realistic employee questions — including vague, misspelled, and multi-part questions — before deploying. Harvard Business Review research on AI-assisted communication consistently shows that structured prompts with explicit constraints outperform open-ended prompts in consistency and accuracy.


Step 4 — Add the ChatGPT Module and Connect It

With the scaffold validated and the prompt template designed, add the ChatGPT (OpenAI) module to the Make.com™ scenario.

  • Select the Create a Completion action (or Create a Chat Completion for GPT-4 models).
  • Set the model to gpt-4o or gpt-4 for nuanced HR communications. GPT-3.5-turbo is faster and cheaper but produces less contextually sensitive outputs for complex policy interpretation.
  • Map the system role, dynamic context, and user message blocks from your template into the corresponding message fields.
  • Set temperature to 0.3–0.5 for factual policy responses (lower temperature = more consistent, less creative). For onboarding welcome messages where warmth matters, 0.6–0.7 is appropriate.
  • Set a max tokens limit appropriate to your output format — 300 tokens for a short FAQ answer, 600–800 for a full onboarding email.
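To make the parameter choices above concrete, here is a sketch of the request body the module ultimately sends to the Chat Completions endpoint. In Make.com you set these values in the module UI rather than in code; the stakes-to-temperature mapping is an illustrative assumption.

```python
# Sketch of a Chat Completions request body reflecting the settings above:
# low temperature for factual policy answers, higher for warm messages.
def build_completion_request(messages: list, stakes: str = "factual") -> dict:
    temperature = 0.4 if stakes == "factual" else 0.7  # lower = more consistent
    return {
        "model": "gpt-4o",
        "messages": messages,
        "temperature": temperature,
        "max_tokens": 300,  # short FAQ answer; raise to ~800 for a full email
    }
```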

Run a live test with real employee data (use your own record or a test employee). Review the output for accuracy, tone, and format compliance before proceeding.


Step 5 — Build the Human-Review Router

Every scenario that produces communications for high-stakes situations requires a conditional branch that routes the AI-generated draft to an HR staff member before it is sent.

How to Build the Router

  1. After the ChatGPT module, add a Router module in Make.com™.
  2. On Path A (high-stakes), add a filter that checks for sensitivity indicators: specific keywords in the employee’s question (termination, disciplinary, FMLA, accommodation), the employee’s employment status field, or a manual flag set by HR. Route these to a Slack DM or email to the HR business partner with the draft attached for approval.
  3. On Path B (standard), route the output directly to the delivery module (Slack, Teams, or email).
  4. For Path A approvals, build a second scenario triggered by the HR partner’s approval action (a button click in Slack, a form submission, or a status change in your ticketing system) that then sends the approved message.
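The router's decision rule can be sketched as a function. The keyword set and field names are illustrative assumptions; in Make.com this logic lives in the filter conditions on Paths A and B.

```python
# Route a run to Path A (human review) or Path B (direct delivery).
# Any sensitivity signal -- keyword, non-active status, manual flag --
# forces human review.
SENSITIVE_KEYWORDS = {"termination", "disciplinary", "fmla", "accommodation"}

def route(question: str, employment_status: str, manual_flag: bool) -> str:
    if manual_flag or employment_status != "active":
        return "A"  # human review before anything is sent
    text = question.lower()
    if any(keyword in text for keyword in SENSITIVE_KEYWORDS):
        return "A"
    return "B"  # standard path: straight to delivery
```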

SHRM guidance on HR compliance consistently identifies communications errors — incorrect policy citations, inappropriate tone in sensitive situations — as a leading source of employee relations disputes. The human-review router is not bureaucracy; it is the control that makes the automation defensible.

For further guidance on securing this architecture, see how to secure Make.com AI HR workflows for data and compliance.


Step 6 — Configure Logging and the Audit Data Store

Every ChatGPT input and output must be logged. This is non-negotiable for HR communications.

What to Log

  • Timestamp of the scenario run
  • Employee identifier (not full PII — use employee ID, not name, in the log)
  • The original inbound message or trigger event
  • The full prompt sent to ChatGPT (system + context + user message)
  • The full ChatGPT output before any editing
  • Whether the message went through human review (yes/no)
  • The final message delivered and the channel it was sent through

How to Build the Log

Add a Make.com™ Data Store module immediately after the ChatGPT module and after the delivery module. Write all fields listed above to a structured data store record. Export this store to a Google Sheet or your HRIS document repository weekly. Retain logs for a minimum period consistent with your organization’s HR records retention policy — typically three to seven years depending on jurisdiction.
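One audit record matching the field list above can be sketched as follows; the field names are an assumed schema for the data store, with an employee ID, never a name, in the log itself.

```python
# Build one audit record per scenario run, capturing the full
# prompt-response chain required for audit response.
from datetime import datetime, timezone

def build_audit_record(employee_id: str, inbound: str, prompt: str,
                       output: str, reviewed: bool, final: str,
                       channel: str) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "employee_id": employee_id,       # ID only, no PII
        "inbound_message": inbound,       # original trigger content
        "full_prompt": prompt,            # system + context + user message
        "raw_output": output,             # ChatGPT output before any editing
        "human_reviewed": reviewed,       # did it pass through Path A?
        "final_message": final,           # what was actually delivered
        "channel": channel,               # slack / teams / email
    }
```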

Gartner research on AI governance in HR identifies auditability as the top compliance requirement for AI-assisted employee communications. A log that captures the full prompt-response chain is the foundation of any audit response.


Step 7 — Deploy the Delivery Module and Activate

With the ChatGPT module, router, and logging configured, connect the final delivery module.

  • For Slack: Use the Send a Message action, mapping the ChatGPT output to the message text field. Set the channel to the employee’s direct message thread or a designated HR channel.
  • For Microsoft Teams: Use the Teams webhook module or the Send a Message action. Format the output as an Adaptive Card for better readability.
  • For Email: Use the Send an Email module with the ChatGPT output as the body. Set the subject line dynamically based on the communication type — map it from a lookup table or a preliminary ChatGPT call that generates a subject line separately.
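For the Slack path, the message the Send a Message step ultimately posts can be sketched as a payload builder. The channel ID is a placeholder, and the defensive footer append is an assumption layered on top of the prompt's format directive.

```python
# Build the Slack message body, ensuring the standard HR footer from the
# prompt template is present even if the model omitted it.
def build_slack_payload(channel_id: str, ai_output: str) -> dict:
    footer = ("If you have additional questions, reply to this message "
              "or contact HR directly.")
    text = ai_output if ai_output.endswith(footer) else f"{ai_output}\n\n{footer}"
    return {"channel": channel_id, "text": text}
```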

Set the scenario to Active in Make.com™. Schedule a 48-hour monitoring window where you or a team member reviews every output before it is sent. Graduate to full automation only after 48 hours of clean, accurate outputs.


Expanding to Onboarding and Lifecycle Communications

Once the policy FAQ scenario is running cleanly, apply the same scaffold-first methodology to more complex communication sequences. The approach to automate HR onboarding with Make.com and AI follows the identical pattern: deterministic triggers (new hire record created, Day 1 date reached, Day 30 milestone), HRIS data fetch, prompt template with employee-specific context, ChatGPT language generation, human-review gate for edge cases, logging, delivery.

The difference in onboarding sequences is sequencing across time. Use Make.com™’s scheduling and iteration modules to send a series of communications — Day 0 welcome, Day 1 orientation prep, Day 7 check-in, Day 30 milestone acknowledgment — each pulling fresh HRIS data at the moment of send to ensure accuracy. Microsoft Work Trend Index data shows that employees who receive structured, personalized communication in their first 30 days report significantly higher engagement scores than those who receive generic onboarding materials.
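The Day 0/1/7/30 cadence described above reduces to a small date calculation, sketched here assuming a start_date field from the HRIS; in Make.com the equivalent lives in the scheduling settings of each scenario.

```python
# Compute the send date for each onboarding milestone from the hire's
# start date. Offsets follow the Day 0/1/7/30 cadence.
from datetime import date, timedelta

MILESTONES = {"welcome": 0, "orientation_prep": 1,
              "check_in": 7, "milestone_ack": 30}

def schedule_sends(start_date: date) -> dict:
    return {name: start_date + timedelta(days=offset)
            for name, offset in MILESTONES.items()}
```

Note that each send should still pull fresh HRIS data at delivery time; this schedule only determines when each scenario fires.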

For candidate-facing communications, the same architecture applies. See how to build AI candidate screening workflows with Make.com and GPT and automate HR communications across the employee lifecycle for lateral implementations of this scaffold.


How to Know It Worked

Measure these indicators at 30, 60, and 90 days post-deployment:

  • Response time: Average time from employee inquiry to HR response. Baseline this before deployment. A well-built policy FAQ scenario should reduce response time from hours to minutes.
  • HR staff hours on routine communications: Track weekly time spent answering repetitive questions. Asana’s Anatomy of Work data indicates a significant portion of knowledge worker time goes to low-value communication tasks — this metric should drop measurably within the first month.
  • Employee satisfaction with HR responsiveness: Add a one-question pulse to HR communications: “Did this answer your question? Yes / No / Partially.” A well-tuned scenario should achieve above 80% “Yes” within 60 days.
  • Audit log review: Review a random 10% sample of logs weekly for the first month. Look for prompt injections (employees trying to manipulate the AI), hallucinated policy citations, and tone violations. Each finding informs a prompt template update.
  • Error rate: Track the number of AI-generated messages that required correction before or after delivery. This should trend toward zero as prompt templates mature.
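The pulse-survey metric above is a simple ratio; a sketch of the calculation over exported responses, assuming the Yes/No/Partially answer format:

```python
# Percentage of pulse responses answering "Yes", ignoring anything that
# is not one of the three expected answers.
def yes_rate(responses: list) -> float:
    answered = [r for r in responses if r in ("Yes", "No", "Partially")]
    return 100 * answered.count("Yes") / len(answered) if answered else 0.0
```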

Common Mistakes and How to Avoid Them

Mistake 1 — Adding ChatGPT Before the Scaffold Is Stable

If your trigger fires inconsistently or your HRIS data fetch returns empty fields 15% of the time, ChatGPT will generate responses based on incomplete context. Fix the scaffold first. Every data-fetch failure is a miscommunication waiting to happen.

Mistake 2 — Asking ChatGPT to Recall Policy From Memory

ChatGPT’s training data does not include your company’s policies. Prompts that ask the model to “answer questions about our PTO policy” without providing the policy text will produce confident-sounding fabrications. Always inject the source document text into the prompt context. Always.

Mistake 3 — One Generic Prompt Template for All Communication Types

A single prompt cannot optimize for the tone of a Day 1 welcome message and the precision of a benefits election reminder simultaneously. Build one template per use case. Maintain them in a shared document so the team can review and update them as policies change.

Mistake 4 — Skipping the Human-Review Router for “Probably Fine” Messages

The category of communications that seem low-stakes until they aren’t is exactly where organizations face HR compliance exposure. If you are uncertain whether a message type needs review, route it for review. The cost of one HR manager spending two minutes reviewing a draft is far lower than the cost of an erroneous message reaching an employee at a sensitive moment.

Mistake 5 — Building Multiple Scenarios Before Validating One

The fastest path to a working HR communications system is one validated scenario, then a second, then a third — not three simultaneous builds that share unresolved structural problems. The build a custom HR chatbot with Make.com and ChatGPT guide demonstrates this iterative approach in a related context.


Next Steps

A working policy FAQ responder is the proof of concept that unlocks organizational confidence for broader deployment. Once that scenario is running cleanly, the natural expansion path moves through onboarding communication sequences, candidate status update messages, and performance review notification drafts — each following the same scaffold-first methodology documented here.

For a fuller picture of the business case and ROI trajectory of this investment, see the analysis of ROI of Make.com AI workflows in HR. For the strategic architecture that governs where and when AI should fire across the full HR function, return to the parent pillar: smart AI workflows for HR and recruiting with Make.com.

Structure before intelligence. Every time.