How to Automate HR Service Delivery: Make.com™ and AI Ticketing
HR service desks fail for one reason: they try to use AI to compensate for missing structure. The fix is the opposite — build deterministic automation first, then layer AI exactly where rules run out. This guide walks you through every step of that build on Make.com™, from intake webhook to escalation logic, so your team spends time on judgment work instead of leave-balance lookups. It is one focused application of the broader principle covered in our parent guide on smart AI workflows for HR and recruiting with Make.com™.
Before You Start
Complete these prerequisites before opening Make.com™. Skipping any one of them is the most common reason builds stall mid-project.
- Tools required: Make.com™ account (any paid plan), HRIS with an active API or Zapier-style webhook endpoint, a communication channel your employees already use (Slack, Microsoft Teams, or email), and an LLM API key (OpenAI GPT-4o or equivalent).
- Time estimate: One to two days for a single-intent proof of concept; three to five days for a five-category production scenario with escalation. Plan for one additional day of testing before go-live.
- Data-privacy check: Identify which HRIS fields the scenario will fetch. Exclude SSNs, compensation details, and protected-class data before any field touches an external API. Review our full guide on securing Make.com™ AI HR workflows for data and compliance before wiring any LLM step.
- Scope the intent list: Before building, write down every inquiry category you want the scenario to handle. Start with no more than five. Leave balance, pay-stub retrieval, benefits-enrollment deadline, policy lookup, and onboarding status are the highest-volume, lowest-complexity starting points for most HR teams.
- Risk acknowledgment: An incorrectly routed ticket that goes unanswered creates a worse employee experience than no automation at all. Build the escalation path (Step 6) before you go live, not after.
Step 1 — Create a Single Intake Webhook in Make.com™
All tickets must enter through one door. Create a Custom Webhook module in Make.com™ and copy the generated URL. This endpoint is what your intake channel — a form, chatbot, or communication-tool slash command — will POST to every time an employee submits an inquiry.
Configure the webhook to capture at minimum: employee_id, message_text, channel (where the response should go), and timestamp. If your intake form collects department and role, include those fields — the AI classification step performs measurably better with that context.
Based on our testing, a single centralized webhook is far easier to maintain than per-channel triggers. When you later add a Teams bot alongside a web form, both POST to the same endpoint and the same scenario handles both — no duplicate logic.
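As a sketch, here is the structured payload an intake handler might POST to that central webhook. The function name, the optional fields, and the use of Python for the handler are illustrative; only the four required field names come from this guide.

```python
import json
from datetime import datetime, timezone

def build_intake_payload(employee_id, message_text, channel,
                         department=None, role=None):
    """Assemble the JSON body an intake handler POSTs to the
    central Make.com webhook."""
    payload = {
        "employee_id": employee_id,
        "message_text": message_text,
        "channel": channel,  # where the response should be sent back
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Optional context fields; classification performs measurably
    # better when they are present.
    if department:
        payload["department"] = department
    if role:
        payload["role"] = role
    return json.dumps(payload)
```

A Slack slash-command handler, a Teams bot, and a web form can all call the same builder, which is what keeps the single-endpoint design maintainable.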
In Practice: If your organization uses a chat tool like Slack or Teams, configure a slash command (e.g., /hr-help) that posts the employee’s message and user ID to this webhook automatically. Employees get a familiar experience; your scenario gets clean, structured data.
Step 2 — Fetch Employee Context from Your HRIS
Immediately after the webhook trigger, add an HTTP module (or native HRIS connector) to pull the employee’s current record. This happens before any AI step — the LLM needs context to classify accurately, and your response templates need real data to be useful.
Fields to fetch at this stage:
- Full name and preferred name
- Department, manager, and work location
- Employment type (full-time, part-time, contractor)
- Leave balances (PTO, sick, FMLA if applicable)
- Benefits-enrollment status and next open-enrollment date
Map each returned field to a Make.com™ variable. Do not pass compensation figures or protected-class fields downstream. Set a timeout on the HTTP call (5 seconds is sufficient) and add an error handler that routes to escalation if the HRIS call fails — you never want a silent failure that drops a ticket.
Fetching HRIS data before classification is the step most teams skip. They send the raw employee message to the LLM and ask it to both classify and answer. That approach produces hallucinated answers for any query that requires live data. Structure first: fetch the data, then classify.
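The fetch-then-sanitize step can be sketched as follows. This is a minimal illustration, assuming a generic REST-style HRIS endpoint; the field names in the whitelist and the URL pattern are hypothetical and should be matched to your actual HRIS schema.

```python
import json
import urllib.error
import urllib.request

# Whitelist of HRIS fields safe to pass downstream. Compensation and
# protected-class fields are simply never copied. Names are illustrative.
ALLOWED_FIELDS = {
    "full_name", "preferred_name", "department", "manager",
    "work_location", "employment_type", "pto_balance", "sick_balance",
    "benefits_status", "open_enrollment_date",
}

def sanitize_hris_record(record):
    """Drop every field not explicitly whitelisted."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def fetch_employee_context(hris_base_url, employee_id):
    """Fetch the record with a 5-second timeout. Return None on any
    failure so the caller can route to escalation instead of failing
    silently and dropping the ticket."""
    try:
        url = f"{hris_base_url}/employees/{employee_id}"
        with urllib.request.urlopen(url, timeout=5) as resp:
            record = json.load(resp)
    except (urllib.error.URLError, TimeoutError, ValueError):
        return None
    return sanitize_hris_record(record)
```

The whitelist approach (copy only known-safe fields) is deliberately stricter than a blocklist: a new compensation field added to the HRIS later is excluded by default.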
Step 3 — Classify Intent with an LLM Module
Now — and only now — engage the AI. Add a Make.com™ HTTP module pointed at your LLM API. Pass the employee’s message_text, their department, and their employment_type as context. Do not pass the full HRIS record at this stage.
Your system prompt should instruct the model to return a structured JSON response with two fields:
- intent: one of your predefined categories (e.g., leave_balance, pay_stub, benefits, policy, onboarding, other)
- confidence: a number from 0 to 1
Force JSON output mode if your LLM supports it. Parse the response in a Make.com™ JSON Parse module immediately after the HTTP call. Any response that does not parse cleanly, returns an intent of other, or has a confidence below 0.75 routes directly to the escalation branch in Step 6.
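The parse-and-validate gate is equivalent to this small function, shown here as a minimal sketch of the logic the JSON Parse module plus filters implement:

```python
import json

VALID_INTENTS = {"leave_balance", "pay_stub", "benefits",
                 "policy", "onboarding", "other"}
CONFIDENCE_THRESHOLD = 0.75

def route_classification(raw_llm_response):
    """Return the branch to take: a concrete intent, or 'escalate'
    for anything that fails to parse cleanly, returns 'other', or
    falls below the confidence threshold."""
    try:
        parsed = json.loads(raw_llm_response)
        intent = parsed["intent"]
        confidence = float(parsed["confidence"])
    except (ValueError, KeyError, TypeError):
        return "escalate"
    if intent not in VALID_INTENTS or intent == "other":
        return "escalate"
    if confidence < CONFIDENCE_THRESHOLD:
        return "escalate"
    return intent
```

Note that escalation is the default for every failure mode; only a clean parse with a known intent and sufficient confidence reaches an automated branch.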
For deeper guidance on prompt design and model selection for HR contexts, see our guide on how to customize AI models for HR without coding.
Step 4 — Route by Intent Using a Router Module
Add a Router module immediately after the JSON Parse step. Create one branch per intent category. Each branch has a Filter condition that checks: intent = "leave_balance" (or whichever category), AND confidence >= 0.75.
Inside each branch, build the resolution logic specific to that intent:
- leave_balance: Format a response using the HRIS leave-balance variables fetched in Step 2. No additional API call needed.
- pay_stub: Generate a direct link to the employee’s pay-stub portal using their employee_id as a URL parameter.
- benefits: Pull the next open-enrollment date from the HRIS variable and format a response with the HR benefits portal URL.
- policy: Use a Make.com™ Data Store lookup or a Google Drive/SharePoint search module to retrieve the relevant policy document URL by keyword.
- onboarding: Fetch onboarding task completion status from your HRIS or onboarding tool and return a checklist summary.
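Conceptually, the Router module is a dispatch table: one handler per intent, with anything unhandled falling through to escalation. A sketch, with two branches filled in (the portal URL pattern is hypothetical):

```python
def resolve_leave_balance(ctx):
    # Uses the HRIS variables fetched in Step 2; no extra API call.
    return (f"You have {ctx['pto_balance']} PTO days and "
            f"{ctx['sick_balance']} sick days remaining.")

def resolve_pay_stub(ctx):
    # Hypothetical portal URL pattern; substitute your provider's.
    return f"View your pay stubs: https://paystubs.example.com/{ctx['employee_id']}"

RESOLVERS = {
    "leave_balance": resolve_leave_balance,
    "pay_stub": resolve_pay_stub,
    # benefits, policy, and onboarding branches follow the same shape
}

def resolve(intent, ctx):
    """Mirror of the Router module: one handler per branch, and
    None for anything unhandled, which then escalates."""
    handler = RESOLVERS.get(intent)
    return handler(ctx) if handler else None
```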
This routing layer is the deterministic spine of the entire system. Review the essential Make.com™ modules for HR AI automation for a full breakdown of which native connectors handle each category most reliably.
Step 5 — Send the Automated Response
At the end of each resolution branch, add a Send Message module targeted at the employee’s original channel variable. This ensures the response goes back to wherever the employee asked — Slack, Teams, or email — without requiring them to check a separate portal.
Format matters. Structure automated responses with:
- A one-sentence direct answer (the data the employee actually asked for)
- One supporting sentence of context (e.g., policy reference or next step)
- A single link to the relevant portal or document
- A one-line footer: “Not what you needed? Reply ‘talk to HR’ to reach a specialist.”
That footer is not optional. It is your safety valve — and it feeds your escalation trigger in Step 6 without requiring employees to navigate a separate escalation form. For more on structuring employee-facing automated messages, see our guide on automating HR communications across the employee lifecycle.
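The four-part structure above can be enforced with a single formatting helper, sketched here (the helper name is illustrative):

```python
FOOTER = "Not what you needed? Reply 'talk to HR' to reach a specialist."

def format_response(direct_answer, context_sentence, link):
    """Assemble the four-part response: direct answer, one sentence
    of context, a single link, and the escalation footer."""
    return "\n".join([direct_answer, context_sentence, link, FOOTER])
```

Because every branch calls the same helper, the escalation footer can never be accidentally omitted from a new intent category.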
Step 6 — Build the Escalation Path
The escalation branch handles three conditions: low-confidence classification (confidence < 0.75), intent of other, or an employee reply containing “talk to HR.” Each of these routes to the same escalation flow.
The escalation flow does four things:
- Creates a ticket in your HR platform (Jira Service Management, Freshdesk, or equivalent) via a Make.com™ HTTP module, tagged with the original message_text, employee_id, and the AI’s best-guess intent for context.
- Assigns the ticket to the correct HR specialist based on the intent guess or department — use a Make.com™ Data Store lookup table that maps department to HR owner.
- Notifies the specialist via Slack or Teams with the ticket details and a direct link to the ticket.
- Acknowledges the employee immediately: “Your question has been assigned to [HR Specialist Name] and you’ll hear back within [SLA].” Never leave an employee in silence after an escalation.
The escalation path is the most important branch in the scenario. Build it first, test it independently, and verify it catches every failure mode before enabling the automated resolution branches.
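The ticket-creation and acknowledgment steps can be sketched as one function. The department-to-owner mapping stands in for the Data Store lookup table; the owner names and the one-business-day SLA are placeholders.

```python
# Stand-in for the Make.com Data Store lookup table; names are illustrative.
DEPT_TO_OWNER = {
    "Sales": "jordan.lee",
    "Engineering": "sam.okafor",
}
DEFAULT_OWNER = "hr-general-queue"

def build_escalation(ticket):
    """Assemble the escalation ticket payload and the immediate
    employee acknowledgment. The SLA wording is a placeholder."""
    assignee = DEPT_TO_OWNER.get(ticket.get("department"), DEFAULT_OWNER)
    payload = {
        "summary": ticket["message_text"][:80],
        "employee_id": ticket["employee_id"],
        "guessed_intent": ticket.get("intent", "other"),
        "assignee": assignee,
    }
    ack = (f"Your question has been assigned to {assignee} "
           "and you'll hear back within one business day.")
    return payload, ack
```

The default-owner fallback matters: a department missing from the lookup table should land in a monitored general queue, never nowhere.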
Step 7 — Add a Satisfaction Micro-Survey
Fifteen minutes after a ticket is marked resolved (set a Make.com™ Sleep module or use a scheduled trigger), send a single-question follow-up: “Did you get what you needed? Yes / No.” Route “No” responses back into the escalation flow automatically.
Store every response in a Make.com™ Data Store with the ticket’s intent category. This gives you a weekly dataset showing which intent categories have the lowest satisfaction scores — exactly where your classification prompts or resolution templates need tuning. Without this loop, you have no signal distinguishing resolved tickets from silently abandoned ones.
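The weekly tuning signal is a simple per-intent aggregation over those stored rows, sketched here assuming each Data Store record reduces to an (intent, answered_yes) pair:

```python
from collections import defaultdict

def satisfaction_by_intent(survey_rows):
    """survey_rows: (intent, answered_yes) pairs from the Data Store.
    Returns per-intent satisfaction rate, the weekly tuning signal."""
    yes, total = defaultdict(int), defaultdict(int)
    for intent, answered_yes in survey_rows:
        total[intent] += 1
        yes[intent] += int(answered_yes)
    return {intent: yes[intent] / total[intent] for intent in total}
```

The lowest-scoring intent each week is where to spend prompt-tuning or template-rewriting effort first.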
Asana’s Anatomy of Work research found that employees lose significant time each week to status-checking and follow-up on unresolved requests. A micro-survey loop eliminates that friction by closing the feedback gap automatically.
How to Know It Worked
Measure these three metrics weekly for the first 30 days:
- First-contact resolution rate (FCRR): Percentage of tickets closed by the automated response without escalation. Target: above 70% by day 30. If you’re below 60%, review classification accuracy — the intent categories or confidence threshold likely need adjustment.
- Average response time: Time from webhook receipt to employee notification. Automated responses should deliver in under 60 seconds. If they’re slower, profile the HRIS API call — it’s almost always the bottleneck.
- Satisfaction rate: Percentage of micro-survey responses that are “Yes.” Target: above 80%. An FCRR of 75% with a 60% satisfaction rate means the automated answers are technically resolving tickets but not actually helping employees — rewrite the response templates.
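The three metrics can be computed from the same ticket log. A minimal sketch, assuming each ticket record carries an escalation flag, a response time, and the survey answer (None when the employee never replied):

```python
def weekly_metrics(tickets):
    """tickets: dicts with 'escalated' (bool), 'response_seconds'
    (float), and 'survey_yes' (True/False, or None if no reply)."""
    n = len(tickets)
    fcrr = sum(not t["escalated"] for t in tickets) / n
    avg_response = sum(t["response_seconds"] for t in tickets) / n
    # Satisfaction is measured only over tickets with a survey reply.
    surveyed = [t for t in tickets if t["survey_yes"] is not None]
    satisfaction = (sum(t["survey_yes"] for t in surveyed) / len(surveyed)
                    if surveyed else None)
    return {"fcrr": fcrr,
            "avg_response_seconds": avg_response,
            "satisfaction": satisfaction}
```

Note that satisfaction is computed only over tickets that received a survey reply; a low reply rate is itself a signal worth watching.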
If all three metrics are on target after 30 days, the scenario is production-stable. At that point, replicate the scenario template for the next five intent categories. The pattern is identical — only the resolution branch content changes.
Common Mistakes and How to Fix Them
- Skipping the HRIS fetch before classification. The LLM cannot return accurate intent if it cannot tell a full-time employee from a contractor. Always fetch employment context first.
- Setting the confidence threshold too low. A threshold of 0.5 means the AI is essentially guessing. Start at 0.75 and lower it only if your escalation volume is unmanageable after reviewing classification logs.
- No error handling on the HRIS call. If the HRIS API times out and there’s no error path, the scenario stops silently and the employee gets no response. Every HTTP module needs an error-route branch pointing to escalation.
- Sending the automated response to the wrong channel. If your intake collects channel as a variable but the Send module has it hardcoded to email, employees who asked via Slack see nothing. Use the variable, not a hardcoded value.
- Launching without the satisfaction survey. This is the single most common oversight. Without it, you cannot distinguish resolved from abandoned tickets. Add it before go-live.
What Comes Next
Once your HR service-delivery scenario is stable, the same orchestration pattern extends across every employee-facing workflow. The AI-powered HR onboarding workflows with Make.com™ guide applies this exact structure to new-hire task routing. The ROI framework for Make.com™ AI in HR gives you the financial model to present results to leadership.
McKinsey Global Institute research estimates that roughly 50% of current work activities could be automated with existing technology — HR service delivery is one of the highest-density opportunity areas. Parseur’s Manual Data Entry Report puts the fully loaded cost of manual administrative work at approximately $28,500 per employee per year. Every routine ticket resolved automatically is a direct reduction against that number.
The build described in this guide is the deterministic foundation every AI-enhanced HR function needs. Structure before intelligence. Routing before reasoning. Verification before scale. That sequence is the only one that produces durable results — and it is the same sequence we apply in every OpsMap™ and OpsSprint™ engagement where HR service delivery is on the table.