How to Set Up HR Webhook Error Notifications in Make.com: A Step-by-Step Guide

Published on: November 25, 2025


A webhook that fails silently is worse than a webhook that never ran. At least a missing automation raises a question. A silent failure creates false confidence — your team believes onboarding is running, offer letters are going out, compliance triggers are firing. None of that is true. And you will not find out until a new hire shows up on Day 1 with no system access, or an auditor asks for a completion log that does not exist.

This guide walks through exactly how to build a real-time error notification system for HR webhook scenarios in Make.com™. Five steps. No custom code. Works for any webhook-driven HR workflow — onboarding, offer management, compliance triggers, time-off requests, or any external HRIS/ATS integration.

For context on why webhook architecture matters before you build the notification layer, start with the webhooks vs. mailhooks infrastructure decisions for HR automation — that parent guide covers the trigger-layer fundamentals this post builds on top of.


Before You Start

Confirm the following before opening your scenario editor:

  • Active Make.com™ scenario with at least one webhook trigger: This guide assumes you already have a working HR webhook scenario. If you are starting from scratch, build the happy-path flow first.
  • Access to at least one notification destination: Slack workspace with a dedicated HR-ops channel, a Gmail or Outlook account, or a Google Sheet with append access. You only need one to start.
  • Edit permissions on the scenario: You need scenario editor access — not just viewer — to add error handler branches.
  • 15–30 minutes: First-time setup. Subsequent scenarios take 5–10 minutes once you have a template.
  • A cloned test version of the scenario: Never test error handling in your live production scenario. Clone it first.

Risk note: Adding an error handler does not change how your happy path executes. It adds a parallel branch that only fires on failure. There is no risk to live data from adding the handler itself — the risk is in testing, which is why you clone first.


Step 1 — Map Every External Touchpoint in Your Scenario

Before you add a single error handler, identify every module in your scenario that communicates with an external system. These are your failure candidates — the places where a network timeout, an invalid API credential, a rate limit, or a malformed payload will cause the scenario to stop.

Walk through your scenario from left to right and flag every module that:

  • Makes an HTTP call to an external API (HRIS, ATS, payroll system, document platform)
  • Reads from or writes to a database outside Make.com™
  • Sends a communication through an external service (email provider, Slack, SMS gateway)
  • Parses or transforms data in a way that depends on a specific field being present in the incoming payload

For each flagged module, note:

  1. What HR action does this module perform? (Write new-hire record, trigger background check, send offer letter)
  2. What is the compliance or business impact if this module fails silently? (High / Medium / Low)
  3. Should a failure here stop the entire scenario, or should it log the error and continue processing remaining records?

This mapping exercise takes 10 minutes and shapes every decision downstream. Modules with High compliance impact get individual error handlers with immediate-alert routing. Modules with Medium or Low impact can share a route-level handler that logs to a triage sheet.

Based on our experience: Most HR scenarios have three to five high-impact external touchpoints. Teams routinely add error handlers only to the first and last module, leaving three unmonitored failure points in the middle of the chain.
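The mapping exercise above can be captured as a simple inventory, which also makes the handler plan in Step 2 mechanical. A minimal Python sketch, assuming invented module names and impact ratings purely for illustration (this is a planning aid, not a Make.com feature):

```python
# Hypothetical inventory of external touchpoints in an HR webhook scenario.
# Module names, actions, and impact ratings are illustrative examples.
TOUCHPOINTS = [
    {"module": "Create Record — HRIS",     "action": "Write new-hire record",     "impact": "High"},
    {"module": "Trigger Background Check", "action": "Initiate background check", "impact": "High"},
    {"module": "Send Offer Letter",        "action": "Email offer letter",        "impact": "High"},
    {"module": "Append Row — Audit Sheet", "action": "Log completion",            "impact": "Low"},
]

def handler_plan(touchpoints):
    """High-impact modules get individual error handlers with immediate alerts;
    everything else shares a route-level handler that logs to a triage sheet."""
    individual = [t["module"] for t in touchpoints if t["impact"] == "High"]
    shared = [t["module"] for t in touchpoints if t["impact"] != "High"]
    return {"individual_handlers": individual, "route_level_handler": shared}

print(handler_plan(TOUCHPOINTS))
```

Keeping the inventory in a shared document (or a sheet) means the Step 4 router filter and the Step 2 handler placement both trace back to one agreed-upon list.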


Step 2 — Add Error Handler Branches to High-Impact Modules

Make.com™ lets you attach an error handler to any individual module or to an entire route. Use individual handlers for high-impact modules; use route-level handlers as a catch-all for everything else.

To add an individual error handler:

  1. In the scenario editor, right-click the target module (e.g., your “Create Record in HRIS” module).
  2. Select Add error handler from the context menu.
  3. A new branch appears, visually separate from the happy path. This branch only activates if the target module throws an error.
  4. The first module you place on this branch will receive the error bundle automatically — no additional configuration needed to access the error data.

To add a route-level handler:

  1. Click the route connector (the line between two modules, or at the start of a route).
  2. Select Add error handler to route.
  3. This single handler catches any unhandled error anywhere along that route.

Error directives — choose one for each handler:

  • Break: Stops the scenario for this bundle and stores it as an incomplete execution for manual or automatic replay. Use for critical failures where you cannot afford data loss.
  • Resume: Logs the error and continues processing the next bundle. Use for batch operations where one bad record should not halt the entire queue.
  • Ignore: Suppresses the error entirely. Only use for truly non-critical steps where failure has zero downstream consequence — which in HR automation is almost never.
  • Retry: Re-attempts the module up to a configurable number of times with a delay between attempts. Use for transient failures like API rate limits or intermittent network timeouts.

For compliance-sensitive HR modules, default to Break and configure Incomplete Executions to On in your scenario settings. This ensures the failed data bundle is preserved and replayable once the root cause is resolved.
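Outside Make.com, these directives map onto familiar error-handling patterns. A minimal Python sketch of what the Retry directive's semantics look like (attempt count and delay are assumptions here, not Make.com defaults; the flaky call is simulated):

```python
import time

def with_retry(operation, max_attempts=3, delay_seconds=2):
    """Re-attempt a flaky call, mirroring the Retry directive: transient
    failures (rate limits, timeouts) get another chance; the final failure
    is re-raised so a Break-style handler can preserve the bundle."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # retries exhausted: escalate, like Break
            time.sleep(delay_seconds)

# Demo: a simulated HRIS write that fails twice, then succeeds.
calls = {"n": 0}
def flaky_hris_write():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("HTTP 429: rate limited")
    return "record created"

print(with_retry(flaky_hris_write, delay_seconds=0))  # -> record created
```

The key property, which Make.com's Retry plus Break combination gives you natively, is that a transient failure is absorbed quietly while a persistent failure still escalates rather than disappearing.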


Step 3 — Capture the Full Error Context with Set Multiple Variables

Knowing that a module failed is not enough. You need to know why it failed and what data was involved. Without the original payload, you cannot reproduce the failure in a test environment — and debugging takes days instead of minutes.

Immediately after the error handler branch begins, add a Tools → Set Multiple Variables module. Map the following fields:

Variable Name | Value to Map | Why It Matters
error_message | {{bundle.error.message}} | Human-readable description of what went wrong
error_code | {{bundle.error.statusCode}} | HTTP status or platform error code for classification
scenario_id | {{bundle.scenarioId}} | Direct link context for the execution log
failed_at_timestamp | {{now}} | Precise failure time for audit trail
input_payload | Key fields from the original webhook payload (e.g., employee ID, candidate email) | Identifies exactly which record failed

For HR workflows, the most important variable is input_payload — specifically the record identifier (employee ID, candidate ID, offer ID) from the original webhook data. When an alert fires, the person triaging it needs to know immediately whose data is affected, not just that something broke.

Pro tip: Add a module_name variable and hardcode the name of the failing module as a plain text string (e.g., “Create Record — Workday”). When your error handler catches failures from multiple modules via a route-level handler, this variable tells you immediately which step failed without opening the execution log.
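Put together, the captured variables form one small, consistent record per failure. A sketch of that record's shape in Python (field names mirror the mapping table above; the error dict, IDs, and payload values are invented for illustration):

```python
from datetime import datetime, timezone

def build_error_context(error, scenario_id, module_name, payload):
    """Assemble the same fields Set Multiple Variables captures.
    'error' is a hypothetical dict with 'message' and 'status_code' keys."""
    return {
        "error_message": error["message"],
        "error_code": error["status_code"],
        "scenario_id": scenario_id,
        "failed_at_timestamp": datetime.now(timezone.utc).isoformat(),
        "module_name": module_name,  # hardcoded per handler, as the pro tip suggests
        # Keep only the record identifiers, so triage knows WHOSE data failed.
        "input_payload": {k: payload.get(k) for k in ("employee_id", "candidate_email")},
    }

ctx = build_error_context(
    {"message": "Field 'start_date' is required", "status_code": 400},
    scenario_id="123456",
    module_name="Create Record — Workday",
    payload={"employee_id": "E-1042", "candidate_email": "jane@example.com", "role": "Analyst"},
)
print(ctx["input_payload"])  # -> {'employee_id': 'E-1042', 'candidate_email': 'jane@example.com'}
```

Whittling input_payload down to identifiers is deliberate: alerts should name the affected record without leaking the full payload into Slack or email.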


Step 4 — Route the Alert Based on Severity

Not all errors deserve the same response speed. Alert fatigue is a real operational risk — if every minor parse error sends an @here message to the HR ops Slack channel at 2am, your team will start ignoring the channel. And then the critical failures disappear into the noise.

Route by severity, not by module count:

After your Set Multiple Variables module, add a Router module with two branches:

  • Branch A — Critical: Filter condition: the failed module is flagged as High compliance impact (you identified these in Step 1). Route to an immediate notification — Slack message with @here, or email to the HR Director and HR IT lead.
  • Branch B — Non-Critical: All other errors. Route to a Google Sheet append row for weekly triage review.
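The router's two branches reduce to a single filter condition. A Python sketch of the routing decision (the high-impact set stands in for the flags from Step 1 and uses illustrative module names):

```python
# Modules flagged High compliance impact in Step 1 (illustrative names).
HIGH_IMPACT_MODULES = {"Create Record — Workday", "Trigger Background Check"}

def route_alert(error_context):
    """Branch A: immediate notification for high-impact failures.
    Branch B: append to a triage sheet for everything else."""
    if error_context["module_name"] in HIGH_IMPACT_MODULES:
        return "critical"      # Slack @here or email to HR Director / HR IT lead
    return "triage_sheet"      # Google Sheet row, reviewed weekly

print(route_alert({"module_name": "Trigger Background Check"}))  # -> critical
print(route_alert({"module_name": "Append Row — Audit Sheet"}))  # -> triage_sheet
```

Routing on a maintained set of module names (rather than, say, error codes) keeps the severity decision aligned with the compliance mapping you did in Step 1.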

Critical alert message structure (Slack or email):

  • Subject/Header: ⚠️ HR Automation Failure — [Module Name] — [Timestamp]
  • Body line 1: Error message and code
  • Body line 2: Affected record (employee ID or candidate email from input_payload)
  • Body line 3: Direct link to the Make.com™ execution log for this scenario
  • Body line 4: Recommended next action (e.g., “Manually verify HRIS record for [Employee ID] before EOD”)

Non-critical log row (Google Sheet) columns:

Timestamp | Scenario ID | Module Name | Error Code | Error Message | Affected Record ID | Status (default: Open)

This two-tier system keeps Slack actionable and creates a persistent audit log that lives outside Make.com™’s rolling execution history window — which matters when a compliance review asks for failures from 90 days ago.

For additional patterns on routing HR error alerts, see the real-time critical HR alerts with webhooks guide for a deeper look at alert prioritization frameworks.


Step 5 — Configure Incomplete Executions and Scenario-Level Settings

Error handlers handle errors that occur within a running execution. But some failures stop a scenario before individual module error handlers can fire — for example, a webhook payload that exceeds the data size limit, or a scenario that hits the operations ceiling mid-run. Scenario-level settings protect against these edge cases.

In your scenario Settings panel, configure:

  • Allow storing incomplete executions: ON — This is the single most important setting for HR webhook resilience. When enabled, any execution that fails for any reason (including failures your error handlers do not catch) is stored as an incomplete execution that can be manually or automatically replayed once the root cause is resolved. Without this, failed data is gone.
  • Auto commit: OFF (for scenarios that write to multiple systems) — Prevents partial writes where one system is updated and another is not, which is a common source of data integrity problems in HRIS-to-ATS sync scenarios.
  • Max number of cycles: Set explicitly — For batch webhook scenarios, set this to match your expected bundle volume. An unbounded cycle count can exhaust your operations quota silently.
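The partial-write problem that Auto commit OFF protects against is easiest to see in miniature. A hypothetical sketch of a two-system sync where the second write fails (write_hris and write_ats are stand-ins for the two external modules; nothing here is Make.com API):

```python
def sync_new_hire(record, write_hris, write_ats):
    """Write one record to two systems. If the second write fails, the
    report shows the divergent state and flags the bundle for replay
    (the role incomplete executions play) instead of hiding it."""
    results = {"hris": "pending", "ats": "pending"}
    try:
        write_hris(record)
        results["hris"] = "written"
        write_ats(record)
        results["ats"] = "written"
    except ConnectionError as exc:
        results["error"] = str(exc)
        results["replay_needed"] = True  # preserve the bundle, never discard it
    return results

def failing_ats(record):
    raise ConnectionError("ATS timeout")

# Demo: HRIS write succeeds, ATS write fails -> the partial state is visible.
report = sync_new_hire({"employee_id": "E-1042"},
                       write_hris=lambda r: None,
                       write_ats=failing_ats)
print(report)
```

Without the stored state and replay flag, the HRIS and ATS silently disagree about the same employee, which is exactly the integrity failure the scenario settings exist to prevent.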

Enable scenario-level error notifications in Make.com™:

In your Make.com™ organization settings, navigate to Notifications and enable email alerts for scenario errors. This is a platform-level backstop — it catches any execution failure across all scenarios that do not have in-scenario handlers. For a mature HR automation stack, this should be the last line of defense, not the first.

For high-volume HR webhook scenarios — processing hundreds of application submissions or onboarding triggers per day — also review the guide on scaling HR automation for high-volume webhook scenarios, which covers queue management and operations budgeting alongside error handling.


How to Know It Worked

Do not trust your error notification system until you have broken it on purpose and watched it recover. Run these verification steps on your cloned test scenario before promoting to production:

  1. Trigger an intentional module failure: Open the module you added your error handler to. Temporarily set an invalid value — a malformed URL for an HTTP module, or a blank required field for an HRIS write module. Run the scenario manually.
  2. Confirm the error handler branch fires: In the execution detail view, the error handler branch should show as executed. The happy-path branch should show as not executed for this bundle.
  3. Verify variable capture: Open the Set Multiple Variables module output in the execution detail. All five variables (error_message, error_code, scenario_id, failed_at_timestamp, input_payload) should be populated with real values — not empty or null.
  4. Confirm the alert arrived: Check the destination — Slack channel, email inbox, or Google Sheet — and verify the alert contains the correct message structure with all fields populated, including the affected record identifier.
  5. Confirm the Incomplete Execution was stored: Go to Scenario → Incomplete Executions. The failed bundle should appear as a stored incomplete execution, available for replay.
  6. Restore the module to its correct configuration and run the scenario again with valid test data to confirm the happy path is unaffected.
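The intentional failure in step 1 can also be driven from the sending side: POST a test payload with a required field blanked to your cloned scenario's webhook URL. A sketch of building and sanity-checking such a payload (the required-field list is an assumption about your scenario, and the field values are invented):

```python
# Assumption: these are the fields your downstream HRIS write requires.
REQUIRED_FIELDS = ("employee_id", "start_date")

def make_test_payload(valid=True):
    """Build a webhook test payload; blanking a required field forces the
    HRIS write module to fail, which fires the error handler branch."""
    payload = {"employee_id": "TEST-001", "start_date": "2025-12-01", "role": "QA"}
    if not valid:
        payload["start_date"] = ""  # intentional failure trigger
    return payload

def missing_required(payload):
    """List the required fields that are absent or blank."""
    return [f for f in REQUIRED_FIELDS if not payload.get(f)]

print(missing_required(make_test_payload(valid=True)))   # -> []
print(missing_required(make_test_payload(valid=False)))  # -> ['start_date']
```

Checking the payload locally before sending it keeps the test deterministic: you know exactly which field the scenario should reject, so a passing happy path or a missing alert is unambiguous.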

If all six steps pass, your error notification system is production-ready. If any step fails, the most common causes are: the error handler directive is set to Ignore instead of Break, the Set Multiple Variables module is placed after the notification module rather than before it, or the notification module has a mapping error that causes the notification itself to fail silently.


Common Mistakes and Troubleshooting

Mistake 1 — The notification module fails and you never find out

If your Slack or email module inside the error handler throws an error, Make.com™ does not automatically send a second-level alert. The error handler itself can fail silently. Mitigation: add a second, simpler notification method as a fallback (e.g., if Slack is your primary alert, add a Gmail send as a secondary module on the same error handler branch). Keep the fallback simple — plain text, no complex mapping — so it cannot fail from a mapping error.

Mistake 2 — Error handler captures no useful data

The most common cause: Set Multiple Variables is placed after the notification module instead of before it. The notification fires before the variables are set, so the alert contains no error details. Always place Set Multiple Variables as the first module on every error handler branch, before any notification or logging module.

Mistake 3 — Using Ignore directive on modules that affect compliance records

Ignore is appropriate for truly non-consequential steps. In HR automation, almost no step is non-consequential. An ignored failure on a background check initiation module means a new hire starts without a completed check — and no one is alerted. Default to Break for any module that touches regulated or audit-sensitive data.

Mistake 4 — Testing in the production scenario

Testing an intentional failure in your live scenario can store incomplete executions with test data, corrupt execution history metrics, or trigger real notifications to your HR team. Always clone before testing.

Mistake 5 — Building one error handler for the entire scenario

A single route-level handler catches all errors, but it cannot apply different directives to different failure types. A transient API timeout on an HRIS write deserves a Retry directive; a malformed payload on a compliance trigger deserves a Break. Build module-level handlers for high-impact modules and use a route-level handler only as a catch-all for the remainder.

For a comprehensive walkthrough of what causes HR webhook failures in the first place, the troubleshooting guide for resilient HR webhooks covers root-cause diagnosis alongside the fix patterns described here.


What Robust Error Notification Infrastructure Actually Buys You

Asana’s Anatomy of Work research consistently shows that knowledge workers spend a significant portion of their time on work about work — status checking, chasing confirmations, manually verifying that automated processes completed. In HR specifically, that work about work includes checking whether the HRIS was updated after an offer acceptance, whether the background check was initiated, whether the onboarding sequence fired.

An error notification system with a persistent log eliminates that verification overhead for the failure case — and, critically, it gives you the confidence to stop checking the success case manually too. When you know failures will announce themselves, you stop auditing every execution by hand. That reclaimed time is what lets HR teams operate at the strategic level rather than the data-entry level.

Gartner research on HR technology consistently identifies data integrity and process visibility as the two highest-priority operational concerns for HR technology leaders. Error notification infrastructure directly addresses both: it flags data integrity failures in real time and creates the process visibility trail that satisfies both internal audit and external compliance review.

The mailhook error handling patterns covered in the error handling for resilient HR automations guide apply many of the same principles to email-triggered scenarios — worth reviewing if your HR automation stack uses both webhook and mailhook triggers.


Next Steps

Start with your highest-stakes HR webhook scenario — the one where a silent failure causes the most harm. Add the error handler, the Set Multiple Variables capture, and the two-tier alert routing. Test it deliberately. Then roll the pattern to your remaining scenarios using the same template.

Once your error notification layer is stable, the logical next evolution is proactive monitoring: building dashboards that surface error rate trends before they become incidents. The guide on why real-time HR workflows demand webhook triggers over polling covers the architectural principles that underpin both error handling and monitoring at scale.

For the full strategic framework — how webhook trigger architecture, error handling, and AI-layer decisions fit together into a coherent HR automation infrastructure — return to the complete guide to HR webhook and mailhook architecture.

Silent failures are a choice. Build the handler.