How to Configure Make™ Error Handling for Resilient HR Automation

Clean data filtering and mapping — covered in our data filtering and mapping logic that enforces integrity at the source — prevents most HR automation failures. Error handling catches the ones that slip through anyway. Together they form the two layers every production-grade HR scenario must have before it touches live candidate or employee data.

This guide walks through five concrete steps to configure Make™ error handling so your scenarios recover silently, log every exception, and never leave your ATS, HRIS, or payroll systems in a partial-write state.


Before You Start

  • Tools required: An active Make™ account with at least one live or draft scenario connecting two or more HR systems.
  • Time required: 2–4 hours to fully instrument an existing scenario; 30–60 minutes for a new scenario built with error handling from the start.
  • Prerequisite knowledge: You should be comfortable adding modules, configuring filters, and reading the scenario execution log. If not, review the essential Make™ modules for HR data transformation first.
  • Risk context: Rollback cannot undo writes made by non-transactional modules, so a scenario that looks atomic can still leave partial state behind. Test every error route in a sandbox scenario against a non-production data store before deploying to live systems.
  • What you will build: A scenario architecture where every module has an explicit error disposition (not the default silent failure), retry logic handles transient API outages, and a structured log captures every exception for audit review.

Step 1 — Audit Every Module for Its Most Likely Failure Mode

Before touching a single error route, map what can go wrong at each module. This audit drives every configuration decision that follows.

Open your scenario and work left to right. For each module, answer three questions:

  1. What data does this module require? List every required field and its expected format (string, integer, ISO date, valid email, etc.).
  2. What external dependency does this module call? Identify whether it connects to a third-party API (ATS, HRIS, payroll, email provider). External API calls fail for reasons entirely outside your control — rate limits, maintenance windows, credential expiry.
  3. What is the consequence if this module fails silently? A module that appends a log row failing silently is annoying. A module that writes an offer letter amount to payroll failing silently is a compliance event.

Document this as a simple table — module name, required inputs, external dependency (yes/no), consequence severity (low / medium / critical). You will use this table to assign the correct error directive in Step 3.

Based on our testing: Most HR scenarios have 2–4 modules with critical consequence severity. Those are the modules that need Rollback, not Break. Everything else can usually be handled with Resume or a logged Ignore.
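As a quick reference, the audit table and the severity rule of thumb above can be sketched in Python (the module names and severities here are hypothetical examples, not part of any real scenario):

```python
# Hypothetical audit table from Step 1: module -> (external dependency?, severity).
AUDIT = {
    "Watch form submissions": (True, "low"),
    "Create HRIS record": (True, "critical"),
    "Create payroll record": (True, "critical"),
    "Append log row": (False, "low"),
}

def suggested_directive(severity: str) -> str:
    """Rule of thumb: critical writes get Rollback; everything else usually Resume."""
    return "Rollback" if severity == "critical" else "Resume"

for module, (_, severity) in AUDIT.items():
    print(f"{module}: {suggested_directive(severity)}")
```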


Step 2 — Add Input Validation Filters Before Critical Modules

Error handlers catch failures after they occur. Filters prevent failures from occurring. Both are required — but filters are cheaper and faster.

For every module flagged as medium or critical severity in your Step 1 audit, add a Make™ filter immediately upstream. Configure the filter to verify:

  • Presence: Required fields (first name, last name, email, employee ID) are not empty.
  • Format: Email addresses match the standard pattern. Phone numbers are numeric and meet minimum length. Dates are ISO 8601.
  • Range: Salary figures fall within a plausible range. Start dates are not in the past. Numeric IDs are positive integers.
  • Referential integrity: The department code, job requisition ID, or cost center value exists as a valid option in the downstream system before you attempt to write it.

Use Make™’s built-in filter operators (Text operators, Number operators, Date operators) for basic checks. For complex pattern matching — validating phone formats, postal codes, or structured reference numbers — pair the filter with a RegEx operation. Our guide on automating HR data cleaning with Make™ and RegEx covers the exact patterns most HR teams need.
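For reference, the four filter checks above can be sketched as a single Python function. The field names, salary bounds, and department list are illustrative assumptions; in Make™ these checks live in filter conditions, not code:

```python
import re

VALID_DEPARTMENTS = {"ENG", "HR", "FIN"}  # hypothetical downstream reference list
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # simple check, not RFC-complete

def validate(record: dict) -> list[str]:
    """Return rejection reasons; an empty list means the record passes the filter."""
    reasons = []
    # Presence: required fields must be non-empty
    for field in ("first_name", "last_name", "email", "employee_id"):
        if not record.get(field):
            reasons.append(f"Missing required field: {field}")
    # Format: email must match the pattern
    if record.get("email") and not EMAIL_RE.match(record["email"]):
        reasons.append("Invalid email format")
    # Range: salary must be plausible (bounds are illustrative)
    salary = record.get("salary")
    if salary is not None and not 20_000 <= salary <= 500_000:
        reasons.append("Salary outside plausible range")
    # Referential integrity: department must exist in the downstream system
    if record.get("department") not in VALID_DEPARTMENTS:
        reasons.append("Unknown department code")
    return reasons
```

A blocked record's reasons list maps directly onto the rejection log row described below.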

When a filter blocks a record, that bundle stops at the filter — no error, no partial write, no noise in the error log. Configure the filter’s fallback route to write a rejection log row (module: “Input Validation,” reason: “Missing required field — email,” timestamp, raw input data) so the record is traceable without human monitoring.

Gartner research consistently identifies poor data quality as the root cause of the majority of automation failures. Filters are the lowest-cost intervention against that root cause.


Step 3 — Attach the Correct Error Handler Directive to Each Module

Make™ provides five error handler directives. Using the wrong one creates exactly the kind of invisible data corruption that HR teams discover weeks later during an audit or through a payroll complaint.

Here is when to use each:

Resume

The scenario continues as if the module succeeded, using a fallback value you define. Use this for non-critical modules where a default or null value is acceptable downstream — for example, a phone number field that is optional in the HRIS. Do not use Resume on modules that write financial or compliance-critical data.

Ignore

The error is swallowed and processing ends for that bundle. Use this only for genuinely optional enrichment steps — adding a social profile URL to a candidate record, for instance — where failure has zero downstream consequence. Always pair Ignore with a log write so the skipped record is visible.

Break

Processing stops for the current bundle. Make™ stores the run as an incomplete execution (which you can resolve or retry later) and moves to the next bundle. The partial state of the current bundle is preserved — meaning any writes that succeeded before the error remain in place. Use Break when the failed module is the last write in a sequence and the completed prior writes are correct and should be kept. Do not use Break as a default — it is commonly misapplied to multi-system writes where it leaves systems out of sync.

Rollback

Every action performed in the current bundle by transaction-aware (ACID) modules is undone before the error surfaces; modules that do not support transactions cannot be reverted, which is why sandbox testing of Rollback routes is mandatory. Use Rollback any time a scenario writes to more than one system in a single bundle and those writes must succeed together or not at all. The canonical HR example: provisioning a new hire in the HRIS, creating their payroll record, and sending their onboarding email. If payroll fails, you do not want the HRIS record to persist in isolation. Rollback ensures atomicity.

When connecting your ATS, HRIS, and payroll systems in a single scenario, Rollback should be your default directive on every multi-system write bundle until you have a specific reason to use something else.

Commit

Finalizes all actions up to the current point and starts a new transactional block. Use Commit when you have a sequence of writes that should be treated as two separate transactions — complete the first group, commit it, then begin the second group with its own error scope. This is the right pattern for a scenario that successfully creates an HRIS record (commit) and then separately attempts payroll provisioning (separate transaction with its own Rollback if needed). Commit prevents a downstream failure from undoing correctly completed upstream work.
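To make the two-transaction pattern concrete, here is a minimal Python sketch of the semantics. The system clients and function names are hypothetical; in Make™ you express this with Commit and Rollback directives, not code:

```python
# Hypothetical system clients: in Make these are modules, not function calls.
def provision_hris(record: dict, hris: list) -> None:
    hris.append(record["id"])

def provision_payroll(record: dict, payroll: list) -> None:
    if record.get("payroll_fails"):
        raise RuntimeError("payroll API error")
    payroll.append(record["id"])

def onboard(record: dict, hris: list, payroll: list) -> str:
    # Transaction 1: HRIS write, then Commit. This work is final.
    provision_hris(record, hris)
    # Transaction 2: payroll write, with its own Rollback scope.
    snapshot = list(payroll)
    try:
        provision_payroll(record, payroll)
    except RuntimeError:
        payroll[:] = snapshot  # Rollback: undo only this transaction's writes
        return "payroll_failed_hris_kept"
    return "ok"
```

A payroll failure here leaves the committed HRIS record in place but reverts the payroll write, which is exactly the downstream-failure isolation Commit provides.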

Based on our testing: The Rollback/Commit pattern for multi-system HR writes eliminates the partial-state problem that causes the majority of data discrepancy tickets in HR automation environments.


Step 4 — Configure Retry Logic for External API Modules

Transient failures — the API returned a 503, the rate limit was momentarily hit, the connection timed out — are the most common source of HR automation errors, and they are entirely recoverable without human intervention if you configure retry logic correctly.

For every module identified in Step 1 as having an external API dependency, configure retry behavior on its error handler route. In Make™, the built-in option is the Break directive's automatic retry settings (number of attempts and interval between attempts); for exponential back-off, build a custom error route with a Sleep module. Either way, apply these rules:

  1. Set the initial wait interval. Start at 5 minutes for most HR SaaS APIs. This gives the upstream service time to recover from a brief outage without hammering it with immediate retries.
  2. Use exponential back-off. Double the interval on each successive retry (5 min → 10 min → 20 min). This prevents your scenario from becoming part of the problem during a sustained outage.
  3. Cap retries at 3–5 attempts. Beyond 5 retries, a transient failure has become a sustained outage or a credential/configuration problem that requires human review.
  4. Route exhausted retries to an alert. When all retries are exhausted, trigger a notification — email, Slack, or your team’s preferred channel — with the module name, the error message, and the affected record identifier. Do not let exhausted retries disappear silently into a log no one monitors.
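The retry policy in the four rules above can be sketched in Python, assuming transient failures surface as ConnectionError; the function and parameter names are illustrative, since in Make™ this behavior is configured on the error handler route rather than written as code:

```python
import time

def call_with_retries(api_call, initial_wait_min=5, max_attempts=4,
                      alert=print, sleep=time.sleep):
    """Retry a flaky API call with exponential back-off; alert on exhaustion."""
    wait = initial_wait_min
    for attempt in range(1, max_attempts + 1):
        try:
            return api_call()
        except ConnectionError as err:  # transient failure class
            if attempt == max_attempts:
                alert(f"Retries exhausted after {max_attempts} attempts: {err}")
                raise
            sleep(wait * 60)            # wait before the next attempt
            wait *= 2                   # exponential back-off: 5 → 10 → 20 min
```

The injectable `sleep` and `alert` parameters exist so the policy can be exercised in a sandbox without waiting real minutes, mirroring the Step 5 requirement to test every failure path.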

Asana’s Anatomy of Work research found that employees spend significant time on work about work — status checking, chasing down failures, manual follow-up. Properly configured retry logic eliminates the entire category of “did the integration run last night?” status checking for transient API failures.

For scenarios that handle essential Make™ filters for recruitment data alongside API calls, place retry logic specifically on the API-calling modules, not on the filter or mapping modules — those do not have external dependencies and do not benefit from retry configuration.


Step 5 — Build a Structured Error Log and Test Every Failure Path

An error handler that does not write a log is nearly as dangerous as no error handler at all. Silent recovery is not recovery — it is deferred confusion.

Build the Error Log

On every error handler route (Resume, Ignore, Break, Rollback, exhausted Retry), add a module that writes a structured record to a Make™ Data Store or an append-only Google Sheet. Each log entry must capture:

  • Timestamp: ISO 8601, UTC.
  • Scenario name and module name: Where exactly the failure occurred.
  • Error code and message: The raw error returned by Make™ or the upstream API.
  • Bundle identifier: The candidate ID, employee ID, requisition ID, or whatever unique key identifies the affected record.
  • Raw input data: The data that was being processed when the failure occurred. This is essential for reconstructing what happened and manually re-processing the record if needed.
  • Directive applied: Which error handler fired (Resume, Rollback, etc.) so the log is self-explanatory without cross-referencing the scenario configuration.
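As a sketch, the log schema above maps to a simple structured record; a Python dataclass is used here purely for illustration, since in Make™ each field is a mapped value written to the Data Store or sheet:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ErrorLogEntry:
    scenario: str
    module: str
    error_code: str
    error_message: str
    bundle_id: str      # candidate, employee, or requisition ID
    raw_input: dict     # data being processed when the failure occurred
    directive: str      # which handler fired: Resume, Rollback, etc.
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()  # ISO 8601, UTC
    )
```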

Do not store sensitive personal data (SSNs, full compensation figures, health information) in error logs that are not protected at the same access-control level as your primary HR systems. Log the record identifier and error context — not the full payload — for records containing PII. This aligns with the GDPR-compliant data filtering practices that govern your broader HR automation stack.
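A minimal sketch of that redaction rule, assuming hypothetical field names:

```python
SENSITIVE_FIELDS = {"ssn", "salary", "health_notes"}  # hypothetical PII field names

def redact(record: dict) -> dict:
    """Keep identifiers and error context; mask sensitive values before logging."""
    return {k: ("[REDACTED]" if k in SENSITIVE_FIELDS else v)
            for k, v in record.items()}
```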

Test Every Failure Path Deliberately

This step is non-negotiable. Configure a sandbox version of your scenario pointed at non-production test data. Then deliberately trigger every failure mode you documented in Step 1:

  • Submit a bundle with a missing required field — verify the filter blocks it and writes a rejection log row.
  • Submit a bundle with a valid format but an invalid reference (a department code that does not exist) — verify the correct error directive fires.
  • Simulate an API failure by temporarily invalidating the API connection credentials — verify the retry logic fires the correct number of times and the alert notification is sent on exhaustion.
  • Submit a bundle that fails at the second module in a Rollback-protected sequence — verify the first module’s write is undone.

Forrester research on automation ROI consistently identifies inadequate testing of failure scenarios as a primary driver of post-deployment remediation costs. Every hour spent testing error paths in a sandbox eliminates multiple hours of incident response in production.

After testing, schedule a recurring error log review — weekly minimum for active HR automation scenarios. The predictive filtering that reduces manual error correction improves over time only when the error log data is used to refine upstream filters and mapping rules.


How to Know It Worked

Your error handling configuration is production-ready when all of the following are true:

  • Every module in your scenario has an explicit error handler — no module relies on Make™’s default silent failure behavior.
  • The error log receives a structured entry every time any error handler fires — no handler exits silently.
  • Sandbox testing confirmed that Rollback actually undoes writes in your HRIS or ATS test environment when a downstream module fails.
  • Retry logic fired correctly during API failure simulation and sent the alert notification on retry exhaustion.
  • Input validation filters blocked malformed test records before they reached any write module.
  • Your HR or ops team has a documented process for reviewing the error log on a regular cadence and re-processing flagged records.

If any of these conditions are not met, the scenario is not production-ready regardless of how well the happy path runs.


Common Mistakes to Avoid

Using Break as the Default Directive

Break is not a safe default. It preserves partial writes, which is almost never what you want in a multi-system HR workflow. Default to Rollback for any module that writes to a live system, and use Break only when you have a specific reason to keep the prior writes.

Logging to an Unmonitored Destination

Writing errors to a Google Sheet no one opens weekly is marginally better than no log. Assign explicit ownership of error log review. Without ownership, the log becomes a graveyard of silent failures.

Skipping Error Handling on “Simple” Scenarios

Single-module scenarios and short two-step scenarios still fail. A scenario that watches a form submission and creates a candidate record in your ATS has exactly one external API call — and that call can fail. Add error handling to every scenario regardless of length.

Treating Error Handling as a One-Time Setup

Every time you add a module to an existing scenario, audit the new module against the Step 1 framework and add the appropriate error handler before activating the change. Error handling configurations go stale as scenarios evolve.

Storing Full PII Payloads in Error Logs

Error logs often end up in less-protected storage than primary HR systems. Log record identifiers and error context. Do not log full candidate or employee records containing sensitive personal data unless the log destination has equivalent access controls.


What Comes Next

Resilient error handling is one layer of a production-grade HR automation architecture. The upstream layer — ensuring clean, validated data enters your scenarios before error handling ever has to fire — is covered in our guide on mapping resume data to ATS custom fields and the techniques for routing complex HR data flows across multiple systems.

For teams that have error handling and data validation in place and are ready to reduce the manual work that remains, the guide on eliminating manual HR data entry mistakes covers the next layer of operational leverage.

The full architecture — from data filtering through mapping, routing, error handling, and compliance — is laid out in the parent pillar on data filtering and mapping logic that enforces integrity at the source. Start there if you are building or auditing an HR automation stack from scratch.