Make HR Automation: Build Logic for Smarter Decisions

Speed is not the problem HR automation solves. The problem is judgment — knowing which candidate advances, which onboarding track fires, which payroll record waits for verification. Automation without logic just executes the wrong steps faster.

This post is a companion to the parent pillar on data filtering and mapping in Make™ for HR automation. Where that pillar covers data integrity foundations, this satellite goes deeper on the decision logic that makes those clean data pipelines actually intelligent. Below are 10 Make™ logic workflow patterns, ranked by their impact on reducing HR errors and manual decision overhead — from the basics every team needs to the advanced patterns that separate a pilot scenario from a production system.


1. Threshold Filters: The First Gate Every HR Workflow Needs

Filters are binary: data passes or it stops. Every HR automation should have at least one filter before any consequential action fires.

  • What it does: Evaluates a single condition — score above X, status equals Y, field is not empty — and blocks records that fail.
  • HR use cases: Block sub-threshold candidate scores from advancing to interview scheduling; prevent payroll triggers from firing if employment type is unconfirmed; stop onboarding tasks from launching before background check status is “cleared.”
  • Why it ranks first: Filters are the cheapest logic layer to build and the most consistently absent in broken HR scenarios. Adding a filter before a downstream action is a five-minute fix that eliminates entire categories of errors.
  • Configuration note: In Make™, filters live on the connection between modules. Click the wrench icon on the connector, choose Set up a filter, and define your condition in the visual builder; no code required.

Verdict: Non-negotiable. If your HR scenario fires a consequential action without a filter validating the incoming record, the scenario is incomplete by definition. See the deeper breakdown in the guide to essential Make™ filters for recruitment data.
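Make™'s filter builder is visual, but the condition it evaluates is just a predicate over the incoming record. A minimal Python sketch of the same gate logic (the field names and the 70-point threshold here are illustrative, not from any particular ATS):

```python
def passes_gate(record: dict) -> bool:
    """Return True only when the record is safe to act on downstream."""
    score_ok = record.get("assessment_score", 0) >= 70       # example threshold
    status_ok = record.get("background_check") == "cleared"  # example status field
    return score_ok and status_ok

# A record missing either condition is blocked before any action fires.
passes_gate({"assessment_score": 82, "background_check": "cleared"})  # True
passes_gate({"assessment_score": 90})                                 # False
```

Note the defensive default in `record.get(...)`: a missing field fails the gate rather than crashing the check, which is exactly the behavior you want before a consequential action.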


2. Router Modules: One Trigger, Multiple Intelligent Paths

When a single HR event can produce different outcomes depending on context, a Router replaces the need for separate scenarios per case.

  • What it does: Creates multiple outbound paths from a single module. Each path has its own filter conditions; Make™ evaluates routes in order, and data travels down every path whose conditions match, not only the first.
  • HR use cases: Route candidates to technical interview, behavioral assessment, or rejection based on role type and assessment score. Branch onboarding tasks for full-time, part-time, and contractor employees. Direct expense approvals to department heads based on cost center.
  • Why it matters: Without a router, teams build parallel scenarios that duplicate trigger logic and drift out of sync. One router consolidates the decision into a single maintainable place.
  • Configuration note: Designate the last route as the fallback route (an option in that route's filter settings) to catch records that match no other path; this prevents silent drops. A route with no conditions at all would receive every record, not just the unmatched ones.

Verdict: Essential for any HR process with more than one outcome. Combine with filters on each branch for multi-criteria decision trees. The full router approach is covered in the guide to HR data routing with Make™ routers.
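In plain code terms, a router is a list of (path, condition) pairs evaluated in order, with a fallback for anything unmatched. A hedged sketch, using invented employment-type routes:

```python
def route(record: dict, routes: list, fallback: str = "unmatched_review") -> list:
    """Evaluate routes top to bottom; the record travels down every
    matching path, or down the fallback when nothing matches."""
    matched = [name for name, passes in routes if passes(record)]
    return matched if matched else [fallback]

# Illustrative routes; field names are examples, not a real HRIS schema.
onboarding_routes = [
    ("full_time_track",  lambda r: r.get("employment_type") == "full-time"),
    ("part_time_track",  lambda r: r.get("employment_type") == "part-time"),
    ("contractor_track", lambda r: r.get("employment_type") == "contractor"),
]

route({"employment_type": "contractor"}, onboarding_routes)  # ['contractor_track']
route({"employment_type": "intern"}, onboarding_routes)      # ['unmatched_review']
```

The fallback return is the code equivalent of Make™'s fallback route: nothing drops silently.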


3. Iterator + Filter Combination: Logic at Scale for Batch HR Data

HR data arrives in batches — 50 applications after a job post closes, 200 employees in a performance review cycle, a full payroll cohort. Iterators process each record individually; filters inside the iteration apply logic to each one.

  • What it does: The Iterator unpacks a collection (array) into individual items. A filter after the Iterator evaluates each item against your criteria before any downstream action fires.
  • HR use cases: Evaluate each application in a bulk import and advance only those meeting minimum qualifications. Process each employee in a training completion list and send manager alerts only where completion is overdue. Review each record in a payroll batch and flag anomalies before submission.
  • Why it matters: Without iteration, you apply logic to the entire batch or none of it. With an Iterator, you get record-level precision at any volume.
  • Configuration note: Make™ processes iterators sequentially by default. For large batches, monitor your scenario’s operation count against your Make™ plan limits.

Verdict: Required for any HR workflow that touches collections. Pairs directly with the Aggregator (see #4). Detailed module walkthrough available in the guide to Iterator and Aggregator modules for complex HR data.
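The Iterator-plus-filter pattern is, logically, a per-record predicate applied inside a loop. A small sketch with invented application data (the 60-point minimum and `license_verified` field are examples):

```python
applications = [
    {"name": "A. Rivera", "score": 78, "license_verified": True},
    {"name": "B. Chen",   "score": 52, "license_verified": True},
    {"name": "C. Okafor", "score": 91, "license_verified": False},
]

def meets_minimum(app: dict) -> bool:
    """The filter that runs once per iterated record."""
    return app["score"] >= 60 and app["license_verified"]

# The Iterator unpacks the batch; the filter decides record by record.
advanced = [app for app in applications if meets_minimum(app)]
# Only A. Rivera advances; the others are stopped individually.
```

This is the record-level precision the section describes: the batch is never accepted or rejected wholesale.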


4. Aggregators: Collapse Iterated Results Into Actionable Summaries

After an Iterator processes individual records, the Aggregator compiles the results — counts, concatenated strings, totals, arrays — back into a single output for reporting or downstream routing.

  • What it does: Collects outputs from all iterations and combines them into one bundle (text, numeric total, array, or table row).
  • HR use cases: Aggregate screened candidate counts per role into a hiring dashboard update. Compile a list of employees with overdue compliance training into a single manager email. Summarize payroll anomaly counts into a finance alert instead of sending one alert per record.
  • Why it matters: Iterating without aggregating produces noise — one notification per record instead of one actionable summary. Aggregators make batch logic usable.
  • Configuration note: Use the Array Aggregator when downstream modules need to process the list further. Use the Text Aggregator when the output is a human-readable message. Use the Numeric Aggregator for counts and sums feeding dashboards or conditional checks.

Verdict: Iterator and Aggregator are a matched pair. Build them together or plan to regret the notification volume.
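The three aggregator types map to familiar operations: collecting a list (array), joining a string (text), and counting or summing (numeric). A sketch over invented training-completion records:

```python
iterated_results = [
    {"employee": "D. Patel", "training_overdue": True},
    {"employee": "E. Novak", "training_overdue": False},
    {"employee": "F. Silva", "training_overdue": True},
]

# Array-style aggregation: a list a downstream module could process further.
overdue = [r["employee"] for r in iterated_results if r["training_overdue"]]

# Numeric-style aggregation: one count for a dashboard or conditional check.
overdue_count = len(overdue)

# Text-style aggregation: one manager email instead of one alert per record.
summary = f"{overdue_count} employees overdue: " + ", ".join(overdue)
```

One summary message replaces three per-record notifications, which is the entire point of aggregating after iterating.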


5. Conditional Routing by Employment Type: The Onboarding Logic Baseline

Onboarding full-time employees, part-time employees, and contractors as if they were identical is one of the most common and costly HR automation mistakes. Employment type determines which tasks fire, which systems receive access requests, and which documents are generated.

  • What it does: Uses a router with employment-type conditions to branch the onboarding scenario into distinct tracks — each with its own task list, system provisioning steps, and documentation.
  • HR use cases: Full-time track triggers benefits enrollment, equipment provisioning, and 90-day check-in scheduling. Contractor track triggers NDA routing, limited system access, and W-9 collection. Part-time track triggers a reduced task set with adjusted benefit eligibility checks.
  • Why it matters: McKinsey research finds that poorly executed onboarding directly affects early retention. Sending contractors the wrong documentation or missing benefits enrollment for eligible employees creates compliance exposure and employee frustration simultaneously.
  • Configuration note: Map employment type to a standardized field in your HRIS before this router runs. Downstream logic is only as reliable as the field it reads.

Verdict: This is the single most impactful router build for HR teams with mixed workforce compositions. Build it before anything else in the onboarding scenario.
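The router-by-employment-type pattern reduces to a lookup from a normalized type field to a task track. A hedged sketch; the track contents mirror the use cases above, and the normalization step reflects the configuration note about standardizing the HRIS field first:

```python
# Illustrative tracks; real task lists would come from your onboarding system.
ONBOARDING_TRACKS = {
    "full-time":  ["benefits_enrollment", "equipment_provisioning", "90_day_checkin"],
    "part-time":  ["reduced_task_set", "benefit_eligibility_check"],
    "contractor": ["nda_routing", "limited_system_access", "w9_collection"],
}

def onboarding_tasks(employment_type: str) -> list[str]:
    """Normalize the HRIS field before routing; unknown types go to review."""
    key = employment_type.strip().lower()
    return ONBOARDING_TRACKS.get(key, ["manual_review"])
```

The `manual_review` default is the fallback route in miniature: a messy or unexpected employment type never silently receives the wrong track.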


6. Score-Based Candidate Advancement Logic: Eliminate Manual Screening Triage

Assessment scores sitting in a spreadsheet waiting for a recruiter to review them are a process bottleneck, not a quality control measure. Logic-based advancement routes scored candidates automatically based on defined thresholds.

  • What it does: Reads a numeric score field from an assessment tool, applies a filter or router based on defined ranges, and triggers the appropriate next action — interview invite, hold queue, or rejection — without human review for standard cases.
  • HR use cases: Candidates above the threshold trigger automated interview scheduling. Candidates in a “review zone” (within 10 points of the threshold) are routed to a recruiter review queue. Candidates below the minimum trigger an automated decline with a personalized message.
  • Why it matters: Asana’s Anatomy of Work research finds knowledge workers switch tasks frequently due to unnecessary coordination overhead. Manual screening triage is exactly that overhead — a task humans perform not because judgment is needed but because the logic was never built.
  • Configuration note: Define score ranges in a data store or a configuration module at the top of the scenario so thresholds can be updated without editing the workflow structure.

Verdict: High-ROI, low-complexity build. This is where Sarah — the HR director at a regional healthcare organization — reclaimed 6 hours per week. The scenario logic that drove her 60% reduction in hiring time started with exactly this pattern. For the interview scheduling automation that completes this flow, see the guide to conditional logic for interview scheduling automation.
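The three-zone routing above is a pair of threshold comparisons. A sketch with example values (in a live scenario these constants would sit in a data store or configuration module, per the note above):

```python
THRESHOLD = 70     # example values; keep these in a data store so they
REVIEW_BAND = 10   # can change without editing the workflow structure

def advancement_route(score: int) -> str:
    """Map a numeric assessment score to the next pipeline action."""
    if score >= THRESHOLD:
        return "schedule_interview"
    if score >= THRESHOLD - REVIEW_BAND:
        return "recruiter_review_queue"  # close call: human judgment applies
    return "automated_decline"
```

The review band is what keeps this defensible: automation handles the clear cases, and the borderline ones still reach a recruiter.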


7. Deduplication Logic: Stop Duplicate Candidates Before They Corrupt the Pipeline

Duplicate candidate records create double-interview scheduling, inflated funnel metrics, and ATS data that HR leadership cannot trust. Logic-based deduplication catches duplicates at entry, not during quarterly data cleanup.

  • What it does: Checks an incoming record against a data store or ATS lookup before creating a new candidate profile. If a matching record exists (by email, phone, or name + location composite), the workflow updates the existing record or routes to a review queue instead of creating a duplicate.
  • HR use cases: Prevent the same candidate from appearing twice when they apply through multiple job boards. Catch re-applicants who should be flagged for prior evaluation notes. Merge duplicate records created by manual imports.
  • Why it matters: MarTech’s 1-10-100 rule (Labovitz and Chang) holds that it costs $1 to verify data at entry, $10 to clean it later, and $100 to remediate downstream errors caused by bad data. Deduplication logic is the $1 investment.
  • Configuration note: Email is the most reliable deduplication key for candidates. Name matching alone produces false positives at scale.

Verdict: Build deduplication into every candidate intake workflow from day one. The full pattern is covered in the guide to filtering candidate duplicates in Make™.
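The lookup-before-create step is an upsert keyed on a normalized email. A sketch with a plain dictionary standing in for the data store or ATS lookup:

```python
def upsert_candidate(record: dict, store: dict) -> str:
    """Deduplicate on email before creating a profile. Records without a
    usable key route to review rather than risking a duplicate."""
    key = record.get("email", "").strip().lower()
    if not key:
        return "review_queue"
    if key in store:
        store[key].update(record)  # re-applicant: update, don't duplicate
        return "updated_existing"
    store[key] = dict(record)
    return "created_new"
```

Lowercasing the key is what catches the same candidate applying as `Ana@...` on one job board and `ana@...` on another; name-only matching, as noted above, is not reliable at scale.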


8. Compliance Flag Routing: Automate the Escalation, Not the Decision

HR compliance decisions — background check outcomes, I-9 discrepancies, policy violations — require human judgment. What automation handles is the routing: making sure the right person sees the flag within the right timeframe.

  • What it does: Reads a compliance status field and routes flagged records to the appropriate HR reviewer, legal contact, or hiring manager with full context attached — not just a notification, but the record, the flag type, and the required response window.
  • HR use cases: Background check results with adverse findings route to HR compliance officer with a 72-hour response timer. I-9 discrepancies route to the onboarding coordinator with the specific field requiring correction. Policy violation reports route to the employee’s direct manager and HR business partner simultaneously.
  • Why it matters: SHRM data consistently shows that compliance failures stem from process gaps, not policy gaps. The policy exists; the routing to enforce it does not.
  • Configuration note: Never automate the compliance decision itself. Automate the routing, the notification, the deadline tracking, and the documentation — not the outcome determination.

Verdict: This is where automation earns its compliance value. The logic handles escalation; humans handle judgment. That boundary must be explicit in the scenario design.
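Routing-not-deciding translates to a lookup table of flag type to reviewer and response window, with the full record attached. A hypothetical sketch; the flag names, reviewer roles, and hour windows below are illustrative examples, not a compliance standard:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical escalation table: flag type -> (reviewer, response window in hours)
ESCALATION_RULES = {
    "background_adverse": ("hr_compliance_officer", 72),
    "i9_discrepancy":     ("onboarding_coordinator", 48),
    "policy_violation":   ("direct_manager_and_hrbp", 24),
}

def escalate(flag_type: str, record: dict) -> dict:
    """Route the flag with full context attached; the decision stays human."""
    reviewer, hours = ESCALATION_RULES.get(flag_type, ("hr_operations", 24))
    return {
        "route_to": reviewer,
        "respond_by": datetime.now(timezone.utc) + timedelta(hours=hours),
        "flag_type": flag_type,
        "record": record,  # the record itself, not just a notification
    }
```

Note what the function never returns: an outcome. It produces a routed package with a deadline, which is the boundary the verdict above insists on.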


9. Error-Handling Routes: The Logic Layer Most Teams Skip

An HR workflow that fails silently is more dangerous than one that fails loudly. When a module errors out mid-scenario, the downstream record is in an unknown state — partially written, not written, or written incorrectly. Error-handling routes define what happens instead of hoping nothing goes wrong.

  • What it does: Attaches error handler modules (Break, Resume, or Rollback) to critical modules. When that module errors, the handler fires its own path — logging the failure, notifying an HR admin, quarantining the record, and resuming the scenario with the next item.
  • HR use cases: ATS API timeout during candidate creation routes to a retry queue with the original data preserved. Payroll write failure triggers an immediate Slack alert to the HR operations manager with the employee ID and error code. Offer letter generation failure quarantines the record and flags it for manual review before any communication fires.
  • Why it matters: Parseur research estimates manual data entry costs organizations approximately $28,500 per employee per year. Errors in automated workflows that go undetected compound that number — bad records advance, trigger downstream costs, and require expensive remediation. Error handling is the circuit breaker.
  • Configuration note: Every module that writes to an external system (ATS, HRIS, payroll) needs an error handler. Every module that reads from an API needs a retry or fallback. Build this in during construction, not as a patch after the first production failure.

Verdict: The difference between a demo scenario and a production scenario is error handling. The complete guide to resilient scenario design is in the post on error handling in Make™ for HR operations.
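The retry-then-quarantine behavior described above can be sketched as a loop that keeps processing after a failure instead of halting the batch. `write_fn` here stands in for any external write (ATS, HRIS, payroll):

```python
def process_with_handler(records: list, write_fn, max_retries: int = 2):
    """Retry transient write failures; quarantine what still fails so the
    run resumes with the next record instead of failing silently."""
    succeeded, quarantined = [], []
    for rec in records:
        for attempt in range(max_retries + 1):
            try:
                write_fn(rec)
                succeeded.append(rec)
                break
            except Exception as err:
                if attempt == max_retries:
                    # Original data preserved alongside the error for review.
                    quarantined.append({"record": rec, "error": str(err)})
    return succeeded, quarantined
```

The quarantine list is where an alert to the HR operations manager would fire, with the record and error code attached, matching the payroll use case above.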


10. Data Store State Management: Logic That Persists Across Scenario Runs

Some HR logic needs memory. Did this candidate already receive a rejection email this week? Has this employee’s training completion been recorded for the current cycle? Has this position’s offer budget been flagged for this quarter? Data stores give Make™ scenarios persistent state — the ability to remember between runs.

  • What it does: Reads from and writes to a Make™ data store, the platform’s built-in lightweight database. Because data stores persist at the account level, state survives between runs and can be shared across scenarios. Logic checks the data store before acting: if the record exists, the scenario takes a different path than if it does not.
  • HR use cases: Store processed candidate IDs to prevent duplicate processing across multiple scenario runs. Track which employees have received compliance training reminders this cycle to prevent over-messaging. Hold offer letter approval status so the hiring manager confirmation triggers the next step only once.
  • Why it matters: Without state management, every scenario run treats every record as new. In HR, that produces duplicate communications, double-processed payroll records, and redundant system writes that erode both candidate experience and data integrity.
  • Configuration note: Keep data store schemas simple and documented. Complex data stores that accumulate stale records become a maintenance liability. Build a cleanup routine into long-running HR scenario data stores.

Verdict: State management is what graduates a Make™ HR scenario from “works in a demo” to “runs reliably in production.” Build it for any workflow that runs on a recurring schedule against the same population of records.
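The check-before-act pattern is a simple idempotency guard: look up the record in persistent state, and act only if it has not been seen. A sketch with a Python `set` standing in for the data store:

```python
def run_once(record_id: str, processed: set, action) -> str:
    """Check persisted state before acting; skip already-processed records."""
    if record_id in processed:
        return "skipped"         # e.g., reminder already sent this cycle
    action(record_id)
    processed.add(record_id)     # remember for the next scenario run
    return "processed"
```

In a live scenario the `processed` set would be a data store keyed by candidate or employee ID, and the cleanup routine mentioned above would purge entries when the cycle resets.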


Building These Patterns Into a Coherent HR Automation Architecture

These 10 logic patterns are not standalone tricks — they compose. A candidate intake scenario combines a threshold filter (#1) with a router (#2) branching on role type, an iterator (#3) for batch imports, deduplication logic (#7) at the entry point, compliance flag routing (#8) for background check results, and error handling (#9) on every ATS write. That is not a complex scenario — it is a complete one.

Gartner and Deloitte both identify process reliability as the prerequisite for scaling HR technology investment. Harvard Business Review research on AI-HR partnerships reinforces the same sequencing: deterministic automation first, AI-assisted judgment second. The logic patterns in this post are the deterministic layer.

For the full data integrity foundation these logic patterns sit on top of, return to the parent pillar on data filtering and mapping in Make™ for HR automation. To extend these patterns into your full HR tech stack, the guide on connecting your ATS, HRIS, and payroll with Make™ covers the integration layer these scenarios write into.

Nick — a recruiter at a small staffing firm processing 30–50 PDF resumes per week — reclaimed 150+ hours per month for his team of three by combining the iterator pattern (#3), threshold filtering (#1), and deduplication logic (#7) into a single intake scenario. That is not a technology story. It is a logic story. The technology was already available. The logic layer is what made it work.