How to Build Predictive Filtering in Make™ for Error-Proof HR Workflows
HR automation breaks at the data layer — not the AI layer, not the integration layer. A candidate record with a missing required field, a salary entered as “$85,000” instead of “85000,” or a duplicate application that slips past your ATS creates downstream damage that no automation platform can repair after the fact. The fix is predictive filtering: a structured approach to defining what valid data looks like, catching deviations at the entry point, and routing bad records to a named human owner before they contaminate your pipeline.
This guide walks through exactly how to build that filter logic in Make™, step by step — from prerequisites through verification. It is the tactical counterpart to the broader data filtering and mapping in Make™ for HR automation framework. If you have not read that pillar first, start there for strategic context. Come back here when you are ready to build.
Before You Start
Predictive filtering is not a Make™ feature you configure in isolation. It requires upfront data work. Skip these prerequisites and your filter stack will be incomplete from day one.
What You Need
- An active Make™ account with access to the scenario builder and at least one live data source (ATS webhook, Google Form, HRIS API, or spreadsheet).
- A field inventory for your target workflow. List every field the scenario will receive, its expected data type (text, number, date, boolean), and whether it is required or optional. This document is your filter specification.
- A defined list of valid values for any enumerated fields (department codes, employment types, status flags). Filters that check against undefined value sets always let exceptions through.
- A fallback destination. Decide before you build where failed records go — a Slack channel, an email alias, a Google Sheet row, or an Airtable base. Every filter path that rejects a record must have a named owner.
- A test dataset of at least 5 records that includes: one clean valid record, one with a missing required field, one duplicate, one with a numeric field entered as text, and one with an out-of-range value.
Time Estimate
Single-use-case filter (candidate qualification check): 2–4 hours to build, test, and deploy. Multi-branch workflow (onboarding routing or payroll change validation) with full fallback handling: 1–2 days.
Risk to Know
Overly aggressive filters that reject too broadly will suppress valid records and create false exceptions. Build tight but not brittle — use OR logic where legitimate variation exists in source data formatting.
Step 1 — Map Your Data Quality Rules Before Touching the Scenario Builder
Open a blank document and answer three questions for every field your workflow will process: What is the expected data type? What values are valid? What should happen when the field fails validation? This document is your filter specification. Do not skip it.
For a candidate screening workflow, a field inventory might look like this:
- years_experience — Number. Valid: 0–40. Invalid if: text, blank, negative, or greater than 40. On fail: route to exception log + Slack alert.
- email — Text. Valid: matches standard email regex pattern. Invalid if: blank or malformed. On fail: reject and notify source system.
- role_applied — Text. Valid: exact match to one of 12 open role codes. Invalid if: blank, misspelled, or not on approved list. On fail: route to recruiter triage queue.
- consent_gdpr — Boolean. Valid: true only. Invalid if: false or blank. On fail: hard stop — do not process record, log rejection with timestamp.
This specification is what you translate directly into Make™ filter conditions in the next step. Every row becomes a filter condition. Every “On fail” instruction becomes a router path or error handler.
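None of this requires code in Make™, but it can help to see the specification as executable logic before you configure anything. Here is a minimal Python sketch of the same rules (field names follow the inventory above; the role codes are hypothetical placeholders):

```python
import re

# Illustrative validation rules mirroring the field inventory above.
# Each rule returns True when the field value is valid.
RULES = {
    "years_experience": lambda v: isinstance(v, (int, float)) and 0 <= v <= 40,
    "email": lambda v: isinstance(v, str) and bool(
        re.match(r"^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$", v)
    ),
    "role_applied": lambda v: v in {"ENG-01", "HR-02"},  # hypothetical role codes
    "consent_gdpr": lambda v: v is True,  # strict boolean: only True passes
}

def validate(record):
    """Return the list of field names that failed validation."""
    return [field for field, rule in RULES.items()
            if not rule(record.get(field))]

failures = validate({
    "years_experience": "7 years",   # numeric field arrived as text
    "email": "jane@example.com",
    "role_applied": "ENG-01",
    "consent_gdpr": "TRUE",          # string, not boolean: hard fail
})
print(failures)  # -> ['years_experience', 'consent_gdpr']
```

Each entry in `RULES` corresponds to one row of the specification; each name in the returned list corresponds to one "On fail" path you must build.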
Consult our post on essential Make™ filters for recruitment data to identify which filter operators map to each validation type before you begin configuring.
Step 2 — Build Your Primary Filter Stack
With your field specification complete, open Make™ and create or open the scenario you’re enhancing. Add a Filter module immediately after your trigger — this is the first gate every incoming record must pass.
Configuring Multi-Condition Filters
- Click the small circle between your trigger module and the next action module to add a filter.
- Give the filter a descriptive label — “Required Fields Validation” or “Candidate Qualification Gate.” Labels matter when you return to debug six weeks later.
- Add your first condition. Select the field from the bundle, choose the appropriate operator (e.g., “exists,” “greater than,” “matches pattern”), and enter the value or regex.
- Click “Add AND rule” to add each additional required-field check. Use AND when all conditions must be true simultaneously. Use OR when legitimate variation exists — for example, if a field may be labeled “FT” or “Full-Time” depending on source system.
- Chain multiple filter modules in sequence when your logic separates into distinct categories (e.g., a data-type filter followed by a business-rule filter). Chained filters are easier to audit than a single filter with 15 conditions.
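The AND/OR distinction above can be sketched as plain boolean logic. This Python illustration is not Make™ syntax; the field names and the accepted employment-type variants are hypothetical:

```python
# Illustrative AND/OR filter logic: all required-field checks must hold
# (AND), while the employment-type check accepts legitimate source-system
# variants (OR), e.g. "FT" or "Full-Time".
def passes_filter(record):
    employment_ok = record.get("employment_type") in ("FT", "Full-Time")  # OR group
    required_ok = all(record.get(f) not in (None, "")
                      for f in ("email", "role_applied"))                 # AND group
    return employment_ok and required_ok

print(passes_filter({"employment_type": "Full-Time",
                     "email": "a@b.co", "role_applied": "ENG-01"}))  # -> True
print(passes_filter({"employment_type": "Part-Time",
                     "email": "a@b.co", "role_applied": "ENG-01"}))  # -> False
```

In Make™ terms, the OR group is one filter rule group with two rows, and the AND group is a separate set of rules joined with "Add AND rule."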
Regex Conditions for Text Field Validation
For email, phone, and formatted ID fields, use the “matches pattern” operator with a standard regex. Make™ evaluates these natively — no custom function module required. Example for email validation: ^[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}$. Paste it directly into the pattern field. Our deeper guide on Make™ and RegEx for HR data cleaning covers the full pattern library for common HR fields.
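Before pasting a pattern into the filter, it is worth verifying it against sample values outside Make™. A quick Python check using the same pattern:

```python
import re

# The email pattern from the text; re.fullmatch makes the ^...$ anchoring explicit.
EMAIL_RE = re.compile(r"[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}")

samples = ["jane.doe@example.com", "no-at-sign.example.com", "x@y.c"]
for s in samples:
    print(s, "->", bool(EMAIL_RE.fullmatch(s)))
# jane.doe@example.com -> True
# no-at-sign.example.com -> False  (no @)
# x@y.c -> False  (TLD shorter than 2 characters)
```

Testing the pattern against a handful of known-good and known-bad values takes a minute and catches escaping mistakes before they reach production.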
Step 3 — Add a Router for Multi-Branch Routing Logic
A filter is binary — pass or fail. When your workflow needs to send records down different paths based on field values (not just exclude them), you need a Router module.
Add a Router immediately after your primary filter stack. Each router path has its own filter condition, which you configure the same way as a standalone filter. Common routing branches for HR workflows:
- New hire vs. rehire — route based on whether an employee ID already exists in your HRIS.
- Remote vs. on-site — route based on work_location field to trigger different IT provisioning and equipment order paths.
- Department code — route to department-specific Slack channels, approval chains, or onboarding templates.
- Salary band — route payroll change requests above a defined threshold to a secondary approval workflow before HRIS update.
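Conceptually, a router is a set of named branch conditions, and one record can match more than one branch. A Python sketch of the branches above (branch names and the salary threshold are hypothetical):

```python
# Illustrative router: each branch is (name, condition); a record is routed
# down every branch whose condition it matches, as a Make router can send
# one bundle along multiple paths.
ROUTES = [
    ("rehire_path",         lambda r: r.get("employee_id") is not None),
    ("remote_provisioning", lambda r: r.get("work_location") == "remote"),
    ("secondary_approval",  lambda r: r.get("salary_change", 0) > 20000),  # hypothetical threshold
]

def route(record):
    return [name for name, cond in ROUTES if cond(record)]

print(route({"employee_id": "E-1042", "work_location": "remote",
             "salary_change": 5000}))
# -> ['rehire_path', 'remote_provisioning']
```

Writing the branch table this way first makes it obvious which conditions overlap and which records could silently match no branch at all.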
The router pattern is also the right tool for filtering candidate duplicates in Make™ — one branch processes records that clear the uniqueness check, a second branch flags and parks duplicates for review.
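The uniqueness check behind that duplicate branch reduces to a seen-set keyed on a stable identifier. A Python sketch, assuming normalized email as the key (a hypothetical choice; use whatever identifier your ATS treats as canonical):

```python
# Illustrative duplicate detection: key each application on a normalized
# email and park repeats for review instead of processing them.
def split_duplicates(applications):
    seen, unique, duplicates = set(), [], []
    for app in applications:
        key = app["email"].strip().lower()  # normalize before comparing
        (duplicates if key in seen else unique).append(app)
        seen.add(key)
    return unique, duplicates

apps = [{"email": "Jane@x.com"}, {"email": "jane@x.com "}, {"email": "bob@x.com"}]
unique, dupes = split_duplicates(apps)
print(len(unique), len(dupes))  # -> 2 1
```

Note the normalization step: without `strip()` and `lower()`, "Jane@x.com" and "jane@x.com " would count as two distinct applicants.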
For the complete architecture of multi-branch HR data flows, see our guide on automating complex HR data flows with Make™ routers.
Step 4 — Build the Fallback Path for Every Failed Record
This step is not optional. Every filter or router path that rejects a record must terminate in an explicit, visible action. Silent failures are the most dangerous failure mode in HR automation — they look like the workflow is running correctly while records disappear.
Three Fallback Patterns That Work
- Slack or Teams alert to a named HR owner. Configure a Send Message module on the fallback path. Include: the field that failed, the value that was received, the record ID, and a timestamp. Keep the message to 3–4 lines — long alerts get ignored.
- Exception log row in Google Sheets or Airtable. Append one row per failed record with all relevant fields plus a “failure_reason” column populated by the specific filter condition that triggered the fallback. This gives you an audit trail and a weekly review queue in one structure.
- Source system notification. If your data source supports it (most ATS and HRIS APIs do), send a webhook or status update back to the originating system flagging the record as “validation_failed.” This closes the loop without requiring manual follow-up from your HR team.
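Whichever destination you choose, the exception record should carry the same core payload. A Python sketch of one log row (column names are illustrative):

```python
from datetime import datetime, timezone

# Illustrative exception-log row: one row per failed record, with the
# specific failing condition captured as failure_reason.
def exception_row(record_id, field, received_value, reason):
    return {
        "record_id": record_id,
        "field": field,
        "received_value": repr(received_value),  # preserve the exact input, incl. blanks
        "failure_reason": reason,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

row = exception_row("cand-0042", "years_experience", "7 years",
                    "expected number 0-40, got text")
print(row["failure_reason"])  # -> expected number 0-40, got text
```

Storing the received value via `repr()` matters: it makes an empty string, a lone space, and a missing value distinguishable in the log.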
The discipline of building complete fallback paths is covered in depth in our guide on error handling in Make™ for resilient workflows. Read it alongside this guide — error handling and predictive filtering are two halves of the same architecture.
Step 5 — Enforce Compliance Fields as Hard-Stop Gates
Certain fields in HR workflows are not optional. GDPR consent flags, I-9 verification status, background check clearance — these require a hard stop, not a fallback route. A record that fails a compliance check should never proceed to downstream processing regardless of how complete the rest of the record is.
How to Build a Hard-Stop Gate
- Place the compliance field filter as the first filter in your chain — before any business-logic filters.
- Configure the pass condition as strictly as possible. For a boolean consent flag: consent_gdpr equals true. Any other value — false, blank, null, the string “yes” — fails the filter.
- On the fail path: log the rejection to your exception log with a timestamp and the exact field value received. Do not send the record anywhere else. Do not alert a recruiter to “review it.” A GDPR consent failure is a hard stop, not a judgment call.
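Strictness is the whole point of a hard-stop gate. The sketch below, in Python rather than Make™ filter syntax, shows why: an identity check on the boolean value rejects every stand-in that a loose truthiness test would accept:

```python
# Hard-stop gate: only the boolean True passes. Strings like "TRUE" or
# "yes" (common webhook/form artifacts) must fail.
def gdpr_gate(value):
    return value is True

for v in [True, "TRUE", "yes", 1, None, ""]:
    print(repr(v), "->", "pass" if gdpr_gate(v) else "hard stop")
# Only True passes; every other value, including 1 and "TRUE", is a hard stop.
```

A truthiness check (`if value:`) would wave through "TRUE", "yes", and 1; the strict equality in your Make™ filter condition must behave like the identity check here.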
This architecture is the foundation of our guide on GDPR-compliant data filtering with Make™ — consult it for the full compliance field checklist relevant to EU hiring workflows.
Step 6 — Test With Edge Cases, Not Ideal Inputs
Testing with clean, textbook-formatted records validates nothing. The records that break filter workflows are always the edge cases: a salary field with a dollar sign, a department code with a trailing space, a duplicate application submitted 11 minutes after the first, a consent field that came through as the string “TRUE” instead of a boolean.
Pre-Launch Test Protocol
- Open your scenario and switch Make™ to Run Once mode (not scheduled or webhook-live).
- Manually trigger or inject each of your 5 test records one at a time.
- For each record, confirm: Did it land in the correct branch? Did the fallback path execute for failed records? Did the exception log receive a row with the correct failure reason?
- Check your HRIS or ATS destination to confirm no test records were written to production. Use a sandbox environment or a test-tagged status field if your system supports it.
- Add two additional edge cases that surfaced during your field inventory but were not in your original test dataset. These are the ones that matter most.
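It helps to write the expected outcome next to each test record before you run anything. A Python sketch of such a dataset (values are hypothetical; the duplicate case needs cross-record state, so test it separately):

```python
# Illustrative edge-case test records paired with expected outcomes. Run
# each through the scenario in Run Once mode and compare against `expect`.
TEST_RECORDS = [
    ({"email": "a@b.co", "years_experience": 7,   "consent_gdpr": True},   "pass"),
    ({"email": "a@b.co",                          "consent_gdpr": True},   "fallback"),   # missing field
    ({"email": "a@b.co", "years_experience": "7", "consent_gdpr": True},   "fallback"),   # number as text
    ({"email": "a@b.co", "years_experience": 99,  "consent_gdpr": True},   "fallback"),   # out of range
    ({"email": "a@b.co", "years_experience": 7,   "consent_gdpr": "TRUE"}, "hard_stop"),  # bad consent
]

def expected_outcome(r):
    if r.get("consent_gdpr") is not True:
        return "hard_stop"
    exp = r.get("years_experience")
    if not isinstance(exp, int) or not 0 <= exp <= 40:
        return "fallback"
    return "pass"

assert all(expected_outcome(r) == want for r, want in TEST_RECORDS)
print("all edge cases behave as expected")
```

The value of the table is the `expect` column: a test record without a pre-committed expected branch verifies nothing.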
McKinsey Global Institute research consistently identifies data quality as a primary bottleneck in automation value realization — the discipline of edge-case testing before launch is where that quality is either protected or surrendered.
Step 7 — Activate, Monitor, and Tighten
Activate the scenario. For the first two weeks, review your exception log daily. You are looking for three things:
- High-volume fallbacks on a specific field — this indicates the source system is formatting that field differently than you expected. Adjust your filter condition, not your source data, unless you control the source system.
- Valid records landing in fallback — your filter is too aggressive. Identify the condition causing false rejections and widen it with OR logic or a format-normalization step upstream.
- Zero fallbacks after Week 1 — do not interpret this as success without verification. Check that records are actually flowing through. Zero fallbacks combined with zero downstream records written means your trigger is not firing, not that your data is clean.
After the first 30 days, shift to weekly exception log reviews. Track your manual-exception rate — the number of records per week requiring human intervention after the workflow runs. A production-grade predictive filter stack should drive that number down by at least 80% from your pre-automation baseline within 30 days. If it does not, audit the fallback log for the most common failure reason and address that single condition first.
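The 80% target is straightforward to track week over week. A minimal calculation sketch (the exception counts here are hypothetical):

```python
# Illustrative manual-exception rate tracking against the 80% reduction target.
def reduction_pct(baseline_per_week, current_per_week):
    """Percentage reduction from the pre-automation baseline."""
    return 100 * (baseline_per_week - current_per_week) / baseline_per_week

# e.g. 50 manual exceptions/week before automation, 8 after:
print(round(reduction_pct(50, 8), 1), "% reduction")  # -> 84.0 % reduction
```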
Asana’s Anatomy of Work research identifies context-switching from unexpected task interruptions — including manual data exception handling — as one of the primary drivers of lost productive time for knowledge workers. Eliminating exception-driven interruptions is not a marginal benefit; it is a structural recovery of strategic capacity for your HR team.
How to Know It Worked
Your predictive filter stack is performing correctly when all of the following are true:
- Your downstream systems (ATS, HRIS, payroll) contain zero records with missing required fields from automated ingestion.
- Every failed record that entered your workflow appears in your exception log with a legible failure reason — nothing is unaccounted for.
- Your manual-exception rate has declined measurably from your pre-automation baseline.
- HR team members report fewer interruptions from “where did this record go?” and “why does this entry look wrong?” inquiries.
- Make™’s scenario execution log shows consistent filter-path routing with no unhandled errors in the past 7 days.
Parseur’s Manual Data Entry Report documents the cost of manual HR data processing at $28,500 per employee per year in combined time and error-correction costs. A predictive filter stack that eliminates the majority of that error-correction loop recovers a measurable fraction of that figure per HR team member. Track it.
Common Mistakes and How to Avoid Them
Mistake 1: Building the Filter After the Workflow
Adding filters as an afterthought forces you to retrofit logic onto a structure not designed for it. Always design filter conditions before building action modules. The field specification document in Step 1 is the mechanism that enforces this discipline.
Mistake 2: No Fallback Path
Every filter that rejects a record without a defined fallback creates a silent data loss risk. Build the fallback path immediately after building each filter — never leave a path without a terminal action.
Mistake 3: Testing Only With Happy-Path Data
Happy-path testing confirms the workflow runs. Edge-case testing confirms it is reliable. The 20 minutes spent injecting malformed records before go-live prevents multi-hour incident responses post-launch.
Mistake 4: Conflating Predictive Filtering With AI Screening
Predictive filtering enforces deterministic rules — field presence, data types, valid values, format patterns. AI screening evaluates qualitative fit. These are complementary, not interchangeable. Deploy filtering first; AI adds no value when it operates on dirty or incomplete records. For context on where AI adds genuine leverage in hiring pipelines, see our analysis of how AI changes talent acquisition strategies.
Mistake 5: Ignoring Source System Variability
The same field arrives differently formatted from different source systems. An applicant’s years of experience may come through as “7,” “7 years,” “7+,” or “seven” depending on the form or ATS version. Build OR conditions or upstream text-normalization steps to handle known variants before they hit your core filter stack. Our guide on eliminating manual HR data entry with Make™ covers source-system normalization patterns in detail.
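A normalization step for those years-of-experience variants might look like the following Python sketch (the variant list is illustrative, not exhaustive; inside Make™ the equivalent is typically built with the platform's text functions before the core filter):

```python
import re

# Illustrative normalization of known source-system variants for a
# years-of-experience field.
WORD_NUMBERS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
                "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}

def normalize_years(raw):
    """Normalize '7', '7 years', '7+', 'seven' to 7; return None if unparseable."""
    s = str(raw).strip().lower()
    if s in WORD_NUMBERS:
        return WORD_NUMBERS[s]
    m = re.match(r"(\d+)", s)  # leading digits cover '7', '7 years', '7+'
    return int(m.group(1)) if m else None

for raw in ["7", "7 years", "7+", "seven", "unknown"]:
    print(repr(raw), "->", normalize_years(raw))
# the first four all normalize to 7; 'unknown' -> None
```

Records that normalize to `None` are exactly the ones your fallback path should catch; the normalizer shrinks the exception queue, it does not replace it.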
Next Steps
Predictive filtering is one layer of a complete HR data quality architecture. Once your filter stack is live and your exception rate is declining, the next build priorities are:
- Data mapping to ATS custom fields — filtering ensures the right records get through; mapping ensures they land in the right fields. See our guide on mapping resume data to ATS custom fields using Make™.
- Precision filtering for hiring workflows — expand your filter logic to cover advanced qualification matching. Our listicle on Make™ filtering for precision hiring provides 10 additional filter patterns organized by hiring stage.
Both build on the foundation you have established here. Data that is validated at entry, routed correctly, and mapped accurately is the infrastructure that makes every subsequent HR automation reliable — and every HR metric you report trustworthy.