60% Faster Shortlisting with Automated Applicant Pre-Screening: How Sarah Qualified Candidates with Make Filters
Application volume doesn’t respect recruiter bandwidth. For Sarah, an HR Director at a regional healthcare system, the math was brutal: 12 hours every week spent manually triaging résumés, cross-checking experience claims, and routing candidates into the right ATS pipeline — before a single interview was scheduled. The screening bottleneck was the hiring process. This case study documents how a multi-condition Make™ filter workflow replaced that manual triage, cut time-to-shortlist by 60%, and reclaimed six hours of recruiter capacity every week without adding headcount or software licenses.
This satellite drills into the pre-screening implementation layer of a broader Make™ filtering and mapping strategy for HR automation. If you’re building a qualification pipeline, start here. Then return to the parent pillar for the full data integrity architecture that surrounds it.
Snapshot: Sarah’s Pre-Screening Challenge
| Dimension | Detail |
|---|---|
| Organization | Regional healthcare system, mid-market |
| Role | HR Director (Sarah) |
| Pre-automation workload | 12 hours/week on manual résumé triage, experience verification, and candidate routing |
| Core constraint | No budget for additional recruiters; existing ATS filters too blunt for clinical role requirements |
| Approach | Make™ scenario with structured intake form, three nested filter gates, dual-branch routing, and ATS write-back |
| Outcome | 60% reduction in time-to-shortlist; 6 hours/week reclaimed; automated rejections delivered within minutes |
Context and Baseline: Where the Bottleneck Lived
Sarah’s team handled hiring across multiple clinical departments — nursing, administrative, and allied health — each with distinct qualification requirements. The existing ATS offered basic keyword filtering, but clinical roles demand structured criteria: licensure status, years of direct patient care, shift availability, and geographic proximity to specific campuses. Keyword matching couldn’t enforce any of those criteria with precision.
The result was a two-stage manual process. First, every application was opened and skimmed for the baseline disqualifiers. Second, survivors were manually moved into department-specific pipelines. According to Asana’s Anatomy of Work research, knowledge workers spend roughly 60% of their time on coordination and process work rather than skilled tasks — Sarah’s screening workload was a textbook example of exactly that misallocation.
The cost of an unfilled clinical position extended well beyond recruiter frustration. SHRM data places average cost-per-hire in healthcare significantly above the cross-industry average, and every week a position sat open waiting for a qualified shortlist compounded that cost. Harvard Business Review research on recruitment effectiveness confirms that speed-to-qualified-candidate is one of the highest-leverage variables in reducing total hiring cost.
Sarah’s stated goal was not to eliminate recruiter involvement — it was to eliminate recruiter involvement in the deterministic part of screening. If a candidate objectively did not meet the minimum license requirement, no human judgment was needed. The filter should handle it. That framing — humans for judgment, automation for rules — became the design principle for the entire build.
Approach: Mapping the Criteria Before Touching the Platform
The build started with a qualification matrix, not a Make™ canvas. Sarah and the department leads documented every role’s non-negotiable disqualifiers — the criteria where a “no” answer meant an immediate, objective rejection regardless of anything else in the application. These were separated from “preferred” criteria that would weight candidates but not eliminate them.
For a representative clinical role, the non-negotiable gate looked like this:
- Active state licensure: Boolean — yes or no, no exceptions
- Minimum years of direct patient care: Numeric threshold, role-specific (e.g., ≥ 3 years for senior positions)
- Shift availability: Must include at least one of the required shift patterns
- Commute proximity: Must reside within the specified radius of a campus OR confirm willingness to relocate
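Because every criterion is objective, the matrix translates almost mechanically into a predicate. As an illustration only — the role key, field names, and thresholds below are hypothetical, not taken from Sarah's actual build — the non-negotiable gate for a representative clinical role could be sketched like this:

```python
# Illustrative qualification matrix for one role. All names and
# thresholds are invented for this sketch.
ROLE_CRITERIA = {
    "senior_clinical": {
        "min_years_patient_care": 3,
        "required_shifts": {"night", "weekend"},
        "campus_radius_miles": 40,
    },
}

def passes_hard_gates(applicant: dict, role: str) -> bool:
    """Return True only when every non-negotiable criterion is met."""
    c = ROLE_CRITERIA[role]
    return (
        applicant["active_license"] is True                      # yes/no, no exceptions
        and applicant["years_patient_care"] >= c["min_years_patient_care"]
        and bool(set(applicant["shifts"]) & c["required_shifts"])  # at least one required shift
        and (
            applicant["distance_miles"] <= c["campus_radius_miles"]
            or applicant["willing_to_relocate"]                  # radius OR relocation
        )
    )
```

The point of the sketch is the one-to-one mapping: each bullet in the matrix is exactly one clause, which is why the criteria document could serve directly as the filter specification.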
This matrix became the direct specification for the Make™ filter conditions. No interpretation required — the criteria document mapped one-to-one to filter logic. That pre-work is what makes a build fast. Without it, you iterate in the scenario builder, which is the most expensive possible place to discover ambiguity in your own requirements.
The intake form was redesigned simultaneously. Free-text fields for experience and licensure were replaced with structured dropdowns and numeric inputs wherever possible, minimizing the data normalization burden before filters could run. This is the same principle covered in the essential Make™ filters for recruitment data guide — clean inputs produce clean filter results.
Implementation: Three Filter Gates and Dual-Branch Routing
The Make™ scenario architecture followed a linear gate structure with a parallel rejection branch. Each gate was a hard stop: fail any gate, exit to the rejection branch. Pass all three, proceed to ATS pipeline creation.
Gate 1 — Licensure Verification
The first filter evaluated the licensure field. The condition was a simple boolean: the applicant’s response to “Do you hold an active [state] license?” must equal “Yes.” Any other value — “No,” “Pending,” “Applied For,” blank — routed immediately to the rejection branch. This gate eliminated the largest single disqualifier category without any complex logic.
Gate 2 — Experience Threshold with Normalization
The second gate evaluated years of direct patient care against the role’s minimum. Because even a redesigned form can receive edge-case inputs, a regex normalization step ran before the filter, extracting the leading integer from the experience field value. The filter then applied a numeric “greater than or equal to” comparison against the role’s threshold.
This normalization step — parsing structured values out of semi-structured inputs — is exactly the pattern described in the regex-based HR data cleaning in Make™ guide. It is the unglamorous but essential step that most failed pre-screening builds skip. Without it, numeric comparisons fail silently on any non-standard input, and candidates get incorrectly routed in both directions.
Gate 3 — Availability and Location AND/OR Logic
The third gate was the most complex: shift availability OR confirmed relocation willingness, AND location within the approved radius OR remote-eligible confirmation. Make™’s filter interface handles nested AND/OR conditions through grouped condition blocks — conditions within a group combine with AND, and separate groups combine with OR — so a requirement shaped like “(A OR B) AND (C OR D)” is entered as its equivalent set of OR-connected AND groups. The logic that sounds complicated in English maps cleanly to Make™’s condition builder once the criteria are specified precisely.
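However a particular condition builder groups its rules, the gate reduces to a four-variable boolean expression, and the "(A OR B) AND (C OR D)" form is logically equivalent to its expansion into OR-of-AND groups. A quick sketch demonstrating that equivalence (variable names are illustrative):

```python
from itertools import product

def gate3_direct(shift_ok, relocate, in_radius, remote_ok) -> bool:
    """The requirement exactly as it reads in English."""
    return (shift_ok or relocate) and (in_radius or remote_ok)

def gate3_groups(shift_ok, relocate, in_radius, remote_ok) -> bool:
    """The same predicate expanded into OR-of-AND groups — the shape a
    grouped condition builder expresses."""
    return (
        (shift_ok and in_radius)
        or (shift_ok and remote_ok)
        or (relocate and in_radius)
        or (relocate and remote_ok)
    )

# The two forms agree on all 16 input combinations.
assert all(
    gate3_direct(*bits) == gate3_groups(*bits)
    for bits in product([True, False], repeat=4)
)
```

Exhaustively checking the truth table like this is a cheap way to validate a condition-group layout before trusting it with live applicant data.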
Qualified Branch: ATS Write-Back and Assessment Trigger
Applicants passing all three gates triggered an ATS module that created a candidate record in the appropriate department pipeline, populated custom fields with the structured intake data, and sent an automated email inviting the candidate to complete a role-specific assessment. No recruiter action required until the assessment was returned.
The ATS write-back step connects directly to the mapping résumé data to ATS custom fields workflow — the filter determines who enters the ATS, and the mapping logic determines how their data is structured once they arrive.
Disqualified Branch: Automated Rejection with Role-Specific Messaging
The rejection branch used a router with conditions matching which gate failed, sending a role-appropriate rejection email that acknowledged the specific role applied for without disclosing the disqualifying criterion. Rejections landed in applicant inboxes within minutes of submission — a dramatic improvement over the previous days-to-weeks response time.
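The routing decision can be sketched as a lookup keyed on the failed gate, with the message body built from the role alone. Everything below is invented for illustration — template identifiers, message copy, and the dict shape are assumptions, not the actual build, which presumably used the email module's own templates:

```python
# Hypothetical template registry keyed by failed gate. The gate picks
# the internal template; the candidate-facing copy never names the
# disqualifying criterion.
TEMPLATES = {
    "licensure": "rejection_neutral_a",
    "experience": "rejection_neutral_b",
    "availability_location": "rejection_neutral_c",
}

def build_rejection(role_title: str, failed_gate: str) -> dict:
    """Assemble a role-aware rejection: acknowledges the specific role
    applied for, discloses nothing about why the applicant was screened
    out."""
    return {
        "template": TEMPLATES[failed_gate],
        "subject": f"Your application for {role_title}",
        "body": (
            f"Thank you for applying to the {role_title} position. "
            "After reviewing your application, we will not be moving "
            "forward at this time."
        ),
    }
```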
Results: What Changed and What Was Measured
Within the first full hiring cycle after deployment, Sarah’s team tracked four outcome metrics against the pre-automation baseline:
| Metric | Before | After | Change |
|---|---|---|---|
| Time-to-shortlist | ~5 business days | ~2 business days | −60% |
| Weekly recruiter hours on triage | 12 hrs/wk | 6 hrs/wk | 6 hrs reclaimed |
| Average rejection response time | 4–14 days | < 10 minutes | Near-instant |
| ATS data entry errors on qualified candidates | Frequent (manual copy-paste) | Zero (automated write-back) | Eliminated |
The reclaimed six hours per week translated directly into recruiter capacity for candidate engagement — phone screens, relationship building, and offer negotiation — the high-judgment work that actually requires a human. McKinsey Global Institute research on workforce automation consistently identifies this pattern: automating deterministic tasks doesn’t eliminate the role, it restructures it toward higher-value activities.
Parseur’s Manual Data Entry Report benchmarks manual data processing costs at roughly $28,500 per employee per year when factoring in time, error correction, and opportunity cost. Sarah’s team wasn’t at that full figure, but the directional math — hours spent on rules-based screening are hours not spent on relationship-based recruiting — held exactly as the research predicts.
Deloitte’s Human Capital Trends research identifies candidate experience as an increasingly significant employer brand variable. The near-instant rejection response — a byproduct of the disqualified branch, not its primary purpose — delivered a measurable improvement in how the organization was perceived by applicants who didn’t make the shortlist. Gartner’s talent acquisition research confirms that rejected candidates who receive timely, respectful communication are substantially more likely to reapply in the future and to refer others.
Lessons Learned: What We Would Do Differently
1. Build the Normalization Step on Day One
The regex normalization for the experience field was added after the initial build when filter failures surfaced on edge-case inputs. It should have been the first module after the Watch trigger, not a retrofit. Any field that accepts free-text input — even a “structured” numeric field — needs normalization before it touches a filter condition.
2. Test with Real Edge-Case Data, Not Ideal Data
Initial testing used clean, perfectly formatted submissions. The normalization problem wasn’t discovered until real applicant data arrived. Future builds will include a test set drawn from actual historical submissions — messy, inconsistent, and representative — before go-live.
3. Version-Control the Criteria Document
Hiring managers updated role requirements mid-cycle without notifying the operations team. The filter logic became misaligned with actual criteria for two weeks before the discrepancy was caught. A simple process — criteria document is the source of truth, filter changes require a documented criteria update first — eliminates this class of error entirely.
4. Map the Duplicate Risk Before Launch
Several applicants submitted multiple applications for the same role (different form instances, different campaign links). Without a duplicate check upstream, they were processed multiple times and created duplicate ATS records. The filtering candidate duplicates in Make™ workflow should be added to every pre-screening build as a standard first gate, before any qualification logic runs.
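A duplicate gate reduces to a normalized identity key checked against previously seen submissions. A sketch, assuming email-plus-role identifies a unique application — the right key in practice depends on what the form actually collects:

```python
def dedupe_key(app: dict) -> tuple:
    """Normalize the fields that identify one unique application.
    Lower-cased, trimmed email plus the role ID catches resubmissions
    arriving through different form instances or campaign links."""
    return (app["email"].strip().lower(), app["role_id"])

def is_duplicate(app: dict, seen: set) -> bool:
    """Return True when this application was already processed;
    otherwise record it and let it through to the qualification
    gates."""
    key = dedupe_key(app)
    if key in seen:
        return True
    seen.add(key)
    return False
```

In a Make™ scenario the `seen` set would live in a data store or the ATS itself rather than in memory, but the gate's logic is the same: normalize, look up, and only pass first-seen records downstream.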
What This Means for Your Pre-Screening Build
Sarah’s implementation is reproducible. The pattern — structured intake, normalization step, nested filter gates, dual-branch routing, ATS write-back — applies to any role with deterministic qualification criteria. The specific conditions change; the architecture doesn’t.
The prerequisite is clarity on criteria, not technical sophistication. If you can produce a table of “this criterion is a hard disqualifier” versus “this criterion is a preference,” you have everything you need to specify the filter logic. The build follows directly from that document.
From there, the pre-screening workflow connects naturally to downstream automation: qualified candidates route into structured assessment triggers, assessments feed scoring data back into ATS custom fields, and scoring thresholds trigger the next routing decision — automated interview scheduling with conditional logic. Each layer handles one deterministic decision. Recruiters handle the judgment calls that remain after the rules have run.
For the full data integrity architecture that connects pre-screening, field mapping, duplicate prevention, and downstream analytics, return to the parent pillar: Make™ filtering and mapping strategy for HR automation. The pre-screening filter is one gate in a pipeline that runs end-to-end — and that pipeline is where the compounding value lives.
If you want to see where your current screening workflow is losing recruiter hours, an OpsMap™ session maps every decision point and identifies which ones are candidates for deterministic automation. The pattern Sarah implemented isn’t exceptional — it’s what a well-scoped automation build looks like when the criteria work gets done first.