Post: How to Automate Pre-Screening for High-Volume Hiring: A Case Study

Published On: January 11, 2026

A retail HR team processing 800 monthly applications for 40 store-level and corporate roles automated pre-screening, replacing a three-recruiter manual process that consumed 120 hours per month with a Make.com™ workflow that handles each application in under 8 minutes, leaving one recruiter reviewing shortlists for 22 hours per month: an 82% labor reduction at 96% screening decision accuracy. Here is the complete case study with build details, results, and lessons.

What Was the Pre-Automation State?

Before automation, three recruiters spent 40 hours each per month on application review: opening each resume, manually checking it against role requirements, entering notes into Greenhouse ATS, and sending status notifications. With 800 monthly applications across 40 roles, each recruiter handled approximately 267 applications, an unsustainable workload that caused a 4-day average screening lag and contributed to a 31% application abandonment rate (candidates who applied and then withdrew before hearing back). Two roles per month went unfilled due to recruiter capacity constraints.

What Did the Automated System Include?

The Make.com™ system had four integrated components. Component 1: AI screening scenario (Greenhouse webhook → Affinda parser → scoring rubric → ATS write → candidate acknowledgment), processing each application within 8 minutes of submission. Component 2: role-specific rubric library — a Google Sheet containing scoring weights for all 40 role types, enabling the scenario to apply the correct rubric based on job ID. Component 3: exception routing — any application with parse confidence below 75% routes to a manual review Slack channel with the application URL. Component 4: weekly QA export — a scheduled scenario exporting all screening decisions for the prior week to a Google Sheet for recruiter review. See the Make.com HR Workflow guide for the role-specific rubric library architecture.
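The production build is a no-code Make.com™ scenario, so there is no source code to show, but the routing logic is easy to sketch. The Python below is an illustrative analogue only: the rubric dimensions, field names, and shortlist threshold are assumptions; the 75% parse-confidence cutoff is the one from Component 3.

```python
# Illustrative Python analogue of the screening scenario's decision logic.
# The real build is a no-code Make.com scenario; everything here except the
# 75% parse-confidence cutoff is an assumption made for the sketch.

PARSE_CONFIDENCE_THRESHOLD = 0.75  # below this, route to the manual review channel

def screen(parsed: dict, rubric: dict) -> dict:
    """Decide what happens to one parsed application."""
    # Component 3: exception routing on low parse confidence
    if parsed["confidence"] < PARSE_CONFIDENCE_THRESHOLD:
        return {"status": "manual_review", "reason": "low_parse_confidence"}

    # Component 2: weighted score across the role-specific rubric's dimensions
    score = sum(
        weight * parsed["dimension_ratings"].get(dimension, 0.0)
        for dimension, weight in rubric["weights"].items()
    )
    decision = "shortlist" if score >= rubric["shortlist_threshold"] else "screen_out"
    return {"status": decision, "score": round(score, 2)}

if __name__ == "__main__":
    # Hypothetical parsed application and rubric row for one store-level role
    parsed = {
        "confidence": 0.91,
        "dimension_ratings": {"experience": 0.8, "skills": 0.7, "availability": 1.0,
                              "education": 0.5, "tenure": 0.6},
    }
    rubric = {
        "weights": {"experience": 0.3, "skills": 0.3, "availability": 0.2,
                    "education": 0.1, "tenure": 0.1},
        "shortlist_threshold": 0.65,
    }
    print(screen(parsed, rubric))  # {'status': 'shortlist', 'score': 0.76}
```

Because every rubric shares the same structure, the same decision logic serves all 40 role types; only the weights and threshold row changes per job ID.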

What Were the Implementation Challenges?

Three challenges required additional work. Challenge 1: role diversity — 40 different role types required 40 rubric configurations. Solution: standardized the rubric structure across all roles (same five dimensions, different weights and required skills lists per role) and built the rubric library in Google Sheets with a VLOOKUP feeding the Make.com™ scenario. Total rubric build time: 14 hours. Challenge 2: high-volume parse exceptions — store-level roles attracted a high proportion of PDF scan applications (22% of volume) that produced low parse confidence. Solution: added an OCR pre-processing step using AWS Textract before the Affinda parser call, reducing exception rate from 22% to 6%. Challenge 3: seasonal volume spikes — October/November holiday hiring pushed volume to 1,400 applications per month. Make.com™ handled the volume without modification; the bottleneck shifted to recruiter shortlist review time, not the automation.
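To make the standardized-structure point from Challenge 1 concrete, here is a minimal sketch of what two rows of the rubric library might look like, expressed as Python data rather than a Google Sheet. The dimension names, weights, job IDs, and skills are invented for illustration; the real library is a Sheet read by the Make.com™ scenario via a VLOOKUP on job ID.

```python
# Minimal sketch of the role-specific rubric library, assuming a standardized
# row shape: the same five dimensions for every role, with different weights
# and required-skills lists per role. All values below are hypothetical.

RUBRIC_LIBRARY = {
    "store_associate_001": {
        "weights": {"experience": 0.25, "skills": 0.25, "availability": 0.25,
                    "education": 0.10, "tenure": 0.15},
        "required_skills": ["POS systems", "customer service"],
        "shortlist_threshold": 0.60,
    },
    "corporate_analyst_014": {
        "weights": {"experience": 0.30, "skills": 0.35, "availability": 0.05,
                    "education": 0.20, "tenure": 0.10},
        "required_skills": ["Excel", "SQL"],
        "shortlist_threshold": 0.70,
    },
}

def lookup_rubric(job_id: str) -> dict:
    """Analogue of the VLOOKUP: fetch the rubric row for a job, or fail loudly."""
    try:
        return RUBRIC_LIBRARY[job_id]
    except KeyError:
        raise KeyError(f"No rubric configured for job_id={job_id!r}")

print(lookup_rubric("store_associate_001")["required_skills"])
```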

What Were the Results After 6 Months?

After six months: recruiter screening time dropped from 120 hours per month to 22 hours (an 82% reduction). Application acknowledgment time improved from an average of 4 days to under 8 minutes. Application abandonment rate dropped from 31% to 14%, which the team attributed entirely to same-day application acknowledgment. Screening decision accuracy (measured by the weekly QA export as agreement between automated decisions and recruiter manual review) was 96% on shortlist decisions and 94% on screen-out decisions. The capacity constraint that had left two roles unfilled each month was eliminated in month 2. Time-to-fill improved from an average of 28 days to 16 days.

What Was the Financial Impact Over 14 Months?

Build cost: 38 hours × $150/hour = $5,700 one-time. Operating cost: $420/month (Affinda API + AWS Textract + Make.com™ marginal), or $5,040 per year. Labor saving: 98 recruiter-hours/month × $44/hour (hourly rate for the recruiter role) × 12 months = $51,744. Vacancy cost reduction (two roles filled per month that were previously unfilled): 2 roles × 16-day reduction in vacancy × $285/day vacancy cost × 12 months = $109,440. Total year-one value: $161,184. Year-one ROI: approximately 1,400% ($161,184 in value against $10,740 in build plus operating costs). 14-month cumulative value: $208,000+ on a $10,740 total investment (build + operating).
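For readers who want to check the arithmetic, here is the same calculation spelled out. All inputs are the figures quoted above; the script just restates them.

```python
# Worked version of the year-one figures above (inputs from the case study).
hours_saved_per_month = 120 - 22                         # recruiter screening hours
labor_saving = hours_saved_per_month * 44 * 12           # $51,744
vacancy_saving = 2 * 16 * 285 * 12                       # $109,440
year_one_value = labor_saving + vacancy_saving           # $161,184

build_cost = 38 * 150                                    # $5,700 one-time
operating_cost = 420 * 12                                # $5,040 per year
year_one_investment = build_cost + operating_cost        # $10,740

roi = (year_one_value - year_one_investment) / year_one_investment
print(f"${year_one_value:,} value on ${year_one_investment:,} -> {roi:.0%} ROI")
# -> $161,184 value on $10,740 -> 1401% ROI
```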

Expert Take — Jeff Arnold, 4Spot Consulting™

The most interesting result in this case study is the vacancy cost reduction — $109,440 from filling two previously unfilled roles per month — which the team had not modeled in their original ROI projection. They built the automation to save recruiter time. They discovered that recruiter capacity was the binding constraint preventing two roles from being filled each month, and the automation removed that constraint. The indirect ROI exceeded the direct labor savings. Always model what you can do with freed capacity, not just what you save on current activities.

Key Takeaways

  • 82% recruiter labor reduction (120 hours to 22 hours monthly) on 800 applications across 40 roles.
  • Role-specific rubric library in Google Sheets enables one Make.com™ scenario to handle 40 different role types.
  • OCR pre-processing reduced high-volume scan PDF exception rate from 22% to 6%.
  • 96% screening decision accuracy on shortlist decisions validated against weekly recruiter QA review.
  • Indirect ROI (vacancy cost reduction from freed capacity) exceeded direct labor savings — always model capacity reuse.
  • 14-month cumulative value: $208,000+ on a $10,740 total investment (build + operating).

Frequently Asked Questions

How do you manage rubric quality across 40 different role types without losing consistency?

Standardize the rubric structure — same five dimensions, same weight categories — across all roles. Only the specific required skills and dimension weights vary per role. This structure means rubric quality reviews check 40 weight configurations rather than 40 completely different scoring systems. Monthly QA examines decision accuracy by role type to identify which rubrics need recalibration.
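As a concrete illustration of that monthly QA step, the sketch below groups QA-export rows by role type and flags low-agreement rubrics. The row shape, field names, and the 90% recalibration cutoff are assumptions, not the team's actual export schema.

```python
# Minimal sketch of the monthly QA check: agreement between automated
# decisions and recruiter review, grouped by role type. Sample rows are made up.
from collections import defaultdict

qa_rows = [
    {"role": "store_associate", "auto": "shortlist", "recruiter": "shortlist"},
    {"role": "store_associate", "auto": "screen_out", "recruiter": "shortlist"},
    {"role": "corporate_analyst", "auto": "shortlist", "recruiter": "shortlist"},
]

totals, matches = defaultdict(int), defaultdict(int)
for row in qa_rows:
    totals[row["role"]] += 1
    matches[row["role"]] += row["auto"] == row["recruiter"]

for role in totals:
    accuracy = matches[role] / totals[role]
    flag = "  <- recalibrate rubric" if accuracy < 0.90 else ""
    print(f"{role}: {accuracy:.0%} agreement{flag}")
```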

What was the biggest risk in the implementation that almost derailed the project?

The OCR pre-processing challenge in month 1. Without it, 22% of applications were routing to manual review — eliminating most of the automation’s value for store-level roles. The OCR addition required AWS account setup, API integration into the Make.com™ scenario, and 12 hours of additional build time. This was not in the original project scope and added two weeks to deployment. The lesson: audit your applicant pool’s file type distribution before scoping the build.
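If you want to run that file-type audit before scoping, one rough approach is to check how many sample resumes have no extractable text layer, which usually means a scanned PDF that would need OCR. The sketch below uses the pypdf library and a hypothetical sample folder; it is an approximation, not the team's actual method.

```python
# Quick audit of a sample resume folder: what fraction of PDFs have little or
# no extractable text (i.e., are likely scans that would need OCR before parsing)?
from pathlib import Path
from pypdf import PdfReader

sample_dir = Path("sample_resumes")  # hypothetical: a representative export from the ATS
scanned = total = 0

for pdf_path in sample_dir.glob("*.pdf"):
    total += 1
    reader = PdfReader(pdf_path)
    text = "".join(page.extract_text() or "" for page in reader.pages)
    if len(text.strip()) < 100:      # little or no text layer -> likely a scan
        scanned += 1

if total:
    print(f"{scanned}/{total} sample PDFs ({scanned / total:.0%}) likely need OCR")
```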

How do you handle candidates who reapply after being screened out?

The Make.com™ scenario checks the ATS for prior applications by email address before scoring. Candidates reapplying within 90 days are routed to manual review with their prior screening score displayed as context — not automatically re-screened with the same result. Candidates reapplying after 90 days are screened fresh. This policy prevents the automation from systematically excluding candidates who gained skills since their last application.
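Here is a minimal sketch of that routing rule, with the ATS lookup stubbed as an in-memory dictionary. The 90-day window mirrors the policy described above; the field names, helper, and sample data are assumptions.

```python
# Sketch of the reapplication policy: within 90 days -> manual review with the
# prior score as context; otherwise -> fresh automated screening.
from datetime import date, timedelta

REAPPLY_WINDOW = timedelta(days=90)

# Hypothetical prior-application index keyed by candidate email (stands in for the ATS query)
prior_applications = {
    "sam@example.com": {"screened_on": date(2025, 11, 20), "score": 0.58},
}

def route_reapplication(email: str, today: date) -> dict:
    prior = prior_applications.get(email)
    if prior and today - prior["screened_on"] <= REAPPLY_WINDOW:
        # Within 90 days: manual review, with the prior score shown as context
        return {"route": "manual_review", "prior_score": prior["score"]}
    # No prior application, or the prior one is older than 90 days: screen fresh
    return {"route": "automated_screening"}

print(route_reapplication("sam@example.com", date(2026, 1, 11)))
# {'route': 'manual_review', 'prior_score': 0.58}
```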