How to Automatically Filter Out Duplicate Candidate Resumes in Make.com Before ATS Sync

Duplicate candidate resumes can plague your Applicant Tracking System (ATS), leading to wasted recruiter time, inaccurate data, and a disjointed candidate experience. Manually sifting through thousands of applications is inefficient and error-prone. This guide provides a step-by-step approach to leveraging Make.com (formerly Integromat) to create an automated workflow that identifies and filters out duplicate resumes before they ever reach your ATS, keeping your data clean and your processes streamlined.

Step 1: Map Your Current Resume Ingestion Workflow

Before building any automation, it’s crucial to thoroughly understand how candidate resumes currently enter your system. Identify all potential entry points: website forms, email attachments, third-party job boards, or direct uploads. For each entry point, determine the initial trigger event that signals a new resume arrival. This foundational understanding will dictate the starting point of your Make.com scenario, whether it’s a “Watch Email” module, a “Webhook” for form submissions, or a “Watch Files” for cloud storage. A clear map ensures you capture all potential resume sources, preventing duplicates from slipping through unseen channels.

Step 2: Establish a Unique Identifier for Candidates

The core of duplicate detection relies on identifying a consistent, unique piece of information for each candidate. The most common and reliable unique identifier is the candidate’s email address. Other options could include a combination of first name, last name, and phone number, but email typically offers the highest level of uniqueness. Ensure that whatever identifier you choose can be reliably extracted from the resume or accompanying application data. This identifier will be the key field used to check against your existing records, acting as the primary flag for potential duplicates.
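Whatever identifier you choose, normalize it before comparing: "Jane.Doe@Example.com " and "jane.doe@example.com" are the same candidate but different strings. In Make.com you would typically wrap the mapped field in its built-in `lower()` and `trim()` functions; this Python sketch (the function name is ours, not Make's) shows the equivalent logic:

```python
def normalize_identifier(email: str) -> str:
    """Collapse trivial variations (letter case, stray whitespace) so the
    same address always produces the same duplicate-check key."""
    return email.strip().lower()

# Example: both forms yield one key
key = normalize_identifier("  Jane.Doe@Example.COM ")
```

Applying the same normalization at both write time (Step 6) and lookup time (Step 4) is what makes the comparison reliable.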

Step 3: Implement a Data Store or Lookup Table in Make.com

To perform a duplicate check, you need a repository of unique identifiers for all resumes already processed or synced. Make.com’s built-in “Data Store” module is an excellent solution for this, offering a simple key-value database. Alternatively, a dedicated Google Sheet or a database like Airtable can serve as your lookup table. Each time a new, non-duplicate resume is processed, its unique identifier (e.g., email address) will be added to this store. This central repository becomes your single source of truth for previously seen candidates, enabling efficient lookups in subsequent steps.
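Conceptually, the Data Store is just a table keyed by your unique identifier. A minimal local sketch, using SQLite as a stand-in for Make.com's Data Store (the table and column names are illustrative, not anything Make prescribes):

```python
import sqlite3

# In-memory SQLite table standing in for Make.com's Data Store: one record
# per previously processed candidate, keyed by the Step 2 identifier.
# Swap ":memory:" for a file path if you want the store to persist.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE IF NOT EXISTS seen_candidates (
           email        TEXT PRIMARY KEY,   -- the unique identifier
           processed_at TEXT NOT NULL       -- ISO timestamp of first sync
       )"""
)
conn.commit()
```

The `PRIMARY KEY` constraint mirrors the role the identifier plays in Make.com: one record per candidate, so a second insert for the same email would be rejected rather than silently duplicated.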

Step 4: Configure the Duplicate Check Module

With your data store established, the next step involves querying it for incoming candidate identifiers. In your Make.com scenario, immediately after the initial trigger, add a module that interacts with your chosen data store (e.g., “Get a record” from a Data Store, or “Search Rows” in Google Sheets). Configure this module to search for the unique identifier extracted from the new resume. The output of this module will indicate whether a matching record (i.e., a duplicate) already exists in your store. This is the pivotal moment where the automation identifies a potential redundancy.

Step 5: Apply Conditional Routing for Duplicate Management

Following the duplicate check, you’ll need to direct the workflow based on the outcome. Utilize Make.com’s “Router” module, coupled with “Filters,” to create two distinct paths. One path is for unique, new candidates, and the other for identified duplicates. The filter for the “new candidate” path will be configured to proceed only if no matching record was found in your data store. Conversely, the “duplicate” path will activate if a match was found. This conditional routing ensures that only genuinely new resumes proceed to your ATS, while duplicates are handled separately.

Step 6: Integrate New Candidates with Your ATS and Update Store

For the “new candidate” path, configure the necessary modules to push the resume data into your ATS. This will typically involve a “Create a Record” or “Add a Candidate” module for your specific ATS (e.g., Greenhouse, Workable, BambooHR). Immediately after successfully syncing the new candidate to your ATS, add a module to update your data store. This module will add the unique identifier of the newly processed candidate to your store (e.g., “Add a record” to a Data Store or “Add a row” to a Google Sheet). This crucial step prevents the same resume from being flagged as new in future runs.
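The ordering here matters: record the identifier only after the ATS accepts the candidate, so a failed sync can simply be retried on the next run instead of being wrongly marked as seen. A sketch under that assumption, with `StubATS` standing in for your real ATS module (both names are hypothetical, not any vendor's API):

```python
from datetime import datetime, timezone

class StubATS:
    """Stand-in for an ATS connection (e.g., a Greenhouse or Workable module)."""
    def __init__(self):
        self.candidates = []

    def create_candidate(self, candidate: dict) -> None:
        self.candidates.append(candidate)   # the "Create a Record" step

def sync_new_candidate(candidate: dict, store: dict, ats: StubATS) -> None:
    # 1. Push to the ATS first; if this raises, the store stays untouched
    #    and the candidate will be retried as "new" on the next run.
    ats.create_candidate(candidate)
    # 2. Only then record the identifier (the "Add a record" step).
    key = candidate["email"].strip().lower()
    store[key] = datetime.now(timezone.utc).isoformat()
```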

Step 7: Implement Notifications and Logging for Duplicates

For the “duplicate” path, instead of pushing to the ATS, consider implementing a notification system or logging mechanism. This could involve sending an internal email notification to a recruiting operations team, logging the duplicate event in a separate Google Sheet, or simply moving the duplicate file to an archive folder. This provides transparency and allows for manual review if necessary, without cluttering the ATS. Having a clear audit trail for duplicates ensures that no candidate is inadvertently lost while keeping your primary ATS clean and optimized.
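If you log duplicates to a spreadsheet, a timestamped row per event is usually enough for an audit trail. This CSV sketch is the code analogue of routing duplicates to an "Add a row" module instead of the ATS (the function name and columns are our own choices):

```python
import csv
from datetime import datetime, timezone

def log_duplicate(email: str, source: str, log_file) -> None:
    """Append one row per duplicate event to a CSV review log:
    timestamp, normalized identifier, and where the resume came from."""
    csv.writer(log_file).writerow(
        [datetime.now(timezone.utc).isoformat(), email.strip().lower(), source]
    )
```

A row like `2025-08-13T09:12:44+00:00, jane.doe@example.com, job-board` tells the recruiting ops team who was filtered, when, and from which channel, without touching the ATS.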

If you would like to read more, we recommend this article: The Automated Recruiter’s Edge: Clean Data Workflows with Make Filtering & Mapping

Published On: August 13, 2025

Ready to Start Automating?

Let’s talk about what’s slowing you down—and how to fix it together.
