
Published On: August 13, 2025

How to Map Resume Data to ATS Custom Fields Using Make

Resume data hits your inbox in PDFs, plain text, HTML, and parsed JSON — none of it formatted to match your ATS custom fields. Manual transcription is where hiring data breaks: wrong data types, truncated values, blank required fields, and the kind of copy-paste error that turns a $103,000 offer into a $130,000 payroll entry. Automating this flow with Make™ eliminates that failure mode entirely.

This guide is one focused piece of a larger system. The parent resource — Master Data Filtering and Mapping in Make for HR Automation — covers the full data integrity framework. Here, we drill into the specific mechanics of mapping parsed resume data to ATS custom fields: trigger setup, parsing logic, type conversion, filtering, writing, and verification.


Before You Start

Skipping prerequisites is the fastest way to build a scenario that looks functional but silently corrupts candidate records. Confirm all of the following before opening Make™.

  • ATS API access: Confirm your ATS exposes a REST API or has a native Make™ connector. Identify whether you are writing to a sandbox or production environment — build and test in sandbox.
  • Custom field schema documentation: Pull the exact API field names, data types (string, integer, boolean, date, enum), required/optional status, and — for dropdowns — the accepted option values or IDs. This is the most time-consuming prerequisite and the most important one.
  • Resume source access: Know where resumes enter your pipeline — email inbox, ATS upload portal, career site webhook, or job board feed — and confirm you can trigger an automated flow from that source.
  • Make™ account with sufficient operations: Complex parsing scenarios consume more operations than simple data transfers. Estimate module count before choosing a plan tier.
  • Error logging destination: Set up a Google Sheet, Airtable base, or data store before building the scenario. You need somewhere to route failed records from day one.
  • Time estimate: Four to eight hours for a single resume source with five to eight custom fields. Add time for AI parsing integration and multi-source routing.

Step 1 — Audit and Document Your ATS Custom Fields

Start with your ATS, not with Make™. Every mapping error traces back to wrong assumptions about field names and data types.

Open your ATS admin panel or API documentation and create a mapping table with four columns: Human Label (what recruiters see), API Field Name (exact string used in API calls), Data Type, and Accepted Values. Work through every custom field you intend to populate.

Common custom fields in recruiting ATS builds include:

  • Years of total experience (integer)
  • Highest education level (enum/dropdown)
  • Primary technical skill (string or multi-select)
  • Visa sponsorship required (boolean)
  • Target salary range (integer or string)
  • Source channel (enum — values vary by ATS)
  • Certifications held (multi-select or text array)

For enum and multi-select fields, pull the exact accepted values from your ATS API. If your ATS expects bachelor_degree but your parsing returns Bachelor's, the write fails silently. Document the exact strings now.

Export this table. You will reference it in every subsequent step.
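One way to make the audit table machine-checkable is to mirror it in a small data structure. The sketch below uses hypothetical field names and accepted values — substitute the exact strings from your own ATS schema documentation.

```python
# A sketch of the Step 1 audit table as a dict, keyed by API field name.
# All field names and accepted values here are illustrative examples,
# not any particular ATS's actual schema.
FIELD_AUDIT = {
    "total_experience_years": {
        "label": "Years of total experience", "type": "integer",
        "required": True, "accepted": None,
    },
    "highest_education_level": {
        "label": "Highest education level", "type": "enum",
        "required": True,
        "accepted": ["bachelor_degree", "master_degree", "doctorate", "other"],
    },
    "visa_sponsorship_required": {
        "label": "Visa sponsorship required", "type": "boolean",
        "required": False, "accepted": None,
    },
}

def accepted_values(field_name: str):
    """Return accepted enum values for a field, or None when any value is allowed."""
    return FIELD_AUDIT[field_name]["accepted"]
```

Keeping the table in this form lets later steps (type conversion, filtering) reference one source of truth instead of hard-coded strings scattered across modules.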


Step 2 — Configure Your Trigger Module

The trigger is what starts the scenario. Your trigger choice depends on how resumes enter your pipeline.

Option A: Webhook (recommended for real-time processing)

Create a new Make™ scenario and add a Webhooks > Custom Webhook module as the first module. Make™ generates a unique endpoint URL. Configure your resume source — career site form, ATS upload hook, or email parser — to POST resume data to that URL whenever a new resume is received. This approach processes resumes in seconds rather than on a polling schedule.

Option B: Email trigger (for email-based resume submission)

If candidates submit resumes by email, use an Email > Watch Emails module pointed at a dedicated inbox. The module fires when a new message arrives. Attachments are accessible as file bundles in subsequent modules. Set the polling interval to the minimum your plan allows — fifteen minutes is typical.

Option C: ATS polling (for ATS-initiated flows)

Use your ATS’s native Make™ connector with a Watch Records module to detect new candidate records or newly uploaded resume files. This works when resumes are uploaded directly to the ATS and you want to enrich those records with additional parsed data after the fact.

Whichever trigger you choose, run it once with a real test resume and confirm the data bundle structure before moving to the next step. Every downstream module depends on what the trigger exposes.


Step 3 — Extract Resume Text from File or Payload

Before you can parse data, you need raw text. How you get it depends on what the trigger delivers.

If the trigger delivers JSON

A webhook payload from a structured source (some ATS platforms, career site forms with structured fields) may already contain key-value pairs. Map directly to your field audit table. Skip to Step 4.

If the trigger delivers a PDF or Word file

Add a conversion step. Make™ includes a Tools > Parse PDF function for simple PDFs. For complex multi-column layouts, route the file to a document parsing service via the HTTP > Make a Request module. The output is a text string containing the resume’s full content.

If the trigger delivers plain text or HTML

No conversion needed. Pass the text string directly to the parsing step.

Add a router branch at this step that checks the incoming file MIME type or content structure and routes each resume type to the appropriate extraction path. This keeps your scenario clean and avoids hard failures when resume format varies — which it always does.
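The router's branching logic can be sketched as a single dispatch function. This is a minimal illustration, assuming the trigger exposes the file's MIME type; the route names are placeholders for your scenario's actual branches.

```python
# A sketch of the router's MIME-type branching, with illustrative route names.
def extraction_route(mime_type: str) -> str:
    """Pick the extraction path for an incoming resume file."""
    if mime_type == "application/pdf":
        return "pdf_parser"      # simple PDF extraction, or an external service
    if mime_type in (
        "application/msword",
        "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    ):
        return "doc_converter"   # document parsing service via HTTP module
    if mime_type in ("text/plain", "text/html"):
        return "passthrough"     # text goes straight to the parsing step
    return "error_log"           # unknown format: route to manual review
```

The fallback branch matters most: an unrecognized format should land in the error log, never proceed to parsing with garbage input.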

For more on building branching logic in Make™, see the guide on Make™ modules that power HR data transformation.


Step 4 — Parse Structured Fields with Regex, Parse Unstructured Fields with AI

Parsing is where most teams over-engineer. The rule is simple: use deterministic tools for predictable patterns, AI for genuine ambiguity.

Deterministic parsing (regex and built-in functions)

Add a Text Parser > Match Pattern module for each field with a predictable format. Write a regex pattern for each target field:

  • Email: [a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}
  • Phone (US): (\+?1[-.\s]?)?\(?[0-9]{3}\)?[-.\s]?[0-9]{3}[-.\s]?[0-9]{4}
  • LinkedIn URL: linkedin\.com/in/[a-zA-Z0-9\-]+
  • Years of experience (numeric): (\d+)\+?\s*years?
  • Graduation year: (19|20)\d{2} (within education section context)
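Before pasting a pattern into a Text Parser module, it is worth verifying it locally. The sketch below applies two of the patterns above in Python against a made-up sample line; the sample text and names are illustrative.

```python
import re

# Two of the patterns above, tested locally before configuring the
# Text Parser modules. The sample resume line is fabricated.
EMAIL = re.compile(r"[a-zA-Z0-9._%+\-]+@[a-zA-Z0-9.\-]+\.[a-zA-Z]{2,}")
YEARS = re.compile(r"(\d+)\+?\s*years?")

text = "Jane Doe <jane.doe@example.com>, 7+ years of backend experience"
email = EMAIL.search(text)   # matches the address
years = YEARS.search(text)   # capture group 1 holds the digits
```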

For a deeper treatment of regex patterns in HR data workflows, see the satellite on using regex in Make™ for HR data cleaning.

AI parsing (for unstructured narrative content)

For skills lists, job summaries, certifications buried in paragraph text, and education level inference, add an AI parsing module after text extraction. Configure the prompt to return a JSON object with the exact field names from your Step 1 audit table. A prompt structure that works:

Extract the following fields from the resume text below and return them as a JSON object with these exact keys: [list your field names]. If a field cannot be determined, return null for that key. Do not infer or guess — return null rather than an approximation.

The null-on-uncertainty instruction is critical. Guessed values are worse than blank values — they pass downstream filters and populate ATS fields with wrong data that recruiters trust.

After the AI module, add a Tools > Parse JSON module to convert the AI’s text output into a structured Make™ data bundle. Every subsequent module works with discrete mapped fields, not a raw string.
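A useful companion to the Parse JSON module is a schema check on the AI output. The sketch below assumes a hypothetical three-key field list; missing keys and extra keys both signal that the prompt needs tightening.

```python
import json

# A sketch of validating the AI module's JSON output before mapping.
# EXPECTED_KEYS is an illustrative subset of a Step 1 audit table.
EXPECTED_KEYS = {"total_experience_years", "highest_education_level", "technical_skills"}

def validate_ai_output(raw: str) -> dict:
    parsed = json.loads(raw)                # raises ValueError on malformed JSON
    missing = EXPECTED_KEYS - parsed.keys()
    extra = parsed.keys() - EXPECTED_KEYS
    if missing or extra:
        raise ValueError(f"schema drift: missing={missing}, extra={extra}")
    return parsed

sample = ('{"total_experience_years": 7, "highest_education_level": null,'
          ' "technical_skills": ["Python"]}')
record = validate_ai_output(sample)
```

Note that a null value passes the check by design: the prompt explicitly asks for null on uncertainty, and the Step 6 filter is where nulls on required fields get rejected.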


Step 5 — Apply Type Conversion Functions

Type conversion is the highest-frequency failure point in ATS write operations. Your ATS expects specific data types. Parsed text is always a string. You must convert before writing.

Use Make™’s built-in functions in the mapping panel of your next module:

  • String to integer: toNumber({{parsed.years_experience}}) — converts “7” to 7
  • String to date: parseDate({{parsed.start_date}}; "MM/DD/YYYY") — formats to your ATS date format
  • String to boolean: if({{parsed.visa_required}} = "yes"; true; false)
  • Free text to enum: Use a switch function to map parsed values to ATS-accepted option strings: switch({{parsed.education_level}}; "Bachelor's"; "bachelor_degree"; "Master's"; "master_degree"; "PhD"; "doctorate"; "other")
  • Text list to array: Use split({{parsed.skills}}; ", ") to convert a comma-separated skills string to an array for multi-select fields

Build these conversions inline in the mapping panel rather than as separate modules. Fewer modules means fewer operations consumed and easier debugging.
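The conversions above can be verified locally before wiring the mapping panel. The Python sketch below mirrors each Make function with illustrative input values; the field names and enum map are examples, not your ATS's schema.

```python
from datetime import datetime

# Python equivalents of the Make conversions above, for checking expected
# outputs locally. All input values and field names are illustrative.
EDUCATION_MAP = {"Bachelor's": "bachelor_degree",
                 "Master's": "master_degree",
                 "PhD": "doctorate"}

years = int("7")                                      # string to integer
start = datetime.strptime("08/13/2025", "%m/%d/%Y")   # string to date
visa = "yes".strip().lower() == "yes"                 # string to boolean
education = EDUCATION_MAP.get("Master's", "other")    # free text to enum
skills = "Python, SQL, Terraform".split(", ")         # text list to array
```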


Step 6 — Filter Incomplete Records Before Writing

A partial write is worse than no write. Add a Filter module immediately before your ATS write module. The filter checks that every required field from your Step 1 audit is populated.

Set the filter conditions:

  • Required text fields: value does not equal blank AND value is not null
  • Required numeric fields: value is a number
  • Required enum fields: value is one of the accepted values list

Configure the scenario’s error handling so records that fail this filter route to your error log (the destination you set up in prerequisites), not to Make™’s default error handler. Add a Slack or Email notification so a recruiter knows within minutes that a record needs manual review.
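In Make these checks are filter conditions, not code, but expressing them as one predicate makes the logic testable before you configure the module. The sketch below assumes a record dict keyed by hypothetical ATS field names.

```python
# A sketch of the three filter checks as a single predicate.
# Field names and the accepted enum set are illustrative examples.
ACCEPTED_EDUCATION = {"bachelor_degree", "master_degree", "doctorate", "other"}

def passes_filter(record: dict) -> bool:
    # Required text field: not blank and not null
    name_ok = bool(record.get("candidate_name"))
    # Required numeric field: value is a number
    years_ok = isinstance(record.get("total_experience_years"), int)
    # Required enum field: value is one of the accepted values
    edu_ok = record.get("highest_education_level") in ACCEPTED_EDUCATION
    return name_ok and years_ok and edu_ok
```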

For the full error handling architecture that makes this work reliably at scale, see the guide on error handling in Make™ for resilient workflows. The essential Make™ filters for recruitment data satellite covers filter configuration patterns in detail.


Step 7 — Write Parsed Data to ATS Custom Fields

With clean, typed, validated data in hand, add your ATS write module. Use your ATS’s native Make™ connector if one exists — it handles authentication and field discovery automatically. For ATS platforms without native connectors, use the HTTP > Make a Request module with your ATS REST API endpoint.

In the module configuration, map each converted value to its corresponding ATS field using the exact API field names from your Step 1 audit table:

  • Map toNumber({{parsed.years_experience}}) → ATS field total_experience_years
  • Map converted education enum → ATS field highest_education_level
  • Map split skills array → ATS field technical_skills
  • Map boolean visa flag → ATS field visa_sponsorship_required

Set the module’s error handling to Break with resume functionality enabled. This means a failed write pauses the scenario rather than skipping the record silently, giving you a recoverable state to investigate.

If you are enriching an existing candidate record rather than creating a new one, use an Update Record operation and pass the candidate’s existing ATS ID from the trigger bundle. For new records, use Create Record and capture the returned record ID for any subsequent steps.
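For ATS platforms without a native connector, the HTTP module sends a JSON body you construct yourself. The sketch below builds such a body under the assumption of a hypothetical `POST /candidates` endpoint; your ATS's actual endpoint, authentication, and field names will differ.

```python
import json

# A sketch of the request body an HTTP > Make a Request module would send
# to a hypothetical ATS REST endpoint. Field names follow the illustrative
# Step 1 audit table, not any real ATS schema.
def build_create_payload(mapped: dict) -> str:
    body = {
        "total_experience_years": mapped["years"],
        "highest_education_level": mapped["education"],
        "technical_skills": mapped["skills"],
        "visa_sponsorship_required": mapped["visa"],
    }
    return json.dumps(body)

payload = build_create_payload({"years": 7, "education": "master_degree",
                                "skills": ["Python", "SQL"], "visa": False})
```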


Step 8 — Log Every Transaction

Add a final module to write a log entry for every processed resume — success or failure. Route to a Google Sheet or data store with these columns:

  • Timestamp
  • Candidate name (if parsed)
  • Resume source
  • ATS record ID (if write succeeded)
  • Fields successfully written
  • Fields that failed or were null
  • Status: Success / Filter Rejected / Write Failed

This log is your operational dashboard. It tells you which resume formats cause the most parsing failures, which fields have the highest null rates, and whether AI parsing accuracy is improving or degrading after prompt changes. Parseur’s research estimates that manual data entry errors cost organizations approximately $28,500 per employee per year — your log is what proves automation is eliminating that cost in your specific context.
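The columns above map naturally to one row per processed resume. The sketch below builds such a row, assuming the Google Sheet destination; the candidate name and record ID are fabricated examples.

```python
from datetime import datetime, timezone

# A sketch of one log row matching the column list above. Status values
# mirror the three outcomes: Success / Filter Rejected / Write Failed.
def log_row(candidate, source, ats_id, written, failed, status):
    return [
        datetime.now(timezone.utc).isoformat(),  # Timestamp
        candidate or "(not parsed)",             # Candidate name (if parsed)
        source,                                  # Resume source
        ats_id or "",                            # ATS record ID (if write succeeded)
        ", ".join(written),                      # Fields successfully written
        ", ".join(failed),                       # Fields that failed or were null
        status,                                  # Status
    ]

row = log_row("Jane Doe", "webhook", "cand_123",
              ["total_experience_years", "technical_skills"], [], "Success")
```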


How to Know It Worked

Run the completed scenario with five structurally different test resumes before declaring it production-ready: a PDF with two columns, a plain-text file, a Word document, a minimalist one-pager, and a dense multi-page resume. For each, verify the following:

  • ATS record exists with the correct candidate name and contact fields
  • All required custom fields are populated — open the ATS record and check every field from your audit table
  • Data types are correct — numeric fields contain numbers, date fields contain dates, booleans are true/false, enums match accepted values
  • Error log is clean — no unexpected filter rejections or write failures for valid test resumes
  • Incomplete resumes route to error log — test with a deliberately stripped resume and confirm it does not reach the ATS

After going live, spot-check ten real candidate records per week for the first four weeks. Field-level accuracy should exceed 98% for structured regex-parsed fields and 90% for AI-parsed narrative fields. If accuracy falls below those thresholds, review the error log for patterns before adjusting parsing logic.

McKinsey research on intelligent process automation identifies data input standardization as the prerequisite for any downstream analytical value — your ATS mapping scenario is that standardization layer. Get it right and every report, scorecard, and hiring metric downstream improves automatically.


Common Mistakes and Troubleshooting

Mistake: Building the scenario before documenting field schema

You will hit silent write failures on every enum and multi-select field. Stop, pull the accepted values from your ATS API docs, and add switch functions before the write module.

Mistake: Using AI parsing for every field

AI parsing is slower, more expensive, and occasionally wrong on fields with deterministic patterns. A correct regex extracts a well-formed phone number every time; AI extracts it correctly most of the time, and its failures are random rather than predictable.

Mistake: No filter before the ATS write

Partial records reach the ATS, look complete to recruiters, and cause downstream errors. Add the filter. Route rejects to a log. Never skip this step.

Mistake: Testing with only one resume format

The scenario works perfectly on the format you tested. It fails silently on the format a candidate actually submits on day three. Test structural variety, not volume.

Mistake: Not logging failed records

Failed records disappear into Make™’s execution history, which expires. Candidates get lost. Add the log module before go-live.

Troubleshooting: AI returns inconsistent JSON structure

Add explicit JSON schema instructions to your prompt. Specify that all keys must be present (null if unknown) and that no additional keys should be added. Parse the JSON in a separate module before mapping, and add a null-check filter on each AI-sourced field.

Troubleshooting: ATS write fails with 422 error

A 422 (Unprocessable Entity) from most ATS APIs means a field value doesn’t match the expected format — usually a type mismatch or an enum value the ATS doesn’t recognize. Check the field your API response identifies, trace it back to the conversion step, and verify the switch or coercion function output.


Next Steps: Extending the Scenario

Once the core mapping scenario is stable, three extensions compound the value significantly:

  • Duplicate detection: Before the ATS write step, query the ATS for existing records matching the candidate’s email. Route matches to a duplicate review log rather than creating a second record. The satellite on filtering candidate duplicates with Make™ covers this pattern.
  • Offer letter generation: Once a candidate reaches the offer stage, the same mapped ATS fields can drive automated offer document generation. See the guide on automating job offer letters with Make™ data mapping.
  • Cross-system sync: Map the ATS write to trigger a corresponding record creation in your HRIS for candidates who accept offers — eliminating the second round of manual data entry. The guide on connecting ATS, HRIS, and payroll with Make™ walks through the integration architecture.

Accurate ATS data is the foundation every downstream HR process — reporting, compliance, compensation benchmarking — depends on. This scenario is one component of that foundation. The full system is covered in the parent resource: Master Data Filtering and Mapping in Make for HR Automation.