How to Master Make™ Mapping Functions for HR Data Transformation
Raw HR data breaks automation. Not because your scenario logic is wrong — because an ATS sends a single “Full Name” string when your HRIS expects two separate fields, or a job board delivers skills as a comma-separated blob when your talent platform wants individual tags. These mismatches are not edge cases. They are the default state of every HR tech stack that hasn’t been explicitly mapped.
Make™ mapping functions are the deterministic translation layer that resolves those mismatches before they corrupt a record. This guide walks through every step of configuring them correctly — from prerequisites through verification — so your HR scenarios run on real candidate data without manual intervention. For the strategic context of where mapping fits in a complete data pipeline, start with the parent pillar: Master Data Filtering and Mapping in Make for HR Automation.
Before You Start
Complete these prerequisites before touching a single mapping panel. Skipping them is the primary reason mapping configurations fail in production.
- Export real data from every source system. Download at least 20 actual records from your ATS, job board, or HRIS — not demo data. Real exports surface inconsistent capitalization, missing fields, and non-standard phone formats that clean sample data hides.
- Document field requirements for every destination system. Identify exact field names, accepted data types (string, integer, boolean, array), and enumerated option values (the ATS “Employment Type” field likely expects “Full-time” not “Full Time” — that space matters).
- Map the gap list on paper first. Before opening Make™, write down every source field and its destination field. Flag mismatches in type, format, or structure. These flags become your mapping function checklist.
- Have the Make™ scenario open in a test environment. Never develop mapping logic against a live scenario. Use a cloned version connected to sandbox credentials.
- Time required: 2–4 hours for a typical ATS-to-HRIS integration with 15–25 mapped fields. Complex aggregations add another 1–2 hours.
- Risk: Incorrectly mapped fields that reach payroll or compliance records are expensive to remediate. Harvard Business Review research on data quality confirms that bad data fed into downstream systems compounds errors faster than manual processes can correct them.
Step 1 — Open the Mapping Panel and Inspect Raw Field Output
Before writing a single function, confirm exactly what your source module is actually sending. Make™ shows you the raw bundle output from every module run — use it.
Run your scenario once with a real record. Click the output bubble on your source module (the small circle that appears after a run). You will see the full data bundle: every field name, its value, and its data type. This is your ground truth. Compare it against the destination field requirements you documented in prerequisites.
Common discoveries at this step:
- The “Full Name” field contains “SMITH, John” in last-first format — not “John Smith” — so a simple split() won’t produce the right first/last output without additional transformation.
- A date field arrives as a Unix timestamp (1711497600) rather than a human-readable string.
- A boolean “Active” field arrives as the string “true” rather than a boolean value, which will fail a destination field expecting a true boolean.
- An array field arrives flattened as a single comma-separated string.
Document every discrepancy. Each one requires a specific function in Step 2 or Step 3.
Step 2 — Apply Text Functions to Resolve String Mismatches
Text functions handle the largest category of HR field mismatches. Configure them directly inside the destination field mapping panel by clicking the field and using the function picker.
Splitting a Full Name into First and Last
Use split(Full Name; " ") to produce an array, then map the first element to First Name and the second to Last Name. In Make™ formula syntax, individual array elements are read with get(): get(split(Full Name; " "); 1) and get(split(Full Name; " "); 2). If your source data uses last-first comma format (“Smith, John”), use split(Full Name; ", ") and swap the index values.
Based on our testing, add a trim() wrapper around each output to eliminate leading or trailing spaces that cause lookup failures in downstream systems: trim(get(split(Full Name; " "); 1)).
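Outside Make™, the same split-and-trim logic can be sketched in Python to see exactly what the function chain should produce. This is an illustration of the pattern, not Make™ code, and the field values are hypothetical:

```python
def split_full_name(full_name: str) -> tuple[str, str]:
    """Split 'First Last' or 'LAST, First' into (first, last).

    Mirrors the mapping pattern above: split on the separator,
    then trim each piece so stray spaces don't break lookups.
    """
    full_name = full_name.strip()
    if "," in full_name:
        # Last-first comma format: "Smith, John" -> ("John", "Smith")
        last, first = (part.strip() for part in full_name.split(",", 1))
    else:
        # Space format: "John Smith" -> ("John", "Smith")
        first, last = (part.strip() for part in full_name.split(" ", 1))
    return first, last

print(split_full_name("SMITH, John"))    # ('John', 'SMITH')
print(split_full_name("  John Smith "))  # ('John', 'Smith')
```

Note that a single-word name ("Cher") breaks this logic in either environment, which is exactly the kind of edge case Step 7 is designed to catch.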
Standardizing Phone Numbers
Phone formats from job boards are notoriously inconsistent. Use replace() to strip non-numeric characters, then reformat: replace(replace(replace(Phone; "-"; ""); "("; ""); ")"; ""). Chain as many replace() calls as your source data requires.
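The chained replace() calls behave like stripping one formatting character at a time. A Python sketch of the same normalization (the character list and target format are assumptions; adjust to your source data):

```python
def normalize_phone(raw: str) -> str:
    """Strip formatting characters, mirroring chained replace()
    calls in the mapping panel: one replace per character."""
    for ch in ["-", "(", ")", ".", " "]:
        raw = raw.replace(ch, "")
    return raw

print(normalize_phone("(555) 123-4567"))  # 5551234567
print(normalize_phone("555.123.4567"))    # 5551234567
```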
Fixing Capitalization
ATS enumerated fields are case-sensitive. If your source sends “full-time” and your destination expects “Full-time,” use capitalize(Employment Type) or build an explicit if() statement that maps each incoming variant to the exact accepted value.
Converting Data Types
Use toString() or parseNumber() to resolve type mismatches; verify the exact function names in the current Make™ function reference, since not every type has a dedicated converter. A string “true” that needs to become a real boolean is handled most reliably with an explicit conditional: if(Active = "true"; true; false).
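The enumeration fix and the type coercion share the same underlying logic: explicitly map every incoming variant to the exact value the destination accepts. A Python sketch (the variant list and accepted values are illustrative):

```python
# Explicit variant-to-accepted-value mapping for an enumerated field,
# plus string-to-boolean coercion. Never assume the incoming value
# will happen to match the destination's accepted spelling.
EMPLOYMENT_TYPE_MAP = {
    "full-time": "Full-time",
    "full time": "Full-time",
    "fulltime": "Full-time",
    "part-time": "Part-time",
    "contractor": "Contractor",
}

def normalize_employment_type(value: str) -> str:
    # Unknown variants pass through unchanged so they surface in testing
    return EMPLOYMENT_TYPE_MAP.get(value.strip().lower(), value)

def to_bool(value: str) -> bool:
    # A destination expecting a true boolean rejects the string "true"
    return value.strip().lower() in ("true", "1", "yes")

print(normalize_employment_type("Full Time"))  # Full-time
print(to_bool("true"))                         # True
```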
Step 3 — Convert Date Formats with formatDate()
Date format mismatches are the most common cause of integration failures between ATS platforms and HRIS or payroll systems. A single formatDate() call in the mapping panel resolves them permanently.
The function signature is: formatDate(date; format; [timezone])
Common HR use cases:
- ISO 8601 to MM/DD/YYYY: formatDate(Start Date; "MM/DD/YYYY")
- Unix timestamp to readable date: formatDate(parseDate(Timestamp; "X"); "YYYY-MM-DD")
- Adding timezone context for multi-location teams: formatDate(Interview Date; "YYYY-MM-DD HH:mm"; "America/Chicago")
Always confirm the exact format string your destination system requires. Payroll platforms in particular often have strict format requirements that differ from ATS defaults. Test with three to five real records before closing this step — date parsing errors are silent when the format is close but wrong (e.g., “2026-3-15” vs. “2026-03-15”).
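To see what the conversions above should produce, here is the equivalent logic in Python's standard datetime module (format strings differ from Make™'s tokens; the sample values are illustrative):

```python
from datetime import datetime, timezone

# ISO 8601 -> MM/DD/YYYY, mirroring formatDate(Start Date; "MM/DD/YYYY")
start = datetime.fromisoformat("2026-03-15")
print(start.strftime("%m/%d/%Y"))  # 03/15/2026

# Unix timestamp -> YYYY-MM-DD, mirroring the parseDate(...; "X") chain
ts = datetime.fromtimestamp(1711497600, tz=timezone.utc)
print(ts.strftime("%Y-%m-%d"))  # 2024-03-27
```

Zero-padding is the detail to watch in both environments: %m/%d (and Make™'s MM/DD) always emit two digits, which is what the “2026-3-15” vs. “2026-03-15” caution above is about.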
For more on mapping complex field structures, see the companion guide on mapping resume data to ATS custom fields.
Step 4 — Parse Array Fields with split() and Iterator
Multi-value HR fields — skills, certifications, languages, department tags — frequently arrive as a single comma-separated string from job boards and resume parsers. Destination systems that expect individual values will silently fail or create a single malformed tag if you map the string directly.
The correct pattern uses two Make™ components working in sequence:
- split() in the mapping panel: Apply split(Skills; ",") to convert the string into an array within the current module’s output.
- Iterator module: Add a Make™ Iterator module immediately after. Map the array output to the Iterator’s Array field. The Iterator processes each element individually, emitting one bundle per value.
- Destination mapping: In the module that writes to your ATS or HRIS, map the Iterator’s Value output to the individual tag or skill field. Each iteration creates one tag record.
Watch for spaces after commas: job board data frequently arrives as “Python, SQL, Excel”, which produces tag values with leading spaces (” SQL”, ” Excel”) that won’t match existing tag taxonomies. Wrapping the whole string, as in split(trim(Skills); ","), only removes spaces at the ends of the string, not around each comma. Instead, apply trim() where you map the Iterator’s output: trim(Value). Alternatively, split on the two-character separator ", " if the source format is consistent.
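A Python sketch of the split-and-clean step shows why each element needs its own trim (the skills string is illustrative):

```python
def split_skills(raw: str) -> list[str]:
    """Split a comma-separated skills string into clean tags.
    Trimming applies to each element, not the whole string,
    otherwise ' SQL' and ' Excel' keep their leading spaces."""
    return [tag.strip() for tag in raw.split(",") if tag.strip()]

print(split_skills("Python, SQL, Excel"))  # ['Python', 'SQL', 'Excel']
print(split_skills(" Excel ,  VBA "))      # ['Excel', 'VBA']
print(split_skills(""))                    # [] -- empty field emits no tags
```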
For a deeper look at Iterator and Aggregator combinations, the dedicated guide on Iterator and Aggregator modules for complex HR data covers multi-record aggregation patterns in full.
Step 5 — Apply Conditional Mapping for Employment-Type Routing
Not every incoming record should follow the same transformation path. Full-time hires, contractors, and part-time employees each trigger different onboarding document sets, different benefits eligibility flags, and different HRIS record structures. Building separate scenarios for each employment type is maintenance-heavy and unnecessary. Conditional mapping handles all three paths inside a single scenario.
Option A — if() Function in the Mapping Panel
For simple two-branch decisions, use Make™’s if() function directly inside a destination field: if(Employment Type = "Contractor"; "CONTR"; "EMP"). This maps the incoming value to the exact enumerated code your HRIS expects without any additional modules.
Option B — Router with Filter Conditions
For three or more branches (Full-time, Part-time, Contractor), add a Router module after the trigger. Configure one route per employment type using Make™ filter conditions on the employment type field. Each route contains its own mapping configuration tailored to that path’s destination fields.
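The Router pattern amounts to a lookup from employment type to a branch-specific mapping configuration, plus a fallback for unmatched values. A Python sketch (branch codes, field names, and document-set names are all hypothetical):

```python
# One route per employment type, each with its own destination mapping
ROUTES = {
    "Full-time":  {"code": "EMP",   "docs": "ft_onboarding_packet"},
    "Part-time":  {"code": "PT",    "docs": "pt_onboarding_packet"},
    "Contractor": {"code": "CONTR", "docs": "contractor_agreement"},
}

def route_record(record: dict) -> dict:
    branch = ROUTES.get(record["employment_type"])
    if branch is None:
        # Mirrors a Router fallback route: flag for manual review
        # rather than writing a malformed record downstream
        return {**record, "route": "manual_review"}
    return {**record, "hris_code": branch["code"], "doc_set": branch["docs"]}

print(route_record({"name": "J. Smith", "employment_type": "Contractor"}))
```

The fallback branch is the part worth copying: an employment type that matches no route should surface for review, not silently take a default path.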
This is the pattern that powers the scenario Nick’s staffing team uses to route the 30–50 PDF resumes received weekly into the correct talent pools without manual sorting — each resume’s employment preference value determines which downstream mapping path it follows.
For the full logic framework behind conditional routing, see essential Make™ filters for recruitment data.
Step 6 — Aggregate Iteration Outputs into Dashboard-Ready Summaries
Iterators break collections apart. Aggregators reassemble them. For HR reporting use cases — consolidating quarterly performance reviews, summing application counts by source, or building a single onboarding status object from multiple task records — the Iterator + Aggregator pattern is the correct approach.
Configure the pattern:
- Add an Array Aggregator module after your Iterator and transformation steps.
- In the Aggregator settings, set “Source Module” to your Iterator module. This tells Make™ which bundle series to collect.
- Map the specific fields you want in the aggregated output to the Aggregator’s “Mapped items.”
- The Aggregator emits a single bundle containing an array of all processed items — ready to be written to a dashboard tool, a Google Sheet, or an HR analytics platform.
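Conceptually, the Iterator + Aggregator pattern is a break-apart, transform, reassemble loop. A Python sketch of the data flow (the records and summary shape are illustrative):

```python
from collections import Counter

applications = [
    {"candidate": "A", "source": "job_board"},
    {"candidate": "B", "source": "referral"},
    {"candidate": "C", "source": "job_board"},
]

# "Iterate": process each bundle individually (here, a label cleanup)
processed = [{**app, "source": app["source"].replace("_", " ")}
             for app in applications]

# "Aggregate": collapse the bundle series back into one summary
# object, ready for a single dashboard or spreadsheet write
summary = {
    "total": len(processed),
    "by_source": dict(Counter(a["source"] for a in processed)),
}
print(summary)  # {'total': 3, 'by_source': {'job board': 2, 'referral': 1}}
```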
Deloitte’s Human Capital Trends research consistently identifies real-time workforce analytics as a top priority for HR leaders. The Iterator + Aggregator pattern is the mechanism that makes real-time aggregation from transactional HR records practical without a dedicated data warehouse. For a broader look at the modules that power these workflows, see essential Make™ modules for HR data transformation.
Step 7 — Validate with Real Edge-Case Data Before Going Live
Clean sample data hides every mapping error that will surface in production. Run your scenario against a batch of real historical records that includes:
- Records with missing optional fields (no middle name, no LinkedIn URL, no skills listed)
- Records with non-standard capitalization or special characters in names
- Records at the boundary of field length limits
- Records where the employment type value doesn’t match any of your expected enumerated options
- Duplicate records that test whether your filters (configured separately) correctly prevent double-processing
For each test run, inspect the output bundle of every module in the chain — not just the final destination. A field can be correctly formatted at the source module and silently dropped or truncated by an intermediate transformation step.
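The edge-case batch above can be codified as a pre-flight check so the same records get replayed after every mapping change. A minimal Python sketch; the field names and the two checks are illustrative, not an exhaustive validation suite:

```python
# Edge-case records that clean sample data never includes
edge_cases = [
    {"full_name": "Mary-Anne O'Brien", "skills": "Python, SQL"},   # special chars
    {"full_name": "Cher", "skills": ""},                           # single-word name, empty field
    {"full_name": "SMITH, John Jr.", "skills": " Excel ,  VBA "},  # suffix, stray spaces
]

def validate(record: dict) -> list[str]:
    """Return a list of problems a mapping run would hit."""
    errors = []
    if " " not in record["full_name"] and "," not in record["full_name"]:
        errors.append("single-word name: last-name field will be empty")
    if not record["skills"].strip():
        errors.append("no skills: iterator will emit zero bundles")
    return errors

for rec in edge_cases:
    print(rec["full_name"], "->", validate(rec) or "OK")
```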
The data-quality principle here is well-established: Labovitz and Chang’s 1-10-100 rule (cited in MarTech research) holds that errors cost $1 to catch at entry, $10 to correct later, and $100 when acted upon in a corrupted state. In HR, “acted upon” means a payroll record, an offer letter, or a compliance filing. Catch mismatches in this step.
For patterns that catch errors at the workflow level rather than the field level, the guide on essential Make™ modules for HR data transformation covers error handlers and fallback routing.
How to Know It Worked
Your mapping configuration is production-ready when all of the following are true:
- Zero field-type errors in destination systems. Every mapped field writes successfully without type rejection errors from the ATS or HRIS API.
- Name fields split correctly across a sample of 20+ records, including hyphenated names, single-word names, and names with suffixes (Jr., III).
- Date fields display in the exact format the destination system renders — verify by opening the written record in the destination platform UI, not just checking the API response.
- Skill tags create as individual values, not as a single comma-string tag in the destination ATS.
- Conditional routing places each employment type on the correct path — verify by running one test record per employment type and checking the downstream outcome for each.
- Aggregated summary outputs contain the correct record count and accurate field values — spot-check three to five individual source records against their contribution to the aggregate.
If any check fails, return to the step that governs that output and re-inspect the raw bundle data before adjusting the function.
Common Mistakes and How to Fix Them
Mapping Against Demo Data Instead of Real Exports
Demo data is always clean. Real HR exports contain inconsistencies that only appear in production. Always export and test against actual records from your live systems before activating a scenario.
Ignoring Enumerated Field Constraints
ATS and HRIS fields that accept only specific values (“Active,” “Inactive,” “Pending”) will silently ignore or error on non-matching inputs. Map every enumerated destination field to a conditional function that explicitly converts each incoming variant to its accepted value — don’t assume the incoming value will match.
Building Separate Scenarios for Each Employment Type
Three separate scenarios for full-time, part-time, and contractor onboarding triples your maintenance burden every time a field changes. Use a Router with conditional mapping inside a single scenario. Changes propagate once, not three times.
Skipping the trim() Wrapper
Spaces before or after a value are invisible in the mapping panel but cause lookup failures, duplicate tag creation, and broken conditional logic in destination systems. Wrap every string output in trim() as a default practice.
Using AI Where a Mapping Function Will Do
AI augmentation belongs at judgment points — scoring unstructured interview notes, flagging unusual compensation requests, classifying ambiguous job titles. Splitting a name field, converting a date format, or routing by employment type are deterministic operations that a mapping function executes with zero error rate and zero latency. Reserve AI for what rules genuinely cannot decide. The parent pillar on HR data filtering and mapping covers this sequencing in full.
What to Build Next
With mapping functions correctly configured, your HR scenario has a reliable data translation layer. The natural next expansions are:
- Automate offer letter generation using mapped candidate fields — see the guide on automating job offer letters with data mapping for the full pattern.
- Add RegEx cleaning for unstructured text fields like resume summaries or notes — covered in the guide on RegEx for HR data cleaning and standardization.
- Eliminate remaining manual data entry touchpoints across your HR tech stack — the guide on eliminating manual HR data entry with automation covers the highest-impact entry points.
Parseur’s Manual Data Entry Report estimates the cost of a manual data-entry role at approximately $28,500 per year in time lost to rekeying tasks. In HR, that cost concentrates at exactly the touchpoints these mapping functions address. The configuration work in this guide is a one-time investment that eliminates recurring rework across every scenario run.