How Nick’s Staffing Firm Eliminated 150+ Hours of Monthly Data Processing by Mastering JSON, Arrays, and Objects in Make™

HR automation breaks at the data layer — not the logic layer. Misrouted candidates, silent field drops, and payroll entry errors trace back to one root cause: nobody engineered the JSON structure before building the workflow. This case study shows exactly how a 3-person staffing firm confronted that problem head-on, and what the fix looked like inside their data filtering and mapping in Make™ for HR automation stack.

Snapshot: Nick’s Resume Intake Pipeline

  • Entity: Nick — recruiter at a small staffing firm (3-person team)
  • Baseline volume: 30–50 PDF and JSON resumes per week across active job orders
  • Baseline time cost: 15 hrs/wk per recruiter on manual file processing, 150+ hrs/month for the team
  • Core constraint: ATS webhook payload contained nested arrays for skills, education, and work history; auto-mapping silently dropped all nested data
  • Approach: OpsMap™ discovery → explicit JSON Parse → iterator per nested array → field-level validation before ATS write
  • Outcome: 150+ hours/month reclaimed; zero nested-field data loss; scenario handles 5- to 50-item arrays without modification

Context: When Automation “Works” But the Data Is Wrong

Nick’s team was not automation-averse. They had already connected their application intake form to their ATS via webhook. The scenario ran without errors. What they did not know was that their candidate records were incomplete — every single one.

The application form collected structured data: contact details, a skills list, education history, and work experience. The ATS received contact details. Skills, education, and work history were absent from every record. The scenario showed green checkmarks in every execution log. There were no error alerts. The data was just gone.

This is the canonical nested-array failure pattern. The webhook payload arrived as a JSON object with top-level keys for name, email, and phone — and nested arrays for skills, education, and experience. Make™’s auto-mapping surface-scanned the top-level keys and mapped them correctly. The nested arrays were structured data, not simple strings, and auto-mapping did not descend into them. They were silently ignored.
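A minimal Python sketch makes the failure mode concrete. The field names follow the payload structure described in this case study; the values are hypothetical. The "surface scan" below mimics what auto-mapping effectively did — keep the flat string keys, never descend into structured values:

```python
import json

# Hypothetical webhook payload matching the structure described above
raw = json.dumps({
    "candidateId": "c-1042",
    "firstName": "Jane",
    "lastName": "Doe",
    "email": "jane@example.com",
    "phone": "555-0100",
    "skills": [{"name": "Python", "years": 4}, {"name": "SQL", "years": 6}],
    "education": [{"school": "State U", "degree": "BS"}],
    "workHistory": [{"employer": "Acme Staffing", "title": "Analyst"}],
})

payload = json.loads(raw)

# A "surface scan": keep only the flat string fields, as auto-mapping did
flat_fields = {k: v for k, v in payload.items() if isinstance(v, str)}

print(sorted(flat_fields))                       # contact fields only
print(sorted(set(payload) - set(flat_fields)))   # the silently dropped arrays
```

Every execution of this scan "succeeds" — which is exactly why the scenario showed green checkmarks while dropping most of each record.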

By the time Nick’s team identified the problem, three weeks of candidate records in the ATS contained only contact information. Reconstructing the missing data required manually re-entering skills and history from original application emails — the exact manual work the automation was built to eliminate.

According to Parseur’s Manual Data Entry Report, manual data entry costs organizations an average of $28,500 per employee per year in processing time and error correction. For a 3-person team spending 15 hours each week on file processing, that figure was not abstract — it was the operating reality the team was trying to escape.

Approach: OpsMap™ Before Any Module Gets Built

The rebuild started with an OpsMap™ discovery session, not with Make™. The purpose was to document the exact JSON schema of every payload the workflow would touch before a single module was configured.

OpsMap™ produced a data map with four columns for each integration point: field name in source system, data type, nesting depth, and corresponding field name in destination system. For Nick’s ATS webhook, that exercise immediately surfaced the structural reality the original build had missed:

  • Top-level keys: candidateId, firstName, lastName, email, phone — flat strings, map directly
  • Nested object: currentRole — a child object with its own keys (title, employer, startDate) requiring dot-notation access
  • Nested arrays: skills[], education[], workHistory[] — variable-length arrays of objects, each requiring a dedicated iterator before mapping
  • Field name mismatch: Source sent annualCompensation; ATS expected expectedSalary — a silent null on write without explicit mapping

The field name mismatch in the compensation field is worth pausing on. A null value writing to a salary field does not throw an error — it either writes null, writes zero, or gets ignored depending on the receiving system’s default behavior. This is the same class of error that turned a $103K offer letter into a $130K payroll entry for David, an HR manager at a mid-market manufacturer. The ATS-to-HRIS transcription mismatch cost $27K and ended in the employee’s resignation. The structural audit before build is what catches this before it becomes a financial event.

Gartner research consistently identifies data quality failures as a top driver of enterprise automation project failures. The APQC benchmarks on HR process efficiency reinforce that organizations with documented data mapping standards complete automation projects significantly faster and with fewer post-launch defects than those that begin building without them.

Implementation: The Four-Module Parsing Stack

With the data map complete, the build itself was straightforward. The architecture used four structural modules before any business logic ran:

Module 1 — Webhook + JSON Parse

The Make™ Custom Webhook module received the raw payload. Immediately downstream, a JSON Parse module converted the raw text into structured bundle data. Every field — including nested objects and arrays — became addressable by name. Auto-mapping was disabled. Every field mapping was explicit.
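In Python terms — a sketch of the concept, not Make™'s actual module internals — the parse-then-explicit-map step looks like this. The destination field names are hypothetical ATS fields; the point is that every mapping is declared against a verified source key, with nothing inferred:

```python
import json

# Raw webhook body, as text, before the JSON Parse step
raw = ('{"candidateId": "c-1042", "firstName": "Jane", "lastName": "Doe", '
       '"email": "jane@example.com", "phone": "555-0100"}')

payload = json.loads(raw)  # the JSON Parse module: text -> structured data

# Explicit field map: destination ATS field -> source key (no auto-discovery)
FIELD_MAP = {
    "candidate_id": "candidateId",
    "first_name": "firstName",
    "last_name": "lastName",
    "email": "email",
    "phone": "phone",
}

# KeyError here is a feature: a missing source field fails loudly at build
# time instead of writing an incomplete record silently.
ats_record = {dest: payload[src] for dest, src in FIELD_MAP.items()}
```

A renamed or missing source key raises immediately rather than producing a silent null — the opposite of the original build's failure mode.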

Module 2 — Dot-Notation for Nested Objects

The currentRole nested object was accessed using dot-notation references: currentRole.title, currentRole.employer, currentRole.startDate. These mapped directly to flat ATS fields with no iterator needed because the object was a single record, not a list.
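A small sketch of the same idea in Python, with a hypothetical `resolve` helper standing in for Make™'s dot-notation reference syntax:

```python
from functools import reduce

# Hypothetical payload fragment with the nested currentRole object
payload = {
    "currentRole": {
        "title": "Senior Recruiter",
        "employer": "Acme Staffing",
        "startDate": "2021-03-01",
    },
}

def resolve(data, path):
    """Walk a dot-notation path like 'currentRole.title' into a nested object."""
    return reduce(lambda obj, key: obj[key], path.split("."), data)

# One record, not a list -- no iterator needed, just direct field access
flat = {
    "current_title": resolve(payload, "currentRole.title"),
    "current_employer": resolve(payload, "currentRole.employer"),
    "current_start": resolve(payload, "currentRole.startDate"),
}
```

Because `currentRole` is a single object rather than an array, three direct references replace what would otherwise need an iterator.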

Module 3 — Iterator Per Nested Array

Each of the three variable-length arrays (skills[], education[], workHistory[]) fed into its own iterator. The iterator emitted one bundle per array item, regardless of array length. A route filter immediately downstream checked for empty arrays — zero-item arrays were routed to a no-op path rather than triggering a downstream error. This is the empty-array guard that production scenarios require.
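The iterator-plus-guard pattern can be sketched as a single Python function (hypothetical helper, standing in for the Make™ iterator module and its downstream route filter):

```python
def iterate_with_guard(payload, array_key):
    """Emit one bundle per array item; empty or missing arrays route to a no-op."""
    items = payload.get(array_key) or []
    if not items:
        return []  # no-op path: nothing written, no downstream error raised
    return [{"index": i, **item} for i, item in enumerate(items)]

# Hypothetical payload: a populated array, an empty one, and a missing key
payload = {
    "skills": [{"name": "Python"}, {"name": "SQL"}],
    "education": [],
}

skill_bundles = iterate_with_guard(payload, "skills")       # two bundles
edu_bundles = iterate_with_guard(payload, "education")      # guarded no-op
work_bundles = iterate_with_guard(payload, "workHistory")   # missing key, also guarded
```

The same function handles 2 items or 50 without modification, which is what made the rebuilt scenario indifferent to array length.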

For deeper implementation detail on iterator configuration, the guide on Make™ iterator and aggregator for complex HR data covers the full configuration pattern.

Module 4 — Field Validation Before Write

Before any data wrote to the ATS, a filter module validated that required fields were non-null and correctly typed. The compensation field mismatch (annualCompensation → expectedSalary) was mapped explicitly, and the filter blocked the write entirely if the value was null rather than allowing a $0 fallback to reach the ATS. This validation layer is the structural equivalent of what mapping resume data to ATS custom fields requires for any field that carries financial or compliance implications.
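The validation filter reduces to a few lines of Python (a conceptual sketch; the required-field list and type rules here are hypothetical):

```python
# Required destination fields and the types they must carry to pass the filter
REQUIRED = {"email": str, "expectedSalary": (int, float)}

def passes_filter(record):
    """Block the ATS write unless required fields are non-null and correctly typed."""
    for field, expected_type in REQUIRED.items():
        value = record.get(field)
        if value is None or not isinstance(value, expected_type):
            return False
    return True

# annualCompensation -> expectedSalary is mapped explicitly before the filter runs
source = {"email": "jane@example.com", "annualCompensation": 103000}
record = {
    "email": source["email"],
    "expectedSalary": source.get("annualCompensation"),
}

ok = passes_filter(record)
blocked = passes_filter({"email": "x@example.com", "expectedSalary": None})
```

A null salary fails the check and never reaches the ATS — the class of silent $103K-to-$130K error described earlier dies at this filter instead of in payroll.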

The error handling configuration followed the pattern described in error handling in Make™ for resilient workflows — every parse failure routed to a Slack alert with the raw payload attached, so the team could manually review and reprocess rather than silently lose data.

Results: What Changed After the Rebuild

The rebuilt scenario went live with a batch of twelve candidate applications as a controlled test. All twelve records wrote to the ATS with complete data — contact details, skills lists, education history, and work experience fully populated. The scenario ran in under four seconds per candidate.

Over the following month, the team processed more than 180 applications through the pipeline. Zero nested-field data loss. Zero compensation field null-writes. One error alert triggered — a malformed payload from a third-party job board — which routed correctly to Slack and was manually resolved in under ten minutes.

The operational impact:

  • 150+ hours/month reclaimed across the 3-person team (15 hrs/wk per recruiter eliminated from manual file processing)
  • Zero manual data reconstruction events post-launch, compared to the three-week backlog the original scenario created
  • Variable-length array handling validated across batch sizes from 5 to 47 applications with no scenario modification required
  • Compensation field accuracy confirmed on every record — the $0 fallback filter blocked two null-write attempts in the first week, both from incomplete application submissions

McKinsey Global Institute research on automation economics consistently shows that time reclaimed from manual data processing translates directly to capacity for higher-value work. For Nick’s team, the reclaimed hours shifted toward candidate relationship management — the activity that actually drives placement revenue.

Asana’s Anatomy of Work research finds that knowledge workers spend a significant share of their week on “work about work” rather than skilled work. For a recruiting team, manual data entry is the clearest example of that inefficiency. The 150+ hours reclaimed per month converted directly to recruiter-candidate contact time.

Lessons Learned: What We Would Do Differently

Three lessons from this rebuild apply to any HR automation project that touches structured data payloads:

1. Treat the JSON Schema Audit as a Deliverable, Not a Prerequisite

The original build skipped the schema audit because it felt like overhead. The rebuild treated the OpsMap™ data map as a formal deliverable — something that gets reviewed, versioned, and stored. When the ATS vendor updated their webhook schema six weeks later, the team caught the change immediately by comparing the new payload against the documented schema. Without that baseline document, the change would have gone unnoticed until data quality degraded again.
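The schema comparison that caught the vendor change is mechanically simple. A sketch, using the documented top-level keys from the OpsMap™ data map (the drift payload below is hypothetical):

```python
# Baseline: the documented top-level keys from the OpsMap(TM) data map
DOCUMENTED = {
    "candidateId", "firstName", "lastName", "email", "phone",
    "currentRole", "skills", "education", "workHistory",
}

def schema_drift(payload):
    """Compare a live payload's keys against the documented baseline schema."""
    live = set(payload)
    return {
        "added": sorted(live - DOCUMENTED),     # new fields the build ignores
        "removed": sorted(DOCUMENTED - live),   # expected fields gone missing
    }

# Hypothetical vendor change: "phone" renamed, "linkedinUrl" introduced
drift = schema_drift({
    "candidateId": "c-2001", "firstName": "Sam", "lastName": "Lee",
    "email": "sam@example.com", "linkedinUrl": "https://example.com/in/sam",
    "currentRole": {}, "skills": [], "education": [], "workHistory": [],
})
```

Without the documented baseline to diff against, both changes would surface only as degrading data quality weeks later.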

2. Never Trust Auto-Mapping for Production Payloads

Auto-mapping is a discovery tool, not a production configuration. It is useful for quickly visualizing what fields a module exposes. It is not reliable for nested objects, variable-length arrays, or any field where a null value has financial, compliance, or candidate-experience implications. Every production mapping in Make™ should be set explicitly, with the source field name verified against the live payload schema. The guide on Make™ mapping functions for HR data transformation details the explicit-mapping workflow.

3. Build the Empty-Array Guard Before You Need It

The empty-array edge case does not appear in controlled tests. Test candidates always have skills listed. Production candidates sometimes do not. Building the guard after the first production error is a recoverable mistake, but it is an avoidable one. Every iterator in a production HR scenario should have an explicit route for zero-length arrays on day one. This is not optional error handling — it is baseline architecture.

SHRM data on HR technology adoption consistently points to data quality and integration reliability as the top barriers to automation ROI. The teams that close that gap earliest are the ones that treat data structure as an engineering concern from the first day of discovery.

The Takeaway for HR and Recruiting Teams

JSON, arrays, and objects are not developer concepts that HR professionals need to learn abstractly. They are the specific structural patterns that determine whether your Make™ automation surfaces complete, accurate candidate data — or silently drops the fields that matter most.

Nick’s team did not need to become developers. They needed a structured discovery process that documented the data landscape before build, and a four-module parsing stack that treated each data type according to its actual structure. That combination eliminated 150+ hours of monthly manual work and produced a scenario that has handled every variable payload the ATS has sent since launch.

The broader discipline that makes this possible — knowing where data integrity breaks down across the full HR automation stack — is what the parent guide on data filtering and mapping in Make™ for HR automation covers end to end.

For teams that have already built automation but suspect their data is incomplete, the starting point is not a rebuild — it is a schema audit. Map what you are sending, map what the destination system expects, and close every gap explicitly. The guide to eliminating manual HR data entry with automation outlines the audit workflow in practical terms.

For teams ready to scope a full pipeline, the recruiter productivity through Make™ data transformation case study and the unified HR data integration case study show what production-grade pipelines look like across the full HR tech stack.