How to Use Iterator and Aggregator in Make for HR Data Transformation
Every HR automation pipeline eventually hits the same wall: a module returns 50 employee records in one bundle, and the next step needs to act on each record individually. Or the reverse — you’ve processed 40 individual résumés and need to deliver one consolidated report. Flat mapping won’t solve either problem. Iterator and Aggregator will. This guide walks through exactly how to configure both modules in Make™ for HR use cases — step by step, with the configuration details that matter and the mistakes that will cost you hours if you skip them. It’s a direct drill-down into one of the most important mechanics covered in the parent guide on data filtering and mapping in Make for HR automation.
Before You Start
Iterator and Aggregator are intermediate-to-advanced Make™ mechanics. Before building, confirm you have these prerequisites in place.
- Tools: An active Make™ account, at least one live scenario with a trigger already configured, and access to the HR system (ATS, HRIS, or inbox) that generates the bulk data you intend to process.
- Data structure knowledge: You need to know whether your source module outputs an array (a list of items inside square brackets in the bundle inspector) or a single object. Iterator only works on arrays. If you’re unsure, review JSON arrays and objects in Make HR automation first.
- Time: Allow 60–90 minutes for a first build; 20–30 minutes once the pattern is familiar.
- Risk: Running an Iterator loop against a live production system without testing first can trigger duplicate writes, notification floods, or API rate-limit errors. Always test against a sandboxed or sample dataset.
- Operations budget: Each bundle the Iterator emits consumes at least one Make™ operation. A loop over 100 records with three modules inside consumes at least 300 operations. Check your plan’s monthly operation cap before scaling.
Step 1 — Identify Your Bulk Data Source and Confirm It Outputs an Array
Before placing an Iterator, you must confirm the upstream module returns an array. Run the scenario manually, open the bundle inspector on the source module’s output, and look for square brackets surrounding the data. If you see [ { }, { }, { } ], you have an array. If you see a single { } object, the Iterator has nothing to split.
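The same array-versus-object distinction can be checked outside Make with a few lines of Python. This is a conceptual sketch, not Make itself: a JSON array parses to a Python list, a single object parses to a dict, and only the list shape gives the Iterator anything to split.

```python
import json

# An array of records: what the Iterator needs.
array_response = '[{"name": "Ada"}, {"name": "Grace"}]'
# A single object: nothing for the Iterator to split.
object_response = '{"name": "Ada"}'

def is_iterable_array(raw: str) -> bool:
    """True when the parsed payload is a JSON array (a Python list)."""
    return isinstance(json.loads(raw), list)

print(is_iterable_array(array_response))   # True
print(is_iterable_array(object_response))  # False
```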
Common HR bulk data sources that output arrays:
- ATS API endpoints that return all candidates for a given job posting
- HRIS report exports returning all active employees
- Email trigger modules returning all file attachments on a single message
- Spreadsheet modules reading all rows from an onboarding tracker
- Survey platforms returning all responses for a performance review cycle
McKinsey Global Institute research finds that organizations processing structured data at scale reduce manual reconciliation time significantly — but only when the data pipeline is engineered to handle collections correctly from the start. Flat mapping a 50-record array into a single downstream field is the most common pipeline failure point we see in HR scenario audits.
Once you’ve confirmed your array, note the exact field name that contains it. You’ll map this field into the Iterator in the next step.
Step 2 — Insert and Configure the Iterator Module
Place the Iterator immediately after the module that outputs your array. Do not insert any other modules between the source and the Iterator — the Iterator must receive the collection directly.
Configuration steps:
- Click the + icon after your source module and search for “Iterator.”
- In the Iterator’s Array field, open the mapping panel and select the array field from your source module’s output. For an ATS API response, this might be candidates[] or data.results[].
- Save the module.
- Run the scenario with a small test dataset (three to five records). Open the Iterator’s output in the bundle inspector and verify you see one bundle per record — not one bundle containing all records.
If the Iterator emits a single bundle instead of multiple, your upstream module is not outputting a true array. Use a Set Variable or Array module to reshape the data before the Iterator, or review the source module’s pagination settings — some HR APIs paginate results and won’t return a full array without a pagination loop.
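The reshaping step can be sketched in Python, standing in for whatever your Set Variable or Array module would do. The "results" wrapper key here is hypothetical; substitute whatever key your API actually uses.

```python
def to_record_list(payload):
    """Normalize a payload into a list that a per-record loop can split.

    - already a list: pass it through
    - object wrapping a list under a known key: unwrap it
    - bare single object: wrap it in a one-element list
    """
    if isinstance(payload, list):
        return payload
    if isinstance(payload, dict) and isinstance(payload.get("results"), list):
        return payload["results"]  # "results" is a hypothetical wrapper key
    return [payload]

print(to_record_list({"results": [{"id": 1}, {"id": 2}]}))  # unwrapped: two records
print(to_record_list({"id": 3}))                            # wrapped: one-record list
```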
This is also the point to wire in any filters. A filter placed between the Iterator and the next module acts on each individual bundle, letting you exclude records that don’t meet criteria — for example, filtering out candidates below a minimum qualification threshold before the records hit your ATS. The guide on essential Make filters for recruitment data covers filter configuration in detail.
Step 3 — Build Per-Record Logic Inside the Iterator Loop
Everything between the Iterator and the Aggregator executes once per bundle — meaning once per record in your original array. This is where you build all the processing logic that needs to happen at the individual-record level.
Typical per-record modules for HR workflows:
- Field mapping: Transform raw API fields into the exact format your ATS or HRIS expects. For a deep dive on this, see mapping résumé data to ATS custom fields.
- Conditional Router™: Branch logic based on per-record attributes — seniority level, department, location, employment type. The Router™ module splits each bundle into multiple paths, each with its own subsequent modules. See routing complex HR data flows with Make routers for full configuration.
- API writes: Create or update individual records in your HRIS, ATS, or payroll system. Each write happens atomically per record, which prevents the all-or-nothing failure modes that bulk API calls introduce.
- Error handlers: Wrap critical per-record steps in an error handler so a single bad record doesn’t halt the entire loop. The guide on error handling in Make for resilient HR workflows shows the exact configuration.
- Data enrichment: Call a secondary API to add context to each record — for example, pulling a candidate’s LinkedIn headline or verifying a phone number format — before writing to the destination system.
Parseur’s Manual Data Entry Report found that manual data entry errors cost organizations roughly $28,500 per employee per year in rework and error correction. Per-record processing inside an Iterator loop eliminates the class of errors that comes from batch-writing misaligned fields — the same category of error that cost David $27,000 when an ATS-to-HRIS transcription mistake turned a $103,000 offer into a $130,000 payroll entry.
Keep the loop lean. Every module inside the loop multiplies your operation count by the number of records. If a step can happen outside the loop (before the Iterator or after the Aggregator), move it outside.
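Conceptually, the Iterator-Aggregator sandwich behaves like a plain loop, and the operation math falls straight out of it. The sketch below uses made-up records and a made-up module count; it illustrates the cost model, not Make's runtime.

```python
candidates = [
    {"name": "Ada", "score": 91},
    {"name": "Grace", "score": 78},
    {"name": "Alan", "score": 85},
]

MODULES_IN_LOOP = 3  # e.g. map fields, enrich, write to ATS

processed = []
operations = 0
for record in candidates:          # Iterator: one pass per bundle
    operations += MODULES_IN_LOOP  # each in-loop module costs one operation per bundle
    processed.append({"name": record["name"], "passed": record["score"] >= 80})

digest = processed                 # Aggregator: one bundle out, however many went in
print(operations)                  # 9 operations for 3 records x 3 modules
```

Moving a module out of the loop removes one operation per record, which is why a 200-record run rewards a lean loop so heavily.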
Step 4 — Select the Correct Aggregator Type and Set Source Module
When your per-record processing is complete, place the Aggregator immediately after the last module in the loop. Make™ offers four Aggregator types — choose based on what the downstream module expects to receive.
| Aggregator Type | Output Format | Best HR Use Case |
|---|---|---|
| Text Aggregator | Single concatenated string | Recruiter digest emails, Slack notifications listing all processed candidates |
| Array Aggregator | JSON array | Batch API writes, dashboard data pushes, data warehouse inserts |
| Numeric Aggregator | Sum, average, min, or max | Performance score roll-ups, headcount totals, time-to-fill averages |
| Table Aggregator | HTML or CSV-formatted table | Payroll validation reports, onboarding status tables sent to HR leadership |
Critical configuration: Source Module. Inside the Aggregator’s settings panel, the Source Module dropdown defines where Make™ should consider the loop to have started. Set this to your Iterator module — not the trigger, not the source API module. If you set it to the wrong module, Make™ will aggregate bundles from outside your intended loop, producing duplicate or incomplete outputs. This is the single most common Aggregator misconfiguration in client scenario audits.
Text Aggregator separator: If you’re building a digest email or Slack message, configure the separator character (typically a newline \n or a double newline) to control how individual record lines are spaced in the output.
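The separator's effect is easy to picture as a string join. This is a sketch of the output shape the Text Aggregator produces, not its internals; the record lines are invented examples.

```python
lines = [
    "Ada Lovelace | Senior Engineer | advanced",
    "Grace Hopper | Team Lead | advanced",
    "Alan Turing | Analyst | screening",
]

# "\n" gives a compact digest; "\n\n" inserts a blank line between records.
compact = "\n".join(lines)
spaced = "\n\n".join(lines)

print(compact)
```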
Array Aggregator field mapping: In the Array Aggregator, you’ll map which fields from each processed bundle to include in the output array. Map only the fields the downstream system needs — not the entire bundle — to keep payload sizes manageable and API responses clean.
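Mapping only the needed fields amounts to projecting each processed bundle down to a subset, sketched here with hypothetical field names:

```python
processed_bundles = [
    {"name": "Ada", "email": "ada@example.com", "score": 91, "raw_resume_text": "..."},
    {"name": "Grace", "email": "grace@example.com", "score": 78, "raw_resume_text": "..."},
]

WANTED = ("name", "email", "score")  # only what the downstream system needs

# The aggregated array carries the mapped fields and drops the bulky rest.
payload = [{field: bundle[field] for field in WANTED} for bundle in processed_bundles]
```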
Step 5 — Map Aggregator Output to Downstream Systems
The Aggregator emits a single bundle regardless of how many records were processed. That single bundle is what every downstream module sees. Wire it to the appropriate endpoint.
Common downstream destinations for HR Iterator-Aggregator pipelines:
- Email or messaging module: Send the Text Aggregator’s concatenated string as a single recruiter digest covering all processed candidates in one run.
- HTTP module (batch API write): POST the Array Aggregator’s JSON payload to a bulk-insert endpoint in your HRIS or data warehouse — one API call instead of N individual calls.
- Google Sheets / Excel row: Write the Table Aggregator’s output to a reporting spreadsheet that HR leadership accesses for daily or weekly pipeline reviews.
- Database module: Insert an aggregated performance summary into a relational database table for long-term retention and trend analysis.
- Notification service: Push a Numeric Aggregator result — for example, average days-to-offer across all open roles — to a KPI dashboard or business intelligence tool.
Asana’s Anatomy of Work research found that workers spend a significant portion of their week on work about work — status updates, compiling reports, formatting data for stakeholders. An Aggregator that auto-compiles and delivers a structured HR report eliminates that entire category of manual assembly. APQC benchmarks similarly show that HR organizations that automate reporting workflows reduce administrative labor costs measurably versus those that rely on manual data consolidation.
For advanced reporting scenarios, see automating HR reports with advanced data export filters, which covers filter-to-Aggregator pipeline architecture for scheduled HR reporting.
Step 6 — Validate with a Small Test Dataset Before Production
Before running your Iterator-Aggregator scenario against live HR data, validate the full pipeline against three to five real records — small enough to inspect manually but representative enough to surface structural issues.
Validation checklist:
- Open the Iterator’s output in the execution log. Confirm the number of bundles emitted matches the number of records in your test dataset exactly.
- Inspect one individual bundle inside the loop. Confirm every field you mapped is present, correctly typed, and formatted for the downstream system.
- Open the Aggregator’s output. Confirm it emits exactly one bundle, regardless of how many records were processed.
- Verify the Aggregator’s output structure matches what the downstream module expects — JSON array for API writes, plain text for email, etc.
- Check operation count in the scenario’s execution details. Multiply by your expected production record volume and confirm it fits your monthly operation budget.
- Confirm error handlers fire correctly by temporarily mapping a bad value into a required field and verifying the error path activates instead of halting the scenario.
Once validation passes on the small dataset, run a medium-scale test (20–30 records) and inspect for edge cases: records with null fields, records where conditional routing takes an unexpected path, records where the Aggregator’s separator produces unexpected formatting. Gartner research on HR technology implementations consistently identifies inadequate pre-production testing as a primary driver of automation rollback. Test before you scale.
How to Know It Worked
A correctly functioning Iterator-Aggregator pipeline produces three observable outcomes:
- Bundle count in the Iterator equals record count in the source array. If your HRIS returned 47 employees, the Iterator should emit exactly 47 bundles. Inspect the execution log to confirm.
- Downstream module receives exactly one bundle from the Aggregator. Whether that bundle is a JSON array of 47 objects or a text string of 47 lines, it arrives as a single unit. If your email module sends 47 separate emails instead of one digest, the Aggregator is either missing or misconfigured.
- No manual reconciliation required after the run. The point of this architecture is to eliminate the copy-paste loop. If someone on your team still needs to verify, reformat, or re-enter any data after the scenario runs, identify which step produced inconsistent output and add a filter or transformation before it reaches the Aggregator.
Common Mistakes and How to Fix Them
Mistake 1: Source Module Set to Trigger Instead of Iterator
Symptom: Aggregator output contains far more records than expected, or the scenario runs indefinitely. Fix: Open the Aggregator configuration and change Source Module from the trigger to the Iterator module. Save and re-run.
Mistake 2: Iterator Placed After a Non-Array Module
Symptom: Iterator emits one bundle containing the entire data structure instead of one bundle per record. Fix: Confirm the upstream module’s output is an array in the bundle inspector. If it outputs a single object containing an array, use the Iterator’s Array field to navigate to the nested array using dot notation (e.g., data.candidates).
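Dot notation simply walks into the nested structure. In Python terms, with hypothetical keys:

```python
response = {
    "status": "ok",
    "data": {
        "candidates": [  # the nested array the Iterator actually needs
            {"name": "Ada"},
            {"name": "Grace"},
        ]
    },
}

# Make's "data.candidates" path is equivalent to chained key lookups:
candidates = response["data"]["candidates"]
print(len(candidates))  # 2
```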
Mistake 3: Modules Outside the Loop That Should Be Inside
Symptom: All records receive the same value for a field that should vary per record (e.g., all candidates get the same status label). Fix: Move the mapping module to inside the loop, between the Iterator and the Aggregator, so it executes with per-record context.
Mistake 4: No Error Handler Inside a Long Loop
Symptom: One bad record at position 23 of 200 halts the entire scenario, leaving 177 records unprocessed with no notification. Fix: Add a Make™ error handler (Break, Resume, or Rollback) to the most failure-prone module inside the loop. See error handling in Make for resilient HR workflows for exact setup.
Mistake 5: Not Accounting for API Pagination
Symptom: Iterator processes only the first page of results (typically 25–100 records) even though hundreds more exist. Fix: Enable pagination in the source module’s settings and configure the maximum number of results to retrieve. Without pagination, your Iterator loop is silently incomplete.
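The fix is equivalent to looping page fetches until the API stops returning records. A sketch against a stand-in fetch_page function, not a real ATS client:

```python
# Stand-in for a paginated HR API: 130 fake records, 50 per page.
ALL_RECORDS = [{"id": i} for i in range(130)]
PAGE_SIZE = 50

def fetch_page(page: int):
    """Hypothetical API call returning one page of results."""
    start = page * PAGE_SIZE
    return ALL_RECORDS[start:start + PAGE_SIZE]

records = []
page = 0
while True:
    batch = fetch_page(page)
    if not batch:            # empty page: no more results
        break
    records.extend(batch)
    page += 1

print(len(records))  # 130, not just the first page's 50
```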
Putting It Together: The Full HR Résumé Batch Pipeline
Here’s how a complete Iterator-Aggregator scenario looks for a high-volume résumé processing workflow — the same pattern that enabled Nick’s recruiting team to reclaim 150+ hours per month across three recruiters:
- Trigger: Email Watch module fires when a new message arrives in the résumés inbox.
- Iterator: Splits the email’s attachments[] array — each PDF résumé becomes one bundle.
- Parser module: Extracts structured fields (name, email, phone, skills, experience years) from each individual PDF bundle.
- Filter: Routes only candidates meeting minimum experience criteria forward; discards the rest with a notification to the recruiter.
- Router™: Branches by seniority — senior candidates route to one ATS pipeline, junior candidates to another.
- ATS write module: Creates an individual candidate record in the ATS for each bundle that passes routing.
- Array Aggregator: Reassembles all processed candidate records into a single JSON payload. Source Module set to the Iterator.
- Slack/Email module: Delivers the aggregated candidate digest to the recruiting team — one message, all candidates, zero manual compilation.
This architecture directly addresses what the parent pillar establishes: HR automation breaks at the data layer, not the AI layer. Iterator and Aggregator are data-layer mechanics. They enforce structure, eliminate manual handling, and produce the clean, consistent outputs that downstream reporting and compliance processes depend on. For teams ready to extend this pipeline into eliminating manual HR data entry entirely, the Iterator-Aggregator pattern is the foundation everything else is built on.
The eight essential Make™ modules that complement this pattern — including routers, data stores, and HTTP modules — are catalogued in the guide to eight essential Make modules for HR data transformation. Start there to see where Iterator and Aggregator fit in the broader HR automation architecture.