How to Build a GDPR-Compliant HR Data Filter in Make: A Step-by-Step Guide

GDPR compliance in HR automation is not a policy exercise — it is an architecture decision. Every automated workflow that moves employee data is either enforcing privacy principles at the data layer or quietly violating them. The filter configuration inside your automation platform is where data minimization, purpose limitation, and consent gating either happen or do not. This guide walks through exactly how to build those controls inside your automation platform, step by step, so that non-compliant data never reaches a downstream system in the first place.

This is one specific application of the broader discipline covered in data filtering and mapping in Make for HR automation. If you are new to filter architecture, start there. This guide assumes you are building on that foundation and need to apply it to a regulated data environment.


Before You Start

Before you open a single scenario, make sure the following are in place.

  • Time: 2–4 hours for a single processing purpose; longer for a full HR data flow audit.
  • Access: Admin access to your automation platform and read access to your HRIS, ATS, and any relevant third-party processors.
  • Documentation: A current Record of Processing Activities (RoPA) or equivalent data inventory. If you do not have one, build it before configuring filters — you cannot enforce purpose limitation if you have not defined the purposes.
  • Legal alignment: Confirm the lawful basis for each processing activity with your Data Protection Officer (DPO) or legal counsel before encoding it into workflow logic. This guide covers technical implementation, not legal interpretation.
  • Risk: Misconfigured filters can silently pass non-compliant data or block legitimate processing. Test every scenario against non-compliant sample data before enabling it in production.

Step 1 — Audit and Classify Your HR Data Fields

You cannot filter what you have not catalogued. Before touching a single module, document every data field your HR scenarios handle and assign each a classification.

Use three tiers:

  • Standard personal data: Name, email, job title, employment start date, department.
  • Sensitive personal data: Salary, bank account details, national insurance or Social Security number, home address, date of birth.
  • Special-category data (GDPR Article 9): Health and medical records, racial or ethnic origin, religious beliefs, political opinions, trade union membership, genetic data, biometric data, sexual orientation data.

For each field, record:

  1. Which source system holds it.
  2. Which downstream systems currently receive it (by automation or manual process).
  3. The lawful basis for each transmission (contract, legal obligation, legitimate interest, consent, vital interest, public task).
  4. Whether the receiving system actually requires that field to perform its function.

That last question is the core of data minimization. Gartner research on data governance consistently finds that organizations transmit more data than downstream systems use — the excess is compliance exposure with no operational value.

Your audit output becomes the specification for your filter conditions. Do not skip it.
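The audit output lends itself to a machine-readable form. The sketch below is illustrative (the field names, systems, and bases are hypothetical examples, not a prescription); it encodes the four questions above per field so that minimization gaps, meaning a transmission with no lawful basis or a field the destination does not actually need, can be flagged programmatically:

```python
# Sketch of an audit inventory: one entry per field.
# All field names, systems, and bases below are hypothetical examples.
FIELD_INVENTORY = [
    {
        "field": "salary",
        "classification": "sensitive",  # standard | sensitive | special_category
        "source": "HRIS",
        "destinations": ["payroll"],
        "lawful_basis": {"payroll": "legal_obligation"},
        "required_by_destination": {"payroll": True},
    },
    {
        "field": "date_of_birth",
        "classification": "sensitive",
        "source": "HRIS",
        "destinations": ["payroll", "analytics"],
        "lawful_basis": {"payroll": "legal_obligation", "analytics": None},
        "required_by_destination": {"payroll": True, "analytics": False},
    },
]

def minimization_gaps(inventory):
    """Flag (field, destination) pairs with no lawful basis or no operational need."""
    gaps = []
    for entry in inventory:
        for dest in entry["destinations"]:
            if not entry["lawful_basis"].get(dest) or not entry["required_by_destination"].get(dest):
                gaps.append((entry["field"], dest))
    return gaps
```

Run against your real inventory, a non-empty result is your remediation list: every gap is either a field to stop transmitting or a lawful basis to document.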


Step 2 — Define One Processing Purpose Per Scenario

The single most common GDPR architecture failure in HR automation is the omnibus scenario: one workflow triggered by a new hire event that simultaneously pushes data to the HRIS, payroll, benefits portal, learning platform, and an analytics dashboard. Each destination has a different processing purpose, a different lawful basis, and a different set of required fields. Routing a single full data bundle to all of them is a purpose limitation violation encoded into your infrastructure.

The fix is structural: one scenario per processing purpose.

For a new hire event, that means separate scenarios for:

  • HRIS record creation (lawful basis: contract)
  • Payroll enrollment (lawful basis: legal obligation)
  • Benefits portal provisioning (lawful basis: contract)
  • Learning platform account creation (lawful basis: legitimate interest or contract)
  • Workforce analytics update (lawful basis: legitimate interest — anonymized or aggregated only)

Each scenario maps only the fields required for its specific destination. None shares data with the others. If a lawful basis changes or a consent record is withdrawn, you update or disable one scenario — not a tangled multi-branch workflow that affects five systems simultaneously.
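The per-purpose structure can be expressed as a small scenario registry. This is a sketch under assumed names (the purposes, bases, and field lists are examples); the point is that each payload is built only from the fields its purpose allows, so scenarios cannot share data by accident:

```python
# One scenario per processing purpose; each declares its own lawful basis
# and field map. Purposes, bases, and fields here are illustrative.
SCENARIOS = {
    "payroll_enrollment": {
        "lawful_basis": "legal_obligation",
        "fields": ["employee_id", "start_date", "salary", "bank_account_ref"],
    },
    "learning_platform": {
        "lawful_basis": "legitimate_interest",
        "fields": ["employee_id", "name", "email", "department"],
    },
}

def payload_for(purpose, hire_event):
    """Build a destination payload from only the fields this purpose allows."""
    spec = SCENARIOS[purpose]
    return {f: hire_event[f] for f in spec["fields"] if f in hire_event}
```

Disabling a purpose is then a one-key change, which mirrors the "update or disable one scenario" property described above.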

This structure also makes audits tractable. When a supervisory authority asks “what data did you transmit to System X and on what basis,” you have a single scenario with a defined field map and a documented lawful basis, not a branching workflow that requires forensic reconstruction.


Step 3 — Build a Data Minimization Filter

With purposes defined and scenarios separated, build the filter that enforces minimization inside each scenario.

A data minimization filter does two things: it blocks the scenario if required fields are absent or malformed, and it ensures only specified fields are mapped to the output module.

Part A — Field validation filter:

Add a Filter module immediately after your trigger. Configure conditions that require the fields your downstream system needs. For example, for a payroll enrollment scenario:

  • Employee ID — exists and is not empty
  • Employment start date — exists and is a valid date
  • Salary — exists and is numeric
  • Bank account reference — exists

If any condition fails, the scenario stops. No data proceeds to the payroll system. This is a compliance checkpoint, not an error — a failed filter on a malformed record is the system working correctly. Pair this with the error handling practices for Make HR operations so that blocked records are logged and reviewed rather than silently dropped.
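The same Part A conditions can be prototyped and regression-tested as code outside the Make UI. A minimal sketch, assuming the payroll field names listed above; returning the failure reasons supports the logging practice just mentioned:

```python
from datetime import date

def payroll_filter(bundle):
    """Return (passed, reasons). A False result stops the scenario.

    Field names mirror the payroll example above and are hypothetical."""
    reasons = []
    if not bundle.get("employee_id"):                         # exists, not empty
        reasons.append("employee_id missing")
    if not isinstance(bundle.get("start_date"), date):        # valid date
        reasons.append("start_date invalid")
    salary = bundle.get("salary")
    if isinstance(salary, bool) or not isinstance(salary, (int, float)):  # numeric
        reasons.append("salary not numeric")
    if not bundle.get("bank_account_ref"):                    # exists
        reasons.append("bank_account_ref missing")
    return (not reasons, reasons)
```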

Part B — Output field mapping:

In the output module (the HTTP request, app connector, or database write that sends data to your destination), map only the fields your audit confirmed are required. Do not pass the full data bundle and rely on the receiving system to ignore what it does not need. The transmission of excess fields is the violation — what the receiving system does with them is a separate and additional risk.
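The whitelist principle behind Part B fits in a few lines. A sketch using the hypothetical payroll field set from Part A: the output payload is built from an explicit field list, so excess fields never leave the scenario regardless of what the trigger delivered.

```python
# Explicit whitelist from the Step 1 audit; hypothetical payroll fields.
PAYROLL_FIELDS = {"employee_id", "start_date", "salary", "bank_account_ref"}

def output_payload(bundle):
    """Whitelist mapping: anything not audited is dropped here,
    before transmission, rather than trusted to the receiving system."""
    return {k: v for k, v in bundle.items() if k in PAYROLL_FIELDS}
```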

For guidance on precise field-level mapping techniques, the guide on essential Make.com™ filters for recruitment data covers the conditional logic patterns in depth.


Step 4 — Add a Consent Gate for Optional Processing

Any processing activity that relies on consent as its lawful basis requires a real-time consent check before the scenario runs. Consent must be freely given, specific, informed, and withdrawable — which means a consent flag captured at onboarding and never refreshed is not a reliable gate.

Build the consent gate as follows:

  1. Lookup step: Add an HTTP or app module at the start of the scenario that queries your HRIS or dedicated consent management system for the current consent status of the data subject. Use the employee ID or applicant ID from the trigger as the lookup key.
  2. Filter module: Immediately after the lookup, add a Filter module with a single condition: consent status equals “active” (or your equivalent positive value). If the condition is false — consent absent, withdrawn, or expired — the scenario stops.
  3. Do not cache consent values: The lookup must execute on every scenario run. Storing consent status as a variable from a previous run and reusing it defeats the purpose of real-time gating.
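The three rules above can be sketched as follows. The endpoint URL and response shape are assumptions; substitute your consent system's actual API. The lookup is a function call on every run, never a cached value:

```python
import json
import urllib.request

# Hypothetical consent endpoint; replace with your consent system's real API.
CONSENT_API = "https://consent.example.internal/api/status"

def _http_fetch(subject_id):
    """Query the consent system live (assumed JSON response with a 'status' key)."""
    with urllib.request.urlopen(f"{CONSENT_API}?id={subject_id}") as resp:
        return json.load(resp)

def consent_is_active(subject_id, fetch=None):
    """Real-time gate: look up consent on every run, never reuse a prior value.
    A non-'active' status (absent, withdrawn, expired) stops the scenario."""
    fetch = fetch or _http_fetch
    record = fetch(subject_id)
    return record.get("status") == "active"
```

The injectable `fetch` parameter exists so the gate itself can be regression-tested without a live consent system.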

SHRM research on HR compliance consistently identifies consent management as an area where manual processes introduce lag between withdrawal and cessation of processing. Automating the consent gate shrinks that lag to a single execution cycle: withdrawal in your consent system stops processing in your automation platform on the very next run.


Step 5 — Mask or Anonymize Special-Category Fields

For scenarios that must reference special-category data for a documented purpose (health data for occupational health compliance, for example), the data should be masked, pseudonymized, or anonymized before reaching any analytical or reporting destination.

Inside your automation platform, implement masking upstream of any transmission module:

  • Direct identifier stripping: Use a Set Variable module to create a version of the data bundle with name, email, and employee ID replaced with null values or a hash before the data is passed to the analytics module.
  • Special-category field blocking: Add a Filter condition that checks for a “lawful basis confirmed” flag on the specific record. If the flag is absent or set to false, the scenario stops before the special-category field is transmitted anywhere.
  • RegEx-based data cleaning: For free-text fields that may inadvertently contain special-category data (open-text survey responses, notes fields), use pattern matching to detect and redact known sensitive formats before transmission. The RegEx-based HR data cleaning guide covers implementation patterns for this.
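A sketch of the first and third techniques, with hypothetical field names and a UK National Insurance number as the example sensitive format. Note that the salt should be a managed secret, and that hashing identifiers is pseudonymization, not anonymization:

```python
import hashlib
import re

DIRECT_IDENTIFIERS = ("name", "email", "employee_id")
# Example pattern only: UK National Insurance number. Extend per your data.
NI_PATTERN = re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b")

def pseudonymize(record, salt="replace-with-managed-secret"):
    """Replace direct identifiers with salted hashes before an analytics output.
    This is pseudonymization: GDPR still applies to the result."""
    out = dict(record)
    for field in DIRECT_IDENTIFIERS:
        if out.get(field):
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:12]
    return out

def redact_free_text(text):
    """Redact known sensitive formats from free-text fields pre-transmission."""
    return NI_PATTERN.sub("[REDACTED]", text)
```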

A note on anonymization: stripping a name and email from a record does not guarantee anonymization under GDPR if the remaining fields (department, role, age band, location) create a combination that re-identifies the individual in a small team. Consult your DPO on what constitutes genuine anonymization versus pseudonymization in your specific context.


Step 6 — Configure an Audit Log Step

GDPR’s accountability principle requires that you be able to demonstrate compliance — not just claim it. Your automation platform’s native execution history is not sufficient for a formal audit trail. It is too granular in some ways (storing actual data values) and not structured enough in others (not queryable by your DPO without platform access).

Add a dedicated audit log module as the final step in every compliance-sensitive scenario:

  • What to log: Timestamp, scenario name, trigger event ID, destination system, list of field names transmitted (not values), lawful basis invoked, and data subject identifier (employee ID or applicant ID — not name or email, to avoid logging personal data in your compliance log unnecessarily).
  • Where to log it: A Google Sheet, database table, or dedicated compliance platform that your DPO can access independently of your automation platform account.
  • What not to log: Do not write actual personal data values (salary figures, health information) into your audit log. The log should confirm that a transmission occurred and on what basis — not reproduce the data itself.
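The entry structure can be sketched directly from the "what to log" list above; field names, not values, and a subject ID rather than a name. The keys here are assumptions that mirror that list:

```python
from datetime import datetime, timezone

def audit_entry(scenario, event_id, destination, field_names, lawful_basis, subject_id):
    """Structured audit row: records WHICH fields were sent and on what basis,
    never the data values themselves."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "scenario": scenario,
        "event_id": event_id,
        "destination": destination,
        "fields": ";".join(sorted(field_names)),  # names only, no values
        "lawful_basis": lawful_basis,
        "subject_id": subject_id,  # employee/applicant ID, not name or email
    }
```

A dict-shaped row like this writes cleanly to a Google Sheet append, a database insert, or a compliance platform API, whichever destination your DPO can reach independently.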

Parseur’s Manual Data Entry Report estimates that human error in manual data handling costs organizations an average of $28,500 per employee per year in rework and downstream corrections. Automated, structured audit logging removes that human-error vector from compliance record-keeping.


Step 7 — Test with Non-Compliant Sample Data

Filter logic that has never been tested against data it should reject is not a compliance control — it is an assumption. Before enabling any GDPR-sensitive scenario in production, run a structured test battery.

Test cases to run:

  • A data bundle with a missing required field — verify the scenario stops at the minimization filter.
  • A data bundle for a data subject with withdrawn consent — verify the scenario stops at the consent gate.
  • A data bundle containing a special-category field without a lawful basis flag — verify the field is blocked or the scenario stops before transmission.
  • A full compliant data bundle — verify all fields reach the destination correctly and the audit log entry is created.
  • A data bundle with excess fields beyond what is required — verify those fields do not appear in the destination system’s received payload.
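The battery above can be automated so it reruns after every scenario change. A sketch that takes your scenario's filter, consent gate, and payload-mapping functions as inputs (the function names and test IDs are illustrative) and returns a pass/fail map; every value must be True before go-live:

```python
def run_test_battery(minimization_filter, consent_gate, output_payload):
    """Each key is one rejection case from the battery; True means the
    control behaved correctly. Test IDs and fields are hypothetical."""
    full = {"employee_id": "E1", "salary": 50000, "health_note": "flu"}
    return {
        "missing_field_blocked": not minimization_filter({"employee_id": "E1"}),
        "withdrawn_consent_blocked": not consent_gate("E_WITHDRAWN"),
        "excess_field_dropped": "health_note" not in output_payload(full),
        "compliant_bundle_passes": bool(minimization_filter(full)),
    }
```

Persist each run's result map alongside a timestamp; that record is exactly the documented evidence discussed below.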

Document the test results. “Based on our testing” is a phrase that matters when you are demonstrating accountability to a supervisory authority. A documented test run that shows your filters rejected non-compliant data is evidence. An undocumented assumption is not.


How to Know It Worked

Your GDPR filter architecture is functioning correctly when all of the following are true:

  • A manual review of your audit log shows no special-category fields appearing in destination systems that lack a documented lawful basis for receiving them.
  • A withdrawal of consent in your HRIS stops the relevant scenario from executing on the next trigger cycle — testable by withdrawing a test employee’s consent and running the scenario manually.
  • Your DPO can independently query the audit log and reconstruct every data transmission event for any given employee for the past 12 months without requiring access to your automation platform.
  • A spot-check of three destination systems confirms that only the fields specified in your field map for that purpose are present in received records.
  • Your non-compliant test data still fails the filter conditions after any scenario update — regression testing is ongoing, not one-time.

Common Mistakes and How to Avoid Them

Mistake 1 — Treating filter configuration as a one-time task

HR data flows change constantly: new systems are added, processing purposes expand, and lawful bases evolve with regulatory guidance. Schedule a quarterly review of every compliance-sensitive scenario against your current RoPA. Scenarios built for one purpose can silently acquire new purposes as the business adds integrations.

Mistake 2 — Relying on downstream systems to enforce minimization

The transmission is the event. Sending excess data to a system that then discards it does not satisfy data minimization — the transfer itself created the exposure. Filter before you transmit.

Mistake 3 — Checking consent once at intake and never again

Consent is revocable. A real-time lookup on every execution is not optional for consent-dependent processing. Build the lookup into the scenario — not into a periodic batch refresh that introduces a processing window after withdrawal.

Mistake 4 — Conflating pseudonymization with anonymization

Replacing a name with a hash is pseudonymization if the hash can be reversed or the record can be re-identified through other fields. GDPR still applies to pseudonymized data. True anonymization requires irreversibility and resistance to re-identification — a higher bar that requires DPO sign-off on the specific technique and context.

Mistake 5 — Not logging filter rejections

A scenario that stops at a filter condition has performed a compliance action. That action should appear in your audit log as a rejection event with the reason. Without this, you cannot demonstrate that your controls are actively functioning — only that they exist in theory.


Next Steps

With your GDPR filter architecture in place, the natural next expansions are broader data integrity controls and system integration governance. The guide on clean HR data workflows covers data quality enforcement beyond compliance contexts. For teams managing data flows across multiple systems, connecting ATS, HRIS, and payroll in Make addresses integration architecture at the stack level. And for the broader framework these controls sit within, logic-driven HR automation workflows covers conditional branching patterns that apply across your entire HR automation program.

GDPR compliance in automated HR workflows is not a configuration you complete once. It is an ongoing practice of auditing, testing, and updating the data layer as your systems and processing purposes evolve. The architecture described here gives you the structural foundation — maintaining it is what keeps that foundation sound.