How a 12-Person HR Team Built a Repeatable Data Portability Response System That Passed GDPR Audit
Case Snapshot
| Item | Detail |
|---|---|
| Organization | Mid-market regional employer, ~400 employees across three EU member states and two U.S. states (GDPR and CCPA dual exposure) |
| HR Team Size | 12 HR staff, no dedicated privacy officer at program start |
| Baseline Problem | Data portability requests handled manually, ad hoc — average response time 22 days, no documented scope policy, machine-readable output not configured |
| Constraints | Seven siloed data systems, no unified data map, legal support available only part-time |
| Approach | Cross-system data mapping → scope matrix → automated intake and fulfillment workflow → machine-readable export configuration |
| Outcome | Average response time reduced to under 4 days; passed subsequent GDPR supervisory authority review with zero findings on portability handling |
Employee data portability is one of the most operationally demanding rights in the GDPR and CCPA frameworks — and one of the most consistently mishandled. Most HR teams treat portability requests the way they once treated FOIA requests: as rare events managed through improvisation. That approach was defensible a decade ago. It is not defensible now.
This case study documents how a dual-jurisdiction HR operation rebuilt its data portability response from the ground up — not by hiring a privacy attorney on retainer, but by building a repeatable system with the resources already inside the team. The result cleared a GDPR supervisory authority review and became the operational template for the organization’s broader secure HR data compliance and privacy framework.
Context and Baseline: What “Ad Hoc” Actually Looks Like
The organization’s portability problem wasn’t visible until it became a liability. At baseline, the HR team had no documented procedure for handling data subject access or portability requests. When requests arrived — typically via email — they were routed to whoever happened to be available, investigated manually across seven separate systems, and compiled into PDF exports that the HRIS dashboard produced by default.
The seven systems holding employee personal data were:
- Core HRIS (active employee records)
- Payroll platform (compensation history, tax documents)
- Applicant tracking system (ATS), including records for candidates who became employees
- Learning management system (LMS) with training completion records
- Performance management tool (manager ratings, goal records, development notes)
- Benefits administration platform
- Offboarded legacy HRIS from a prior vendor migration (containing records for employees hired before the platform switch)
No single team member had access to all seven systems. No data map existed that showed which system held which categories of employee data. The scope of a portability request — which data was legally in scope versus exempt — was decided differently by each coordinator who handled an incoming request.
Average response time at baseline: 22 days. The GDPR window is 30 days. The team was operating with an 8-day buffer — which evaporated instantly on any request requiring access to the legacy HRIS or a scope question requiring legal input. SHRM research confirms that manual, siloed HR data processes routinely produce compliance lag that organizations don’t measure until they’re forced to.
The trigger for change was a formal data subject request from a former employee in Germany, combined with an internal legal review that flagged the PDF-only export format as non-compliant with GDPR’s machine-readable standard. Legal counsel confirmed: the team’s current process would not survive a supervisory authority inquiry.
Approach: Three Problems That Had to Be Solved in Order
Three structural defects drove the compliance risk, and they had to be addressed in sequence — because each one was a prerequisite for the next.
Problem 1 — No Data Map
You cannot fulfill a portability request for data you cannot locate. Before any workflow could be designed, the team needed a complete inventory of what employee data existed, which system held it, what format it was in, and who had access to retrieve it. This was not a technology project — it was a documentation project.
Problem 2 — No Scope Policy
GDPR Article 20 applies to personal data “provided by” the individual that is processed on the basis of consent or contract. Employer-generated data — manager performance notes, internally calculated ratings, inferred analytics — is generally outside scope. But “generally” is not good enough for a regulatory defense. The team needed a documented scope matrix, signed off by legal counsel, that established which data categories were in scope, which were exempt, and how to handle ambiguous cases. Without this, every request triggered a new internal debate that consumed days of the response window.
Problem 3 — No Machine-Readable Export
The HRIS default export was PDF. PDF does not satisfy GDPR’s machine-readable requirement. The payroll platform, the LMS, and the ATS all had CSV export capability — but it was not enabled or configured for this use case. The format gap was, in legal terms, a standing violation on every request fulfilled to date.
Implementation: What the Team Actually Built
Phase 1 — Cross-System Data Map (Weeks 1–3)
The data mapping exercise was conducted system by system, using a shared spreadsheet maintained by HR operations with input from IT and part-time legal support. For each of the seven systems, the team documented:
- Data categories present (e.g., “compensation history,” “training completions,” “manager performance ratings”)
- Whether each category was “provided by the employee” or “generated by the employer”
- The available export format (CSV, JSON, PDF, API, none)
- Who held system access and who could execute a data pull
- Retention period and whether archived or offboarded records were accessible
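The spreadsheet fields above can be sketched as structured records. The following is a minimal illustration in Python; the field names and example rows are assumptions for illustration, not the organization's actual map:

```python
import csv
import io

# Illustrative sketch of the data-map spreadsheet described above.
# Field names and the example rows are assumptions, not the organization's map.
DATA_MAP_FIELDS = [
    "system",
    "data_category",
    "provenance",       # "employee" (provided by) or "employer" (generated by)
    "export_format",    # CSV, JSON, PDF, API, or none
    "system_owner",     # who holds access and can execute a data pull
    "retention_note",
]

rows = [
    {"system": "Payroll platform", "data_category": "compensation history",
     "provenance": "employee", "export_format": "CSV",
     "system_owner": "payroll-admin", "retention_note": "7 years post-termination"},
    {"system": "Performance tool", "data_category": "manager ratings",
     "provenance": "employer", "export_format": "PDF",
     "system_owner": "hr-ops", "retention_note": "active employment + 2 years"},
]

# Emitting CSV keeps the map reviewable in any spreadsheet tool.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=DATA_MAP_FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Keeping the map in a flat, exportable format matters: the same records later drive the scope matrix and the fulfillment checklist.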
The legacy HRIS presented the most significant challenge. It was still hosted but no longer actively maintained, and export functionality had not been tested since the migration. IT confirmed that CSV export was technically available but required manual database query execution — a 45-minute process per request. This was logged as a known constraint requiring a longer-term remediation plan.
The data map also surfaced a retention problem: former employee records in the ATS were being retained indefinitely, with no deletion trigger tied to the organization’s own HR data retention policy. This was flagged separately for remediation but logged in the data map as a risk note.
Phase 2 — Scope Matrix and Legal Sign-Off (Weeks 3–4)
With the data map complete, legal counsel worked through each data category and assigned a scope determination: In Scope (IS), Out of Scope (OOS), or Requires Review (RR). The output was a two-page reference document formatted as a decision table.
Key determinations:
- In scope: Payroll records, benefits enrollment data, application materials submitted by the candidate, training completion records, contract documents signed by the employee
- Out of scope: Manager performance narrative notes, internally calculated performance ratings, HR investigation notes, anonymized analytics outputs
- Requires Review: 360-degree feedback (visible to employee during employment), goal-tracking data with mixed employer/employee inputs
The Requires Review category was assigned a 48-hour legal consultation SLA within the intake workflow — this prevented RR items from stalling the entire response. The scope matrix was the single document that did the most to shorten fulfillment time: coordinators stopped making individual judgment calls on scope and started applying a standing policy.
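A scope lookup against such a matrix takes only a few lines. The determinations below mirror the examples in the text, but the dictionary itself and the default-to-review rule are illustrative assumptions:

```python
# Minimal sketch of a scope-matrix lookup. Determinations mirror the
# examples in the text; the category names are illustrative.
SCOPE_MATRIX = {
    "payroll records": "IS",
    "benefits enrollment": "IS",
    "training completions": "IS",
    "manager performance notes": "OOS",
    "internally calculated ratings": "OOS",
    "360-degree feedback": "RR",  # triggers the 48-hour legal consultation SLA
}

def scope_for(category: str) -> str:
    """Return IS / OOS / RR; unknown categories default to RR for legal review."""
    return SCOPE_MATRIX.get(category.lower(), "RR")

print(scope_for("Payroll records"))        # IS
print(scope_for("biometric badge logs"))   # RR (not in matrix, so legal review)
```

Defaulting unknown categories to Requires Review rather than guessing is the code-level equivalent of the policy: ambiguity goes to legal, not to an individual coordinator.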
This scope matrix also directly informed how the team handled related request types, including right-to-erasure (data deletion) requests and GDPR right-to-rectification requests for HR records — all three workflows now drew from the same data map and scope logic.
Phase 3 — Intake and Fulfillment Workflow (Weeks 4–7)
The intake process was rebuilt around a structured webform that replaced the email-to-whoever process. The form collected:
- Requester name and contact information
- Employment status (current, former, applicant)
- Specific data categories requested (with checkboxes mapped to the scope matrix)
- Jurisdiction of residence (EU member state or California, to apply the correct regulatory window)
- Identity verification method (government ID upload or employee ID for current staff)
Form submission automatically created a timestamped case record and triggered a deadline alert set to day 20 — 10 days inside the GDPR window — to allow buffer for compilation and review. Both GDPR's 30-day response requirement and CCPA/CPRA obligations for HR teams depend on tracked intake timestamps; the form made this automatic rather than dependent on individual coordinator discipline.
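The deadline logic amounts to simple date arithmetic. The day-20 internal alert and the 30-day GDPR window come from the text; the 45-day CCPA statutory window is standard, and the function shape is an assumption:

```python
from datetime import date, timedelta

# Sketch of the deadline logic described above. Day-20 alert and the
# 30-day GDPR window come from the case; the function shape is illustrative.
STATUTORY_WINDOW_DAYS = {"gdpr": 30, "ccpa": 45}
INTERNAL_ALERT_DAY = 20

def deadlines(received: date, regime: str) -> dict:
    """Compute the internal alert date and the statutory deadline for a case."""
    return {
        "received": received,
        "internal_alert": received + timedelta(days=INTERNAL_ALERT_DAY),
        "statutory_deadline": received + timedelta(days=STATUTORY_WINDOW_DAYS[regime]),
    }

d = deadlines(date(2024, 3, 1), "gdpr")
print(d["internal_alert"])       # 2024-03-21
print(d["statutory_deadline"])   # 2024-03-31
```

Computing both dates at case creation is what removes coordinator discipline from the equation: the alert fires from the timestamp, not from someone remembering to set it.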
The fulfillment workflow was mapped as a checklist tied to the scope matrix: for each in-scope data category, a named system owner was responsible for executing the export and depositing it into a secure shared folder within 5 business days of case creation. Machine-readable format (CSV for tabular records, JSON for structured API exports) was specified as the output standard. PDF outputs were rejected at the compilation step.
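The PDF rejection at the compilation step can be expressed as a simple format gate. The extension-based check below is an assumption; a production gate might also inspect content types:

```python
from pathlib import Path

# Sketch of the compilation-step format gate: CSV and JSON pass, PDF is
# rejected, per the output standard described above. Checking extensions
# only is a simplifying assumption.
ACCEPTED_FORMATS = {".csv", ".json"}

def validate_exports(paths: list) -> bool:
    """Raise if any deposited export is not machine-readable."""
    rejected = [p for p in paths if Path(p).suffix.lower() not in ACCEPTED_FORMATS]
    if rejected:
        raise ValueError(f"Non-machine-readable exports rejected: {rejected}")
    return True

print(validate_exports(["payroll.csv", "training.json"]))  # True
```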
The compilation coordinator then assembled all exports, generated a cover letter explaining what was included and what was excluded (with references to the scope matrix for excluded items), and delivered the package via a time-limited secure download link rather than email attachment.
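The text does not specify how the time-limited link was generated. As one plausible sketch, an HMAC-signed token with an embedded expiry could back such a link; the scheme, the secret handling, and the case ID below are all assumptions:

```python
import hashlib
import hmac
import time

# Illustrative sketch of a time-limited download token: case ID plus expiry,
# signed with HMAC-SHA256. The actual delivery platform's mechanism and
# secret management would differ in production.
SECRET = b"rotate-me"  # assumption; keep real secrets in a secrets manager

def make_token(case_id: str, ttl_seconds: int = 7 * 24 * 3600) -> str:
    expires = int(time.time()) + ttl_seconds
    payload = f"{case_id}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token: str) -> bool:
    """Accept only unexpired tokens with a valid signature."""
    case_id, expires, sig = token.rsplit(":", 2)
    payload = f"{case_id}:{expires}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()

token = make_token("DPR-2024-014")  # hypothetical case ID
print(verify_token(token))  # True
```

The design point carried over from the case: delivery via an expiring link means a leaked URL goes stale, whereas an email attachment is exposed indefinitely.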
Phase 4 — Format Configuration (Weeks 5–6, parallel)
Simultaneously, IT worked through each platform to enable or configure CSV/JSON exports. The HRIS required a configuration change in the admin panel — 20 minutes of work. The payroll platform had a built-in “data export” module that had never been activated. The LMS required a support ticket with the vendor to enable structured export. The legacy HRIS remained a manual database query — this was documented as the only exception, with a note that it added 2–3 business days to any legacy-record request.
Results: What Changed and by How Much
The workflow was operational by week 8 of the implementation. Over the following six months, the team processed 14 data portability requests. Results compared to the 12-month baseline period:
| Metric | Baseline | Post-Implementation |
|---|---|---|
| Average response time | 22 days | Under 4 days |
| Requests exceeding 30-day window | 3 of 9 (33%) | 0 of 14 (0%) |
| Requests delivered in non-compliant format (PDF only) | 9 of 9 (100%) | 0 of 14 (0%) |
| Scope disputed internally per request | Estimated 4–6 hrs per request | <30 min (scope matrix lookup) |
| Regulatory audit findings on portability | N/A (no audit) | Zero findings |
The GDPR supervisory authority review — triggered by an unrelated data subject complaint in a different area of the business — included a specific inquiry into data portability response practices. The auditor reviewed three portability case files. All three showed structured intake timestamps, machine-readable exports, documented scope decisions with legal sign-off, and responses delivered within the legal window. The finding: no deficiencies in portability handling.
Gartner research on privacy program maturity consistently identifies documented scope policies and automated deadline tracking as the two controls most correlated with audit survival in data subject rights management. This case bore that out precisely.
Lessons Learned: What Worked, What Didn’t, and What We’d Do Differently
What Worked
Sequencing mattered. Attempting to build the fulfillment workflow before the data map was complete would have produced a workflow with gaps. Attempting to train coordinators on scope before the legal scope matrix was signed would have produced inconsistent decisions. The dependency order — map first, scope second, workflow third, format fourth — was not obvious in advance but proved essential in retrospect.
The scope matrix paid dividends beyond portability. The same document became the reference for deletion request scoping and rectification request scoping. Three separate workflows were unified around one legal policy document. This is consistent with the GDPR Article 5 data processing principles framework — accountability and data minimization are not request-type-specific; they’re organizational postures.
Treating each request as a data audit surface was the highest-value habit. The retention problem discovered in the ATS during the data mapping exercise — indefinitely retained former employee records — was a ticking liability. Finding it during the portability build, rather than during a separate audit, allowed remediation before it compounded.
What Didn’t Work
The legacy HRIS exception was underestimated. Documenting it as “adds 2–3 days” was accurate but insufficient. Two of the 14 post-implementation requests involved former employees hired before the platform migration. Both required the legacy database query process, and both tested the day-20 deadline buffer. Legacy system remediation — migration to a queryable archive or vendor-assisted export automation — should have been scoped as Phase 1 work, not deferred.
The webform created a new single point of failure. When the form was briefly unavailable due to a platform update, one incoming request arrived via email and was almost missed. The backup intake channel (dedicated email address with auto-acknowledgment and manual case creation) should have been documented and trained from day one.
What We’d Do Differently
Build the HR data audit cycle directly into the portability program cadence from the start. Quarterly reviews of the data map — confirming that new systems have been added, that retention triggers are firing, and that the scope matrix reflects any regulatory changes — would have surfaced the ATS retention problem earlier and kept the legacy HRIS remediation on a visible roadmap rather than a deferred risk log.
Also: involve IT in week 1, not week 4. The format configuration work took less than two weeks once IT was engaged, but the delay waiting for IT capacity pushed the format fix to the end of the implementation timeline. Machine-readable export configuration is IT work, not HR work — but HR owns the requirement and must initiate it immediately.
Applying This Framework to Your Organization
The pattern documented here is not specific to this organization’s size, industry, or regulatory exposure. Any HR team with multiple data systems, dual-jurisdiction exposure, or a history of ad hoc portability handling can apply the same four-phase sequence:
- Map the data before touching the workflow. Every system that holds employee personal data must be on the map before a single workflow step is designed.
- Get legal sign-off on scope — once, in writing. A standing scope matrix eliminates per-request legal delays and creates a defensible audit record.
- Automate intake and deadline tracking. Manual intake processes create timestamp gaps that regulators treat as non-compliance indicators, not administrative oversights.
- Configure machine-readable export before the next request arrives. This is a configuration task, not a development project. Most platforms support it natively. The only barrier is awareness that it’s required.
For organizations building this alongside complementary programs, the sequence connects directly to your existing essential HR data security practices. Data you can locate, scope, and export in compliant format is also data you can secure, retain correctly, and delete on schedule. These are not separate programs — they are the same data map, applied to different legal obligations.
The complete governance context for this work — including access management, retention schedules, anonymization protocols, and breach response — is covered in the parent guide on secure HR data compliance and privacy frameworks. Build that foundation first. Data portability compliance is one of the most visible tests of whether that foundation is real.