How to Automate HR Performance Review Summaries with Make.com™ and AI

Performance review cycles drain HR bandwidth not because writing is hard, but because data assembly is hard. Pulling manager ratings from one system, peer feedback from another, and self-assessments from a third — then normalizing all of it into a coherent narrative — is exactly the kind of repetitive, structured work that breaks people and breaks deadlines. This guide shows you how to eliminate that assembly burden using Make.com™ as the orchestration layer and AI as the narrative engine. The result is a draft-ready performance summary that lands in a manager’s inbox within minutes of data submission — consistent, structured, and ready for human judgment rather than human assembly.

This satellite guide drills into one specific workflow within the broader framework of smart AI workflows for HR and recruiting with Make.com™ — read that pillar first if you’re new to the automation-before-AI sequence.


Before You Start

Do not open Make.com™ until you’ve completed this checklist. Skipping prerequisites is the number-one reason performance summary pilots fail.

  • Tools required: Make.com™ account (any paid tier), access to your HRIS or performance management platform’s API or export function, a spreadsheet or form tool for survey inputs, and an AI API key (your organization’s approved vendor).
  • Data discipline first: Every field that will feed the AI prompt must have a consistent name and format across all source systems. “manager_rating” and “Manager Rating” are different fields to an automation. Standardize before you build.
  • Compliance review: Confirm with your legal or privacy team that sending employee performance data to an external AI API is permitted under your data governance policy. See our guide on securing Make.com™ AI HR workflows for data and compliance before proceeding.
  • Prompt template drafted: Write your AI prompt template on paper before building the scenario. Knowing exactly what you’ll send determines how you structure the data collection steps.
  • Scope limit: Start with one review category — goal attainment is the easiest because data is quantitative. Add qualitative peer and self-assessment inputs in a second iteration.
  • Time estimate: One focused session (3–4 hours) for a pilot covering a single data source. Two sessions for a multi-source workflow with routing and approval logic.
  • Risk: If the AI draft bypasses human review and writes directly to an employee record, you have a compliance and trust problem. Human approval is not optional — it is a required step in this workflow.

Step 1 — Audit and Map Your Performance Data Sources

Before you build anything, know exactly what data exists, where it lives, and what format it’s in.

List every system that holds a data point relevant to a performance summary. Common sources include:

  • HRIS — employee record, tenure, role, department, compensation band (if relevant)
  • Performance management platform — goal scores, competency ratings, manager assessment fields
  • Survey tool — peer feedback responses, 360 inputs
  • Spreadsheet or form — self-assessment narrative responses

For each source, document: the field name, data type (numeric score, free-text, dropdown), whether it is mandatory or optional, and the system’s export or API method. This becomes your data dictionary — the foundation of every Make.com™ module you’ll build next.

Gartner research consistently identifies data fragmentation as the primary obstacle to effective performance management technology adoption. A data dictionary built before automation is the mitigation. Do not skip it.

Output of this step: A single-page data dictionary listing every field, its source system, its format, and whether the AI prompt needs it.
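A data dictionary needs no special tooling; a structured list is enough. Here is a minimal sketch in Python — the field names, source systems, and access methods are illustrative, not prescriptive:

```python
# Minimal data-dictionary sketch; every entry here is an illustrative example.
DATA_DICTIONARY = [
    {"field": "goal_attainment_score", "source": "performance_platform",
     "type": "numeric", "required": True, "access": "REST API",
     "used_in_prompt": True},
    {"field": "self_assessment_narrative", "source": "survey_form",
     "type": "free_text", "required": False, "access": "CSV export",
     "used_in_prompt": True},
    {"field": "compensation_band", "source": "hris",
     "type": "dropdown", "required": False, "access": "REST API",
     "used_in_prompt": False},  # held back from the AI prompt deliberately
]

def prompt_fields(dictionary):
    """Return only the field names the AI prompt will actually receive."""
    return [entry["field"] for entry in dictionary if entry["used_in_prompt"]]
```

Keeping a `used_in_prompt` flag per field makes the later prompt-engineering step a lookup rather than a debate.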


Step 2 — Build the Make.com™ Data-Collection Scenario

The Make.com™ scenario’s job in this step is purely mechanical: retrieve records, normalize field names, and output one clean JSON object per employee. No AI yet.

Choose your trigger

The scenario can trigger on a schedule (e.g., run nightly during review cycle weeks), on a webhook from your performance platform when a manager submits ratings, or manually via a button in a spreadsheet. Schedule-based triggers work well for batch processing entire cohorts; webhook triggers work better for real-time individual processing.

Add source modules

Add one module per data source. Use Make.com™’s native app connectors where available, or HTTP modules for REST API calls. Map each source field to your standardized data dictionary field name. Use Make.com™’s built-in text parsing and numeric formatting tools to normalize data types — convert all rating scales to the same range (e.g., 1–5) and strip any HTML or special characters from free-text fields.

Aggregate into a single data object

Use a Set Variable or JSON module to compile all normalized fields into a single structured object per employee. This object is what you’ll pass to the AI in the next step. A clean structure looks like this conceptually:

{
  "employee_name": "...",
  "review_period": "Q4 2025",
  "goal_attainment_score": 4.2,
  "manager_rating_communication": 4,
  "peer_feedback_summary": "...",
  "self_assessment_narrative": "...",
  "missing_fields": []
}

Add a conditional branch here: if missing_fields is not empty, route the record to an alert email to the HR coordinator rather than continuing to AI generation. This prevents the AI from fabricating data to fill gaps — the single most important guardrail in the entire workflow.
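The normalization and missing-field guard described above can be sketched in plain Python. The field names, the 0–100 source scale, and the required-field list are assumptions for illustration — Make.com™’s built-in formatting tools perform the equivalent operations inside the scenario:

```python
# Sketch of the normalize-and-guard logic; field names and scales are assumptions.
import html
import re

REQUIRED_FIELDS = ["employee_name", "goal_attainment_score", "peer_feedback_summary"]

def rescale(value, old_min, old_max, new_min=1.0, new_max=5.0):
    """Map a rating from its source scale onto the common 1-5 range."""
    return new_min + (value - old_min) * (new_max - new_min) / (old_max - old_min)

def clean_text(text):
    """Strip HTML tags and collapse whitespace in free-text fields."""
    text = re.sub(r"<[^>]+>", " ", html.unescape(text or ""))
    return re.sub(r"\s+", " ", text).strip()

def build_record(raw):
    record = {
        "employee_name": raw.get("Employee Name"),
        "review_period": raw.get("period"),
        "goal_attainment_score": (rescale(raw["goal_pct"], 0, 100)
                                  if "goal_pct" in raw else None),
        "peer_feedback_summary": clean_text(raw.get("peer_feedback")),
    }
    # The guard: list every required field that came through empty,
    # so the scenario can route the record to an HR alert instead of the AI.
    record["missing_fields"] = [f for f in REQUIRED_FIELDS if not record.get(f)]
    return record
```

In Make.com™ the same branch is a router with a filter on the missing_fields array length.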

For a detailed breakdown of which Make.com™ modules handle each of these steps most efficiently, see our guide to essential Make.com™ modules for HR AI automation.

Output of this step: A working Make.com™ scenario that reliably produces one structured, normalized JSON object per employee with a missing-field alert branch.


Step 3 — Engineer the AI Prompt Template

The prompt template, not the AI model, determines whether the output is usable. Invest time here before you connect the API.

A production-ready performance summary prompt has five components:

1. Role and context instruction

Tell the AI what role it is playing and what the output is for. Example: “You are an HR writing assistant generating a first-draft performance summary for internal review. This draft will be reviewed and edited by a manager before being filed.”

2. Data handoff

Insert the structured JSON object from Step 2 directly into the prompt. Reference specific field names explicitly so the AI knows where every data point comes from. Do not ask the AI to infer data that isn’t in the object.

3. Output structure instruction

Specify exactly what sections you want and in what order. Example sections: Goal Attainment Summary, Strengths Observed, Development Areas, Recommended Next Steps. Specifying structure here eliminates most reformatting work in the routing step.

4. Language and tone guardrails

Instruct the AI to use neutral, professional language, avoid demographic inference, reference only the data fields provided, and flag any field marked as missing with a visible placeholder (e.g., [MISSING: peer_feedback_summary]). This is your primary bias-mitigation lever. Harvard Business Review and SHRM research both identify language consistency as a measurable factor in review fairness — prompt engineering is how you operationalize that at scale.

5. Length and format constraint

Specify maximum word count per section and whether the output should be plain text or formatted with headers. Shorter, tighter constraints produce more usable first drafts.

Test the prompt independently — paste it manually into your AI tool with sample data before connecting it to Make.com™. Iterate until the output requires minimal editing. Based on our testing, three to five manual iterations are typical before a prompt template is production-ready.
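As a reference point, the five components can be assembled into a single parameterized template. This is a sketch — the section names and placeholder convention come from this step, but the exact wording and word limits are illustrative, not a canonical prompt:

```python
# Five-component prompt template sketch; wording and limits are illustrative.
import json

PROMPT_TEMPLATE = """\
You are an HR writing assistant generating a first-draft performance summary
for internal review. A manager will review and edit this draft before filing.

Employee data (use ONLY these fields; do not infer anything not present):
{employee_json}

Write these sections, in order:
1. Goal Attainment Summary
2. Strengths Observed
3. Development Areas
4. Recommended Next Steps

Rules:
- Neutral, professional language; no demographic inference.
- Reference only the data fields provided.
- For any field listed in missing_fields, insert [MISSING: <field_name>].
- Maximum 120 words per section. Plain text with section headers.
"""

def render_prompt(employee_record):
    """Inject the Step 2 JSON object at the data-handoff position."""
    return PROMPT_TEMPLATE.format(
        employee_json=json.dumps(employee_record, indent=2))
```

Because the data handoff is a single substitution slot, the manual testing loop described above is just calls to render_prompt with sample records.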

Output of this step: A finalized, tested prompt template ready for parameterization in Make.com™.


Step 4 — Connect Make.com™ to the AI API

With a clean data object and a tested prompt template, the API connection step is straightforward.

In Make.com™, add an HTTP module (or a native AI module if your platform offers one) after the data-aggregation step. Configure it to send a POST request to your AI API endpoint with:

  • Authorization header using your API key (stored as a Make.com™ environment variable — never hard-coded)
  • Request body containing your prompt template with the employee JSON object injected at the designated data-handoff position using Make.com™’s dynamic variable mapping
  • Model and parameter settings: use a low temperature setting (0.3–0.5) for HR writing tasks — lower temperature produces more consistent, less creative output, which is exactly what you want for compliance-sensitive documents

Capture the AI response in a Parse JSON module and extract the summary text. Add error handling: if the API returns an error or an empty response, route the record to a retry queue or an HR coordinator alert rather than silently dropping it.
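For clarity, here is the same request-and-error-handling logic sketched in Python using only the standard library. The endpoint URL, payload shape, and response field names are placeholders — adapt them to your AI vendor’s actual API, and note that Make.com™’s HTTP module plus an error-handler route replaces all of this inside the scenario:

```python
# Sketch of the HTTP call plus error routing; endpoint and payload are placeholders.
import json
import os
import urllib.error
import urllib.request

def classify_response(body):
    """Route the parsed API response: empty output alerts HR, never drops silently."""
    summary = (body or {}).get("text", "").strip()
    if not summary:
        return {"status": "alert_hr", "error": "empty response"}
    return {"status": "ok", "summary": summary}

def call_ai_api(prompt, endpoint="https://api.example.com/v1/generate"):
    payload = json.dumps({
        "prompt": prompt,
        "temperature": 0.4,   # low temperature for consistent, compliant output
        "max_tokens": 800,
    }).encode("utf-8")
    request = urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Key read from the environment -- never hard-coded in the scenario
            "Authorization": f"Bearer {os.environ['AI_API_KEY']}",
        },
    )
    try:
        with urllib.request.urlopen(request, timeout=60) as response:
            body = json.loads(response.read())
    except (urllib.error.URLError, json.JSONDecodeError) as exc:
        return {"status": "retry_queue", "error": str(exc)}
    return classify_response(body)
```

The three status values map one-to-one onto scenario routes: ok continues to review routing, retry_queue re-enters the scenario, alert_hr emails the coordinator.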

UC Irvine research on interruption and task-switching shows that context switches cost an average of 23 minutes of refocus time. Every manual fallback your workflow requires creates that cost for an HR team member. Build error handling that eliminates the need for human intervention except at the intentional review step.

Output of this step: A Make.com™ scenario that reliably calls the AI API and returns a structured summary draft for each employee record.


Step 5 — Route the AI Draft for Human Review and Approval

This step is non-negotiable. No AI-generated performance summary goes directly into an employee record. Every draft routes to the assigned manager for review, edit, and explicit approval.

Build the routing module

Use Make.com™ to look up the assigned manager’s email from your HRIS using the employee record. Send the AI draft to the manager with:

  • The full draft text formatted for readability
  • The source data fields used to generate it (so the manager can verify accuracy)
  • Clear instructions: edit directly, approve as-is, or flag for HR coordinator escalation
  • A deadline aligned to your review cycle calendar

Capture manager edits

Depending on your toolset, manager edits can be captured via a linked Google Doc, a form submission, or a direct edit in your performance management platform. If using a document, a second Make.com™ scenario can watch for document completion and retrieve the final text automatically.

Require explicit approval

The workflow must not proceed to filing until an approval action is recorded. A simple approval can be a form button, a specific email reply trigger, or a status field update in your performance platform. Make.com™ can watch for any of these and gate the next step on confirmation.
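Whatever mechanism records the approval, the gate itself reduces to one predicate: proceed only when an explicit approval record exists. A sketch, with illustrative field names for the approval record:

```python
# Approval-gate sketch; the record's field names are assumptions for illustration.
def approval_gate(record):
    """Return True only when an explicit, attributed manager approval is recorded."""
    approval = record.get("approval") or {}
    return bool(
        approval.get("approved") is True      # explicit action, not a default
        and approval.get("approver_email")    # who approved
        and approval.get("timestamp")         # when they approved
    )
```

In Make.com™ this is a filter on the route into the filing step; the point of the sketch is that an absent or ambiguous approval must evaluate to "do not file."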

This routing pattern mirrors what we use in AI candidate screening workflows with Make.com™ — AI generates the draft, humans make the call.

Output of this step: Every AI-generated summary has been reviewed and explicitly approved by the assigned manager before moving forward.


Step 6 — Log, Archive, and Build the Audit Trail

The final scenario step writes the approved summary and a complete audit record back to your systems of record.

Write the following to your HRIS or document management system:

  • Approved summary text
  • Original AI draft (pre-manager edits)
  • Raw input data object used to generate the draft
  • Manager approval timestamp and approver identity
  • AI model version and prompt template version used

Parseur’s research on manual data entry errors estimates the cost of a single significant data error at $28,500 per employee per year when downstream payroll and compliance impacts are included. An audit trail that captures both the AI output and the human approval decision is your primary defense against that exposure.

Tag every record with a workflow version number so that if you update the prompt template in a future review cycle, you can distinguish which summaries were generated under which version. This matters when a summary is disputed months later.
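Conceptually, the audit record bundles the five items above plus the workflow version into one object written to the system of record. A sketch with illustrative field names:

```python
# Audit-record sketch; field names and the version string are illustrative.
from datetime import datetime, timezone

WORKFLOW_VERSION = "2025.1"  # bump whenever the prompt template changes

def build_audit_record(input_object, ai_draft, approved_text,
                       approver_email, model_version, prompt_version):
    return {
        "workflow_version": WORKFLOW_VERSION,
        "model_version": model_version,
        "prompt_template_version": prompt_version,
        "input_data": input_object,          # raw object the draft was built from
        "ai_draft": ai_draft,                # pre-edit AI output
        "approved_summary": approved_text,   # manager-approved final text
        "approver": approver_email,
        "approved_at": datetime.now(timezone.utc).isoformat(),
    }
```

Storing the draft and the approved text side by side is what makes the manager edit rate in the next section measurable without extra instrumentation.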

Output of this step: A complete, compliant audit trail for every AI-generated summary filed during the review cycle.


How to Know It Worked

Measure these three metrics before launch (baseline) and after two full review cycles:

  1. Draft-to-approval time: How long from data submission to manager-approved summary? A functioning workflow should reduce this from days to hours in cycle one, and further in cycle two as managers become familiar with the draft format.
  2. Manager edit rate per summary: Track what percentage of the AI draft text managers change. High edit rates (above 40%) in cycle one are normal. If edit rates are not declining by cycle two, your prompt template needs revision.
  3. HR coordinator hours per review cycle: Measure total hours spent on data collection, chasing submissions, and formatting summaries. Asana’s Anatomy of Work research finds that knowledge workers spend a significant portion of their week on work about work — data collection and status chasing for performance reviews is a textbook example. This metric should fall meaningfully after workflow launch.

If draft-to-approval time increases, the bottleneck is almost always the manager review step — not the automation. Address that with deadline reminders built into the routing module, not by removing the human approval requirement.


Common Mistakes and Troubleshooting

Mistake: Building the scenario before standardizing data fields

You will rebuild the scenario. Standardize the data dictionary first, every time.

Mistake: Using a high temperature setting for the AI model

High temperature produces creative variation — the opposite of what you need for consistent, compliant HR documents. Set temperature at 0.3–0.5 for this use case.

Mistake: Sending the AI draft directly to the employee

The draft goes to the manager, not the employee. Always. Routing AI output directly to the subject of that output, without human review, is both a compliance risk and a trust-destroying event.

Mistake: No error handling on the API call

API calls fail. If your scenario has no error branch, failed calls drop silently and you discover the gap when an employee asks about their missing review. Add an error alert to HR for every failed API call.

Mistake: One prompt template for all roles

A prompt template written for individual contributor goal attainment produces poor output for a manager’s leadership competency review. Build separate templates for materially different role types and review frameworks.


Where This Fits in Your Broader HR Automation Strategy

Performance summary automation is one node in a larger workflow ecosystem. Teams that implement it successfully typically already have adjacent workflows handling related tasks — meeting note summaries, onboarding communications, or candidate screening — so the data infrastructure and Make.com™ conventions are already established.

For the meeting-note parallel, see our guide on automating HR meeting note summaries with AI and Make.com™. For the business case behind investing in this layer of infrastructure, the ROI case for Make.com™ AI in HR provides the financial framework. And for teams ready to go beyond individual workflows into a connected HR automation architecture, advanced AI workflows for strategic HR with Make.com™ maps the full strategic layer.

McKinsey Global Institute research on generative AI estimates that automating data synthesis and document drafting tasks could reclaim significant knowledge-worker capacity annually. Performance review summary generation is one of the most structurally straightforward applications of that recapture — the data already exists, the output format is known, and the human review step is built into the existing process. The only missing piece is the workflow to connect them.

If you want a structured assessment of where this workflow fits in your HR operations and what the realistic implementation sequence looks like, an OpsMap™ engagement is where that starts. If you’ve already done the assessment and are ready to build, an OpsSprint™ is designed for exactly this scope — one focused workflow, built and validated, handed off to your team to run.

Explore the full landscape of practical AI workflow implementations for HR efficiency to see how performance summary automation connects to the rest of your stack.