How to Automate HR Meeting Summaries with AI: A Step-by-Step Make.com™ Workflow

Published On: August 13, 2025


Every HR team runs on meetings. Interviews, performance reviews, disciplinary conversations, onboarding check-ins, and strategy sessions — each one produces decisions and action items that need to be documented, distributed, and acted on. The problem is not that HR professionals lack discipline. The problem is that manual note synthesis is a slow, error-prone, and deeply low-value use of time that no one has bothered to automate yet.

This guide builds the workflow that changes that. It follows the same logic as smart AI workflows for HR and recruiting with Make.com: deterministic automation handles the mechanical spine first — file detection, API calls, data routing — and AI fires only at the discrete judgment point where rules cannot decide (summarizing language). Sequence is everything.

By the end of this guide, you will have a production-ready Make.com™ scenario that detects a new meeting recording, transcribes it, generates a structured summary with decisions and action items, and pushes the output into your HRIS — in under 10 minutes, without anyone typing a single word.

Before You Start

This workflow requires four prerequisites in place before you open Make.com™, plus a realistic time budget and a compliance check. Missing any one of the prerequisites will stall implementation mid-build.

  • Cloud storage with a dedicated recordings folder. Google Drive, Dropbox, OneDrive, or any S3-compatible storage. Your video conferencing platform must be able to auto-save recordings to a specific folder within this storage.
  • A transcription API account. OpenAI Whisper (via the OpenAI API) is the most common choice for teams already using GPT-class models. Google Cloud Speech-to-Text is the alternative for data-residency requirements. Have your API key ready.
  • An OpenAI API account (or equivalent LLM API). This handles the summarization step. Whisper handles transcription; a separate GPT call handles summarization. They are different calls with different purposes.
  • Write access to your destination system. Whether that is an HRIS, an ATS, Notion, or a Google Sheet serving as a temporary knowledge base, you need API credentials or a Make.com™ native connector with write permissions confirmed before building.
  • Time estimate. Initial build: 2–3 hours for someone new to Make.com™. Refinement and testing: 1–2 additional hours. Error handling setup: 30–60 minutes. Total: one focused afternoon.
  • Compliance check. Before processing real meeting recordings, confirm that your organization’s data retention policy, applicable employment law, and your transcription API’s data processing agreement are aligned. Disciplinary and accommodation-related conversations carry elevated legal sensitivity. See the dedicated guide on securing Make.com™ AI HR workflows before going live with those meeting types.

Step 1 — Audit and Baseline Your Current Meeting Note Workflow

Before building, measure what you are replacing. You cannot prove ROI or catch edge cases without a baseline.

Spend 30 minutes answering these questions about your current state:

  • How many meetings per week produce notes or summaries that get filed anywhere?
  • Who writes those notes — the facilitator, a dedicated note-taker, or the most junior person in the room?
  • How long does summarization take per meeting, on average?
  • Where do finished summaries go — HRIS, shared drive, email thread, or nowhere consistently?
  • How often do action items from meetings go untracked?

This audit serves two purposes. First, it surfaces the meeting types and routing rules you need to encode in the workflow. Second, it creates the before-state data you need to calculate ROI after go-live. Research from the Asana Anatomy of Work report consistently finds that knowledge workers spend a disproportionate share of their workday on work about work — documentation, status updates, and coordination overhead — rather than skilled work. Meeting summarization sits squarely in that category.

Log your baseline numbers. You will need them for the 30-day ROI comparison after go-live.

Step 2 — Configure Cloud Storage and Recording Auto-Save

The workflow trigger is a new file appearing in a watched folder. Everything downstream depends on recordings landing there reliably and automatically — not on someone manually uploading a file.

Configure your video conferencing platform to auto-save cloud recordings:

  • Zoom: Settings → Recording → Cloud Recording. Route recordings to your designated folder via Zoom’s cloud storage integration, or keep Zoom’s native cloud recording and retrieve files via the Zoom API.
  • Microsoft Teams: Meeting recordings save automatically to OneDrive or SharePoint depending on meeting type. Set a consistent destination folder through your Teams admin policy.
  • Google Meet: Recordings save to the meeting organizer’s Google Drive. Use a shared Drive folder with a consistent path for HR meetings.

Create a folder structure that separates meeting types from the start: /HR-Recordings/Interviews/, /HR-Recordings/Performance-Reviews/, /HR-Recordings/Strategy/. Make.com™ can watch a parent folder and detect subfolders, which lets you route different meeting types to different summarization prompts and different HRIS destinations in a single scenario.

Test the auto-save by running a 2-minute test recording and confirming the file appears in the correct folder within 5 minutes of the meeting ending. Do not proceed to Step 3 until this is confirmed.

Step 3 — Build the Make.com™ Watch Folder Trigger

Log into Make.com and create a new scenario. The first module is your trigger.

  1. Click the + to add the first module and search for your cloud storage provider (Google Drive, Dropbox, OneDrive, or Box).
  2. Select the Watch Files (or Watch New Files in Folder) trigger module.
  3. Authenticate with your cloud storage account using a dedicated service account — not a personal account. Service accounts prevent the workflow from breaking when individuals leave the organization.
  4. Set the watched folder to your HR recordings parent folder.
  5. Set the file type filter to audio/video extensions only: .mp4, .m4a, .mp3, .wav. This prevents the trigger from firing on unrelated files.
  6. Set polling interval. Make.com™ watches folders on a schedule — 15 minutes is standard for non-urgent use cases; 5 minutes if same-day summary delivery is required.
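As a local sanity check, the extension filter from item 5 can be expressed as a simple predicate you can mirror in the Make.com™ filter condition. This is a sketch in Python for testing filenames outside Make.com™; the function name is illustrative, not a Make.com™ feature.

```python
from pathlib import Path

# The same extension whitelist configured in the trigger module (item 5).
AUDIO_VIDEO_EXTS = {".mp4", ".m4a", ".mp3", ".wav"}

def should_process(filename: str) -> bool:
    """True only for the audio/video extensions the trigger should accept."""
    return Path(filename).suffix.lower() in AUDIO_VIDEO_EXTS
```

Note the case-insensitive match: platforms sometimes emit `.MP4`, and a case-sensitive filter would silently skip those files.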

Save the trigger and run it once manually against a test recording to confirm the file metadata (name, path, download URL) maps correctly to scenario variables before continuing.

Step 4 — Add the Transcription API Module

The transcription step converts the audio file to text. This is a deterministic API call — no judgment required, just accurate execution.

  1. Add an HTTP Make a Request module after the Watch trigger.
  2. Set method to POST and URL to your transcription API endpoint. For OpenAI Whisper, the endpoint is https://api.openai.com/v1/audio/transcriptions.
  3. In the Headers section, add your Authorization header: Bearer [your API key]. Store the API key in Make.com™’s secure data store — never paste it as plain text in a module field.
  4. In the Body section, configure a multipart/form-data request. Map the audio file download URL from Step 3 to the file field. Set model to whisper-1 (or your equivalent). Set response_format to text for a clean plain-text transcript.
  5. Map the response body to a scenario variable named transcript_text.
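To verify your API key and request shape before wiring the HTTP module, the same call can be reproduced outside Make.com™. The endpoint, `model`, and `response_format` fields follow OpenAI's documented transcription API; the helper names are illustrative, and the actual call only fires if `OPENAI_API_KEY` is set. A minimal sketch using only the standard library:

```python
import os
import urllib.request
import uuid

def build_multipart(fields, file_field, filename, file_bytes):
    """Encode a multipart/form-data body using only the standard library."""
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"\r\n\r\n{value}\r\n'.encode()
        )
    parts.append(
        (f'--{boundary}\r\nContent-Disposition: form-data; '
         f'name="{file_field}"; filename="{filename}"\r\n'
         f'Content-Type: application/octet-stream\r\n\r\n').encode()
        + file_bytes + b"\r\n"
    )
    parts.append(f"--{boundary}--\r\n".encode())
    return b"".join(parts), f"multipart/form-data; boundary={boundary}"

# Fields mirror Step 4: whisper-1 model, plain-text transcript back.
body, content_type = build_multipart(
    {"model": "whisper-1", "response_format": "text"},
    "file", "meeting.m4a", b"...audio bytes..."
)

def transcribe(body, content_type):
    """POST to the transcription endpoint; requires OPENAI_API_KEY."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/audio/transcriptions",
        data=body, method="POST",
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": content_type,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(transcribe(body, content_type))
```

If this succeeds from your machine but the Make.com™ module fails, the problem is in the module's field mapping, not your credentials.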

Run a test with a real recording. Verify that the transcript variable contains readable text. Check for speaker-labeling if your API supports it — speaker labels improve summarization quality significantly for multi-participant meetings.

Add an error handler to this module now, before moving to Step 5. Set the error route to write the file path and error message to a fallback log (a Google Sheet or dedicated Make.com™ data store row works well) and send an alert to the HR admin email. A failed transcription that logs itself is a recoverable event. A failed transcription that silently disappears is a compliance gap.

For deeper context on transcript-first interview workflows, see the companion guide on how to automate HR interview transcription with Make.com and AI.

Step 5 — Design the Structured LLM Summarization Prompt

This is the highest-leverage step in the entire workflow. The quality of every output summary is determined here. Vague prompts produce vague summaries. Structured prompts produce structured, actionable output your HRIS can actually ingest.

The prompt must specify:

  • Role: “You are an expert HR documentation specialist.”
  • Task: “Analyze the following meeting transcript and return a JSON object.”
  • Output fields and constraints: Specify every field by name, with character or word limits where relevant.
  • Tone: “Use neutral, factual language. Do not editorialize. Do not infer intent beyond what is stated.”

A production-grade prompt for an interview debrief looks like this:

You are an expert HR documentation specialist. Analyze the following meeting transcript and return ONLY a valid JSON object with these fields:

{
  "meeting_type": "string — one of: Interview, Performance Review, Disciplinary, Onboarding, Strategy",
  "summary": "string — 3-5 sentence factual summary of the conversation, max 150 words",
  "key_decisions": ["array of strings — each decision stated in one sentence"],
  "action_items": [
    {
      "item": "string — specific action",
      "owner": "string — name or role of responsible party",
      "due_date": "string — stated date or 'Not specified'"
    }
  ],
  "sentiment": "string — one of: Positive, Neutral, Mixed, Concerning",
  "follow_up_required": "boolean",
  "follow_up_notes": "string — specific follow-up context or empty string if false"
}

Transcript:
{{transcript_text}}

Note the {{transcript_text}} variable reference at the end. In Make.com™, replace this with the mapped variable from Step 4.

The JSON output format is deliberate. Make.com™ can parse JSON natively, which means each field routes to exactly the right destination in Step 6 without manual extraction. Teams that instruct the LLM to return plain prose must then manually parse that prose downstream — which reintroduces the human effort the automation was built to eliminate.

Build different prompt variants for different meeting types now. An interview debrief and a performance review have different required fields. Store each prompt as a Make.com™ text template so they are easy to maintain.
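The per-meeting-type template idea can be sketched as a lookup keyed by the Step 2 subfolder names. This is a Python sketch for local prompt testing, not a Make.com™ construct; the template bodies below are truncated placeholders for the full Step 5 prompts.

```python
# Keys match the Step 2 subfolder names; values are the full prompts
# from Step 5 (truncated here). The {transcript} placeholder plays the
# role of {{transcript_text}} in Make.com.
PROMPTS = {
    "Interviews": (
        "You are an expert HR documentation specialist. Analyze the following "
        "meeting transcript and return ONLY a valid JSON object..."
        "\n\nTranscript:\n{transcript}"
    ),
    "Performance-Reviews": (
        "You are an expert HR documentation specialist. Analyze the following "
        "performance review transcript with review-specific fields..."
        "\n\nTranscript:\n{transcript}"
    ),
}

def build_prompt(meeting_folder: str, transcript: str) -> str:
    """Pick the template by subfolder name and inject the transcript."""
    return PROMPTS[meeting_folder].format(transcript=transcript)
```

Keeping the templates in one place, rather than pasted into individual modules, is what makes monthly prompt maintenance a five-minute job instead of a scavenger hunt.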

Step 6 — Add the LLM Module and Parse the JSON Response

With the prompt designed, wire it into the scenario.

  1. Add an HTTP Make a Request module (or the native OpenAI module if available in your Make.com™ account).
  2. Set the endpoint to https://api.openai.com/v1/chat/completions.
  3. In the request body, pass your structured prompt as the user message content, with the transcript_text variable injected at the designated placeholder.
  4. Set model to your chosen GPT-class model. Set temperature to 0.2 — lower temperature produces more consistent, less hallucinated output for structured extraction tasks.
  5. Map the response to a variable named llm_raw_response.
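For reference, the request body described in items 2–4 looks like this when assembled. The model name here is a placeholder, not a recommendation; substitute your chosen GPT-class model.

```python
def build_chat_request(prompt: str, transcript: str,
                       model: str = "gpt-4o-mini") -> dict:
    """Assemble the chat completions payload with the Step 5 prompt.

    Low temperature (0.2) is deliberate: consistent structured output
    matters more than creativity for extraction tasks.
    """
    return {
        "model": model,  # placeholder; use your chosen model
        "temperature": 0.2,
        "messages": [{
            "role": "user",
            "content": prompt.replace("{{transcript_text}}", transcript),
        }],
    }
```

In Make.com™ itself the mapping panel does this assembly for you; the sketch just shows what the final JSON payload should contain.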

Next, add a JSON Parse module:

  1. Set the input to llm_raw_response (specifically the choices[0].message.content path from the OpenAI response structure).
  2. Define the data structure matching your prompt’s JSON schema — Make.com™ will auto-generate this from a sample response.
  3. After parsing, each field (summary, action_items, key_decisions, etc.) is an independently mappable variable.
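What the JSON Parse module does can be reproduced locally to verify your prompt returns clean, complete JSON before you wire up the scenario. A sketch, assuming the Step 5 schema and the standard OpenAI response envelope:

```python
import json

# Field names from the Step 5 prompt schema.
REQUIRED_FIELDS = {"meeting_type", "summary", "key_decisions",
                   "action_items", "sentiment", "follow_up_required",
                   "follow_up_notes"}

def parse_summary(llm_raw_response: str) -> dict:
    """Extract and validate the JSON object from choices[0].message.content."""
    envelope = json.loads(llm_raw_response)
    content = envelope["choices"][0]["message"]["content"]
    summary = json.loads(content)  # raises on malformed JSON -> error route
    missing = REQUIRED_FIELDS - summary.keys()
    if missing:
        raise ValueError(f"LLM omitted fields: {sorted(missing)}")
    return summary

# Simulated API response carrying a valid Step 5 object:
sample = json.dumps({"choices": [{"message": {"content": json.dumps({
    "meeting_type": "Interview",
    "summary": "Candidate discussed prior HR analytics work.",
    "key_decisions": ["Advance to final round"],
    "action_items": [{"item": "Schedule final interview",
                      "owner": "Recruiter", "due_date": "Not specified"}],
    "sentiment": "Positive",
    "follow_up_required": True,
    "follow_up_notes": "Confirm panel availability.",
})}}]})

parsed = parse_summary(sample)
```

The missing-field check mirrors what a strict data structure definition does in the parse module: a response that parses but omits `action_items` should fail loudly, not flow downstream half-empty.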

Add an error handler here as well. If the LLM returns malformed JSON (rare but possible), the parse module will fail. Error route: log the raw LLM response and transcript to the fallback log and alert the HR admin. Do not silently drop.

Step 7 — Route Outputs to HRIS, Task Manager, and Notifications

Parsed fields now route to their destinations. This step is where the workflow pays off.

Add parallel branches after the JSON parse module:

  • HRIS write: Use your HRIS’s native Make.com™ connector (most major HRIS platforms have one) or an HTTP module hitting your HRIS API. Write the summary, meeting_type, key_decisions, and sentiment fields to the relevant employee or candidate record.
  • Task manager write: For each item in the action_items array, use an iterator module to loop through the array and create one task per item. Map item to task title, owner to assignee, and due_date to the task due date.
  • Notification: If follow_up_required is true, trigger an email or Slack notification to the meeting organizer containing the follow_up_notes field and a link to the full summary in the HRIS.
  • Archive: Move the original recording file from the watched folder to an archive folder with a timestamp-prefixed filename. This keeps the watched folder clean and prevents re-processing.
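The iterator and archive branches above reduce to two small transforms, sketched here in Python. The task field names (`title`, `assignee`, `due`) are illustrative; your task manager's API defines the real payload.

```python
from datetime import datetime, timezone

def action_items_to_tasks(action_items):
    """One task per action item, mirroring the Make.com iterator branch."""
    return [
        {
            "title": item["item"],
            "assignee": item["owner"],
            # "Not specified" from the Step 5 schema maps to no due date.
            "due": None if item["due_date"] == "Not specified" else item["due_date"],
        }
        for item in action_items
    ]

def archive_name(original_filename: str, now=None):
    """Timestamp-prefixed filename for the archive branch."""
    now = now or datetime.now(timezone.utc)
    return f"{now:%Y%m%d-%H%M%S}-{original_filename}"
```

Handling `"Not specified"` explicitly matters: pushing that literal string into a due-date field will either fail the API call or, worse, create tasks with garbage dates.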

The same structural approach applies to performance review documentation — see the guide on how to automate performance review summaries with Make.com and AI for field-specific routing differences in that context.

How to Know It Worked

Run five end-to-end tests using real recordings of different meeting types before declaring the workflow production-ready. For each test, verify:

  • The recording file triggered the scenario within the expected polling interval.
  • The transcript variable contains readable, complete text (spot-check against the recording).
  • The LLM returned valid, parseable JSON — no truncation, no prose wrapper around the JSON object.
  • Each JSON field mapped correctly: summary in HRIS, action items as tasks, notification sent if follow-up was flagged.
  • The original recording file moved to the archive folder and no longer appears in the watched folder.
  • The error log remained empty for all five tests (confirming no silent failures).

After go-live, pull your baseline numbers from Step 1 at the 30-day mark and compare: minutes spent on manual summarization, action item completion rate, and summary-to-HRIS lag time. The companion guide on Make.com™ AI workflow ROI provides a structured framework for quantifying these gains at the leadership level.

Common Mistakes and Troubleshooting

Mistake 1: Using a personal cloud storage account as the trigger source

When the account owner leaves the organization, the scenario breaks and recordings stop processing. Always use a dedicated service account with a shared folder that HR admins can access regardless of individual employee status.

Mistake 2: Sending the full transcript without chunking for long meetings

LLM APIs have context window limits. A 3-hour all-hands recording may exceed the input limit of your chosen model. For meetings over 90 minutes, add a text chunking step before the LLM call — split the transcript into segments, summarize each, then pass all segment summaries to a final aggregation call. More complex to build, but necessary for long-form recordings.
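The chunk-then-aggregate pattern looks like this as a sketch. The word limit is an illustrative assumption, not a model-specific number; size chunks to your model's actual context window, and note that `summarize` and `aggregate` stand in for your two LLM calls.

```python
def chunk_transcript(text: str, max_words: int = 8000):
    """Split a transcript into word-bounded segments of at most max_words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

def summarize_long_meeting(text, summarize, aggregate, max_words=8000):
    """Summarize each chunk, then aggregate the partial summaries.

    summarize/aggregate are injected LLM calls, which keeps this logic
    testable without touching an API.
    """
    chunks = chunk_transcript(text, max_words)
    if len(chunks) == 1:
        return summarize(chunks[0])  # short meeting: single call suffices
    partials = [summarize(c) for c in chunks]
    return aggregate("\n\n".join(partials))
```

In Make.com™ this becomes an iterator over the chunks feeding a second, final LLM module; the single-chunk shortcut keeps short meetings on the cheap one-call path.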

Mistake 3: Skipping the temperature: 0.2 setting

Default temperature (1.0) produces creative, variable output — ideal for content generation, wrong for structured data extraction. Low temperature produces consistent JSON structure call after call. Set it low from day one.

Mistake 4: Routing ALL meeting types to the same HRIS endpoint

An interview debrief belongs on a candidate record. A performance review belongs on an employee record. A disciplinary meeting may belong in a separate compliance log with restricted access. Build routing logic using Make.com™’s Router module and the meeting_type field to send each summary to the correct destination.
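In code form, the routing logic is a lookup on the `meeting_type` field. Destination names here are illustrative; in Make.com™ this is a Router module with one filter per branch, and the fallback branch matters as much as the mapped ones.

```python
# Illustrative destination names -- substitute your real HRIS endpoints.
DESTINATIONS = {
    "Interview": "candidate_record",
    "Performance Review": "employee_record",
    "Disciplinary": "restricted_compliance_log",
    "Onboarding": "employee_record",
    "Strategy": "hr_knowledge_base",
}

def destination_for(meeting_type: str) -> str:
    """Unrecognized types go to a holding queue for human review,
    never to a default employee record."""
    return DESTINATIONS.get(meeting_type, "manual_review_queue")
```

The fallback guards against the LLM emitting a meeting type outside the Step 5 enum: a summary with an unknown type should wait for a human, not land on the wrong record.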

Mistake 5: Treating this as a set-and-forget deployment

LLM model versions change, API endpoints deprecate, and your HRIS field schema will evolve. Schedule a monthly 15-minute review of the scenario execution log. Catch drift before it becomes a data gap.

What to Build Next

Once this workflow runs cleanly in production, two natural extensions deliver compounding value:

Cross-meeting insight aggregation. Run a weekly Make.com™ scenario that pulls all summaries from the past 7 days, sends them to an LLM for pattern extraction (“What themes appeared across multiple performance reviews this week?”), and delivers a digest to HR leadership. This moves from documentation to decision support.

Candidate experience loop. For interview summaries, trigger an automated, personalized follow-up to the candidate within hours of the debrief completion — not days. The summary workflow has already parsed the meeting; the follow-up is one additional routing step. For a detailed implementation, see the guide on essential Make.com™ modules for HR AI automation.

The broader architecture this workflow fits into — where automation handles the mechanical spine and AI handles discrete judgment points — is covered in depth in the guide on advanced AI workflows for strategic HR. That is the logical next read once this workflow is live and you are ready to expand scope.