How to Automate Job Descriptions with Make.com and Generative AI

Published: August 11, 2025


Job description drafting is one of the clearest examples of high-repetition, high-stakes knowledge work that automation should own. HR professionals spend hours sifting through outdated templates, hiring managers add inconsistent detail, and the output — too often — is either too generic to attract strong candidates or so role-specific it accidentally narrows the pool. This is exactly the problem that smart AI workflows for HR and recruiting with Make.com™ are designed to solve.

The architecture here is not “let AI write everything.” It is deterministic automation handling data collection, normalization, and routing — and AI firing at the single point where rules cannot decide: the prose draft itself. Get the sequence right and you get consistent, compliant, brand-aligned job descriptions in minutes instead of hours. Get it wrong and you get inconsistent AI outputs that create more editing work than they save.

This guide walks you through seven steps to build the workflow from scratch.


Before You Start

Before building, confirm you have the following in place:

  • A Make.com™ account with at least a Core plan (required for multi-step scenarios with HTTP modules).
  • Access to a generative AI API — OpenAI GPT-4-class recommended. Have your API key ready.
  • A defined role data source — your HRIS, ATS, or a structured intake form (Google Forms, Typeform, or a native ATS intake form all work).
  • A destination system — where finished JDs will live: ATS job posting queue, Google Drive folder, or internal knowledge base.
  • A human reviewer identified — one named person or team inbox that owns the approval gate. No reviewer, no workflow.
  • Brand voice documentation — even a one-paragraph summary of your company tone is enough to meaningfully improve AI output consistency.

Time estimate: One to two days to build a working prototype. Two to three additional days for prompt iteration and testing before live production use.

Risk to flag: AI-generated JDs require human legal review for EEOC, ADA, and jurisdiction-specific compliance. This workflow improves consistency — it does not replace compliance review.


Step 1 — Audit and Structure Your Role Data Sources

The quality of every AI draft traces directly to the quality of the structured data you feed the prompt. Start here before you open Make.com™.

Map the minimum viable data set for a job description: job title, department, seniority level, top five required skills, three to five key responsibilities, reporting structure, and location or remote status. Then identify which system holds each field today — HRIS, ATS requisition record, or a manual intake form submitted by hiring managers.

The most common failure point at this stage is discovering that hiring manager intake forms are freeform text fields. Freeform input produces inconsistent prompt inputs, which produces inconsistent AI drafts. If your intake process is freeform today, restructure it before building the automation. Use a form tool that enforces required fields with dropdown or select inputs for role family, seniority, and department. Free text is acceptable only for the responsibilities field, where specificity is actually valuable.

Document your data map as a simple table: field name, source system, data type, required or optional. This becomes your variable reference when building Make.com™ data mappers in Step 3.
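To make the data map concrete, here is a minimal sketch of that table as a schema, plus the required-field check it enables. The field names and sources are illustrative assumptions based on the minimum viable data set above, not a fixed standard; adapt them to your own HRIS/ATS exports:

```python
# Illustrative role-data schema: field name, source system, type, required flag.
# Adjust names and sources to match your own intake form and ATS.
ROLE_SCHEMA = {
    "job_title":            {"source": "ATS requisition", "type": "string", "required": True},
    "department":           {"source": "ATS requisition", "type": "select", "required": True},
    "seniority_level":      {"source": "intake form",     "type": "select", "required": True},
    "required_skills":      {"source": "intake form",     "type": "list",   "required": True},
    "key_responsibilities": {"source": "intake form",     "type": "text",   "required": True},
    "reporting_structure":  {"source": "HRIS",            "type": "string", "required": True},
    "location":             {"source": "HRIS",            "type": "string", "required": True},
    "team_size":            {"source": "intake form",     "type": "number", "required": False},
}

def missing_required(record: dict) -> list[str]:
    """Return the names of required fields that are absent or empty."""
    return [name for name, spec in ROLE_SCHEMA.items()
            if spec["required"] and not record.get(name)]
```

The same `required` flags drive the incomplete-record gate in Step 3, so defining them once here keeps the audit and the scenario in sync.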

Based on our testing: Teams that skip this audit and build directly in Make.com™ spend twice as long on prompt debugging because they are solving a data quality problem with a prompt engineering fix. Do the audit first.


Step 2 — Build the Intake Trigger in Make.com™

Open Make.com™ and create a new scenario. Your trigger module determines how the workflow starts — choose the one that matches your data source:

  • Form submission trigger (Google Forms, Typeform, Jotform): Use the native module for your form tool. The scenario fires the moment a hiring manager submits a completed intake request.
  • ATS webhook trigger: If your ATS supports outbound webhooks on requisition creation, configure a Make.com™ Custom Webhook module as the receiver. This is the most seamless option — no separate intake form required.
  • Scheduled data pull: For ATS platforms without webhook support, use a scheduled trigger (hourly or daily) paired with a search/filter module to pull new requisitions created since the last run.

Test the trigger with a real submission before moving on. Confirm that all required fields from Step 1 are present in the trigger output bundle. If any field is missing, fix the upstream source — do not patch it downstream in the scenario.
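If you chose the Custom Webhook trigger, you can simulate a test submission from a short script instead of waiting for a real requisition. This sketch assumes a hypothetical webhook URL and a payload matching the Step 1 field map; the actual POST is left commented out so the snippet runs offline:

```python
import json

# Hypothetical Make.com custom-webhook URL. Replace with the URL the
# Custom Webhook module displays after you create it.
WEBHOOK_URL = "https://hook.us1.make.com/your-webhook-id"

# A sample requisition payload matching the Step 1 field map.
sample_requisition = {
    "job_title": "Senior Data Engineer",
    "department": "Engineering",
    "seniority_level": "Senior",
    "required_skills": "Python, SQL, Airflow, dbt, AWS",
    "key_responsibilities": "Own the batch ETL platform; mentor two junior engineers.",
    "reporting_structure": "Reports to Director of Data",
    "location": "Remote (US)",
}

body = json.dumps(sample_requisition)

# To actually fire the test (requires the `requests` package and network access):
# import requests
# resp = requests.post(WEBHOOK_URL, data=body,
#                      headers={"Content-Type": "application/json"})
# print(resp.status_code, resp.text)
```

Sending this once lets Make.com learn the payload structure, after which every field becomes mappable in downstream modules.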


Step 3 — Map and Enrich Data Variables

With the trigger firing cleanly, add data transformation modules to normalize and enrich the raw input before it reaches the prompt builder.

Normalization tasks to complete at this step:

  • Standardize seniority labels: Map “Sr.”, “Senior”, “Sr” to a single value (“Senior”) so the AI receives consistent language.
  • Clean skills lists: If skills arrive as comma-separated text, use a Make.com™ text parser or array aggregator to convert them to a clean bulleted list format for prompt injection.
  • Append static context: Use a Make.com™ Set Variable module to store your company brand voice guidelines, tone instructions, and a short example JD excerpt. These static blocks get appended to every prompt run — store them once here, not inside the prompt module.
  • Flag incomplete records: Add a router module that checks for required fields. If any required field is empty, branch to a notification step that sends a completion request back to the submitter. Only complete records proceed to the AI step.

This step is where you eliminate the variability that produces inconsistent AI output. Asana’s Anatomy of Work research identifies context-switching and unclear inputs as primary drivers of knowledge worker inefficiency — the same principle applies to AI models. Clean inputs produce focused outputs.
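The normalization rules above are simple enough to sketch in a few lines. In Make.com these live in text parser and mapping modules; the Python below is only an illustration of the logic, with an assumed (incomplete) label map you would extend for your own org:

```python
import re

# Map the label variants hiring managers actually type to one canonical value.
# Extend this map with whatever variants appear in your intake data.
SENIORITY_MAP = {
    "sr": "Senior", "sr.": "Senior", "senior": "Senior",
    "jr": "Junior", "jr.": "Junior", "junior": "Junior",
    "mid": "Mid-level", "mid-level": "Mid-level",
}

def normalize_seniority(raw: str) -> str:
    """Collapse label variants so the AI always sees consistent language."""
    return SENIORITY_MAP.get(raw.strip().lower(), raw.strip())

def skills_to_bullets(raw: str) -> str:
    """Turn comma- or semicolon-separated skills into a clean bulleted list."""
    skills = [s.strip() for s in re.split(r"[,;]", raw) if s.strip()]
    return "\n".join(f"- {s}" for s in skills)
```

The canonical-value map is the piece teams most often skip, and it is exactly the variability that later shows up as "the AI writes Senior roles inconsistently."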


Step 4 — Engineer the Dynamic Prompt

Prompt engineering is the highest-leverage step in this entire workflow. A well-constructed prompt produces a draft that needs light editing. A weak prompt produces output that takes longer to fix than writing from scratch.

Structure your prompt with two components:

System prompt (static, stored in Set Variable):

  • Role context: “You are an expert HR writer specializing in inclusive, skills-based job descriptions.”
  • Tone and brand voice guidelines (paste your documentation from Step 3).
  • Hard constraints: “Use gender-neutral language throughout. Avoid corporate jargon. Write at a 10th-grade reading level. Do not include salary ranges unless explicitly provided.”
  • Compliance instructions: “Do not include any language that specifies or implies preference for age, physical ability, or national origin.”
  • Output format: Specify the exact sections you want — About the Role, Key Responsibilities, Required Qualifications, Preferred Qualifications, What We Offer — and their order.

User prompt (dynamic, assembled by Make.com™ text aggregator):

  • Inject all normalized variables from Step 3: {{job_title}}, {{department}}, {{seniority_level}}, {{required_skills}}, {{key_responsibilities}}, {{reporting_structure}}, {{location}}.
  • Include any optional enrichment fields if present (team size, tech stack, growth trajectory).
  • Close with: “Write a complete job description using the structure above. Length: 400–600 words.”

Use Make.com™’s text aggregator module to concatenate the dynamic variables into the user prompt string. Map the system prompt and user prompt as separate fields in the AI call module — do not merge them into one string.
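To show how the two components fit together, here is a sketch of the assembled prompts. The wording condenses the constraints listed above; in Make.com the system prompt lives in the Set Variable module from Step 3 and the user prompt is built by the text aggregator, so this Python is illustrative only:

```python
# Static system prompt (stored once, in the Step 3 Set Variable module).
SYSTEM_PROMPT = """You are an expert HR writer specializing in inclusive, skills-based job descriptions.
Use gender-neutral language throughout. Avoid corporate jargon. Write at a 10th-grade reading level.
Do not include salary ranges unless explicitly provided.
Do not include any language that specifies or implies preference for age, physical ability, or national origin.
Output sections, in order: About the Role, Key Responsibilities, Required Qualifications, Preferred Qualifications, What We Offer."""

# Dynamic user prompt (assembled per run from the normalized Step 3 variables).
USER_PROMPT_TEMPLATE = """Job title: {job_title}
Department: {department}
Seniority: {seniority_level}
Required skills:
{required_skills}
Key responsibilities: {key_responsibilities}
Reporting structure: {reporting_structure}
Location: {location}

Write a complete job description using the structure above. Length: 400-600 words."""

def build_user_prompt(record: dict) -> str:
    """Fill the template from a normalized role record."""
    return USER_PROMPT_TEMPLATE.format(**record)
```

Keeping the template and the record separate mirrors the system/user split in the AI call: constraints never vary, inputs always do.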

This is also the right moment to think about building ethical AI workflows for HR. Bias constraints in the system prompt are far more effective than post-production editing — they prevent the problem rather than catching it.


Step 5 — Call the AI Model

Add an HTTP module or the native OpenAI module (if available in your Make.com™ plan) to send the assembled prompt to your generative AI endpoint.

Configuration checklist:

  • Model: GPT-4-class model recommended for nuanced, constraint-following output. Smaller models frequently ignore multi-constraint system prompts.
  • Temperature: Set between 0.5 and 0.7. Lower values produce more consistent, less creative output — appropriate for compliance-sensitive content. Higher values introduce tonal variety you may not want.
  • Max tokens: Set to 900–1,200 to accommodate a 400–600 word JD with headings and formatting.
  • Error handling: Add a Make.com™ error handler on this module. If the API call fails (timeout, rate limit), route to a notification step rather than silently dropping the request.

Capture the AI response body and parse the text content from the JSON response. Store it as a scenario variable for use in subsequent steps.
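For teams using the raw HTTP module, here is a sketch of the request body per the checklist above and the response parsing. The model name is an assumption (use whatever GPT-4-class model your account offers); the network call itself is left as a comment since it is the HTTP module's job in Make.com:

```python
def build_ai_request(system_prompt: str, user_prompt: str) -> dict:
    """Request body for the OpenAI Chat Completions endpoint, per the checklist."""
    return {
        "model": "gpt-4o",      # assumed model name; any GPT-4-class model works
        "temperature": 0.6,     # middle of the recommended 0.5-0.7 band
        "max_tokens": 1100,     # room for a 400-600 word JD with headings
        "messages": [
            {"role": "system", "content": system_prompt},  # static constraints
            {"role": "user", "content": user_prompt},      # dynamic role data
        ],
    }

def extract_draft(response_json: dict) -> str:
    """Pull the draft text out of a Chat Completions response body."""
    return response_json["choices"][0]["message"]["content"]

# The HTTP module POSTs this body to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <API key>" header, error handler attached.
```

Note the two messages stay separate, matching the instruction above not to merge system and user prompts into one string.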

This is the one step in the entire workflow where AI is doing the work. Every other step is deterministic Make.com™ logic — and that is intentional. For a deeper look at how this fits the broader architecture, see the discussion of AI candidate screening workflows that follow the same structure-before-intelligence principle.


Step 6 — Route to Human Review Gate

No AI-generated job description should publish without a human seeing it first. This is not a hedge — it is a system design requirement.

Add a router module after the AI response is captured. The “approved” path proceeds to Step 7. The “review required” path (which is the default for every first run) sends the draft to a reviewer via:

  • Slack: Post a formatted message to a dedicated #jd-review channel with the draft text and two action buttons — Approve or Request Revision.
  • Email: Send the draft as an HTML-formatted email to the hiring manager and HR reviewer. Include a reply-to address or approval link.
  • ATS staging queue: If your ATS supports draft/staging status, push the JD directly into a staging requisition for in-system review.

For the “Request Revision” branch, route to a notification that captures the reviewer’s feedback and either re-prompts the AI with the additional context or flags the requisition for manual drafting. Do not loop the AI revision more than twice automatically — after two attempts, escalate to manual.
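The routing rule above, including the two-revision cap, reduces to a small decision function. In Make.com this is a router plus an incremented counter variable; the sketch below just makes the branch logic explicit, with assumed branch names:

```python
MAX_AI_REVISIONS = 2  # after two automatic re-prompts, escalate to manual drafting

def route_review(decision: str, revision_count: int) -> str:
    """Decide the next branch after a reviewer responds.

    decision: "approve" or "revise".
    revision_count: AI revisions already attempted for this requisition.
    Returns one of "publish", "revise_with_ai", "escalate_manual".
    """
    if decision == "approve":
        return "publish"
    if revision_count < MAX_AI_REVISIONS:
        return "revise_with_ai"
    return "escalate_manual"
```

Hard-coding the cap matters: without it, a draft the AI cannot fix loops indefinitely, burning API spend and reviewer patience.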

Gartner research on HR technology adoption consistently finds that human-in-the-loop gates are the primary trust-building mechanism for AI-assisted processes. Build the gate in from day one, not as an afterthought.


Step 7 — Publish to Destination and Archive

Once the reviewer approves, the final branch of the scenario handles publishing and archiving simultaneously.

Publishing options (configure based on your stack):

  • ATS: Update the requisition record status to “Active” and populate the JD field with the approved text.
  • Google Drive: Create a new Google Doc in a designated folder, using your standard JD template and populated with the approved content.
  • Job board API: If your ATS does not handle distribution, use Make.com™’s HTTP module to POST directly to job board APIs that support programmatic posting.
  • Slack/Teams notification: Send a confirmation to the hiring manager that their JD is live, with a link to the published posting.

Archive every approved JD — including the prompt inputs that generated it — to a Google Sheet or database table. This archive becomes the foundation for fine-tuning your prompt over time and, eventually, for training a custom model on your organization’s specific JD style. Every approved JD is a labeled training example.
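One archive row should pair the prompt inputs with the approved output. Here is a sketch using a local CSV; in Make.com the equivalent is a Google Sheets "Add a Row" module, and the column names are assumptions you can rename freely:

```python
import csv
import json
from datetime import datetime, timezone

# Assumed archive layout: one row per approved JD, inputs plus output together.
ARCHIVE_COLUMNS = ["approved_at", "job_title", "department", "seniority_level",
                   "prompt_inputs_json", "approved_jd"]

def archive_row(record: dict, approved_jd: str) -> list[str]:
    """One labeled training example: the normalized inputs plus the approved draft."""
    return [
        datetime.now(timezone.utc).isoformat(),
        record.get("job_title", ""),
        record.get("department", ""),
        record.get("seniority_level", ""),
        json.dumps(record),   # full inputs, so the example is reproducible later
        approved_jd,
    ]

def append_to_archive(path: str, row: list[str]) -> None:
    """Append a row to a CSV archive (Google Sheet row in the no-code version)."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(row)
```

Storing the full input record as JSON, not just the output, is what makes each row usable as a training pair later.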

For context on the downstream ROI this workflow generates, the analysis of ROI of Make.com™ AI workflows in HR breaks down the cost-per-hire implications of faster, more consistent job postings. And for teams thinking about how JD automation connects to the full recruiting pipeline, reducing time-to-hire with Make.com™ AI automation covers the end-to-end sequence.


How to Know It Worked

Measure these four signals in the first 30 days after launch:

  1. Draft cycle time: Time from intake form submission to approved JD. Target: under 24 hours for standard roles, down from the typical 3–5 day manual cycle.
  2. Revision rate: Percentage of AI drafts approved on first review without change requests. A well-engineered prompt should hit 70%+ first-pass approval within 30 days of iteration.
  3. Reviewer feedback quality: Track what reviewers are changing. If corrections cluster around the same issue (tone, a specific section, a constraint the AI ignores), update the system prompt — do not ask reviewers to fix it every time.
  4. Applicant relevance signal: After 60–90 days, compare applicant-to-interview conversion rates for automated JDs versus historical manual JDs for comparable roles. Tighter, skills-based language typically improves this ratio.
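If you archived drafts as suggested in Step 7, the revision-rate signal falls straight out of that data. A minimal sketch, assuming each archived draft carries a revision count and an approval flag:

```python
def first_pass_approval_rate(drafts: list[dict]) -> float:
    """Share of drafts approved with zero revision requests (target: 0.70+).

    Each draft dict is assumed to carry "revisions" (int) and "approved" (bool).
    """
    if not drafts:
        return 0.0
    first_pass = sum(1 for d in drafts if d["approved"] and d["revisions"] == 0)
    return first_pass / len(drafts)
```

Run this weekly during the first 30 days; a flat or falling rate tells you the system prompt needs another iteration before the workflow scales.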

Common Mistakes and How to Avoid Them

Mistake: Trying to handle all role types with one static prompt.
Fix: Build prompt variants for different role families — technical, operational, leadership — using Make.com™’s router to select the right system prompt template based on the department field.
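The router logic for that fix is a lookup from department to role family to system prompt. The department names and prompt placeholders below are hypothetical; the point is the safe default when a department is unmapped:

```python
# Hypothetical mapping from the department field to a role family.
ROLE_FAMILY_BY_DEPT = {
    "Engineering": "technical", "Data": "technical", "IT": "technical",
    "Operations": "operational", "Finance": "operational", "HR": "operational",
    "Executive": "leadership",
}

# Placeholder variants; in Make.com each is a Set Variable value
# selected by a router branch.
SYSTEM_PROMPTS = {
    "technical":   "...system prompt tuned for technical roles...",
    "operational": "...system prompt tuned for operational roles...",
    "leadership":  "...system prompt tuned for leadership roles...",
}

def select_system_prompt(department: str) -> str:
    """Pick the prompt variant for a department, defaulting to operational."""
    family = ROLE_FAMILY_BY_DEPT.get(department, "operational")
    return SYSTEM_PROMPTS[family]
```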

Mistake: Skipping the incomplete-record gate (Step 3).
Fix: Always validate required fields before the AI call. Thin inputs produce thin drafts — and reviewers who distrust thin drafts stop using the workflow.

Mistake: Setting temperature too high for compliance-sensitive content.
Fix: Keep temperature at or below 0.7 for JD generation. Creative variation is a liability, not a feature, in regulated employment content.

Mistake: Publishing without a reviewer because “the AI is good enough.”
Fix: The human gate is not about AI quality — it is about legal accountability. SHRM data consistently shows that job description language is one of the most scrutinized elements in employment discrimination claims. A 30-second review is not optional overhead; it is risk management.

Mistake: Not archiving approved drafts with their inputs.
Fix: Every approved JD you discard without archiving is a lost training signal. Parseur’s Manual Data Entry Report documents that organizations lose an average of $28,500 per knowledge worker per year to low-value processing tasks — archiving is the step that lets this workflow compound its value over time instead of resetting every run.


What Comes Next

Job description automation is one module in a complete AI-assisted recruiting pipeline. Once this workflow is stable, the natural next build is connecting the published JD to intelligent HR communications automation — so that the same role data that generated the JD also powers candidate outreach, interview confirmation, and status update messaging through the same Make.com™ scenario architecture.

For teams looking to extend this further without writing code, customizing AI models for HR without coding covers how to adapt AI model behavior — including fine-tuning on your archived JD library — using Make.com™’s no-code interface.

The architecture described here — deterministic data handling, AI at one judgment point, human gate before output — is the same pattern that scales across every HR workflow. Build it right once and you have the template for every subsequent automation in your stack.