
TA Metrics Glossary: KPIs and AI Recruitment Definitions
Talent acquisition professionals face a terminology problem. Vendors describe their products using the same words — AI screening, skills matching, bias auditing — to mean fundamentally different things. KPIs get reported without agreement on what they measure. Teams debate tool performance without shared definitions. This glossary closes that gap. Every term below is defined precisely, with direct notes on how AI and automation interact with that concept, so your team and your leadership speak the same language when evaluating tools, benchmarking performance, and building the case for change.
These definitions support the broader HR AI strategy roadmap for ethical talent acquisition — which establishes the sequencing principle this glossary assumes: automate deterministic steps first, apply AI only at the probabilistic judgment moments where rules break down.
Core Talent Acquisition Terms
These foundational terms define the function itself before any technology layer is added.
Talent Acquisition (TA)
Talent acquisition is the strategic, ongoing function of identifying, attracting, assessing, and hiring the people an organization needs to execute its business plan. It is not synonymous with recruiting. Recruiting is a transaction — fill the open role. Talent acquisition is a continuous capability — build the pipelines, relationships, and employer brand that make future hiring faster and higher quality.
- Scope: Workforce planning, employer branding, candidate relationship management, sourcing strategy, selection design, and onboarding hand-off.
- AI interaction: AI tools impact TA at the sourcing (candidate identification), screening (resume parsing and scoring), and analytics (pipeline forecasting) layers. They do not replace the strategic decisions around role design, compensation positioning, or culture definition.
- Common misconception: “TA is just HR with a fancier name.” The distinction matters for budget allocation — TA investment in pipeline and brand yields compounding returns; transactional recruiting spend resets each cycle.
Requisition (Req)
A requisition is a formally approved request to hire for a specific role, including job title, department, compensation band, reporting structure, and hiring manager. The requisition is the trigger event for most TA KPI clocks.
- Time-to-fill and time-to-hire both start at req approval, not at job posting.
- Req quality — how precisely a role is defined — is a stronger predictor of hiring outcome than any downstream AI tool. Vague reqs produce vague results regardless of screening sophistication.
Hiring Funnel
The hiring funnel is the sequential stages a candidate moves through from initial awareness to hire: Sourced → Applied → Screened → Assessed → Interviewed → Offered → Hired. Each stage has a conversion rate. AI tools affect conversion rates at the screening and assessment stages most directly.
- Funnel analysis reveals where qualified candidates drop — whether that’s a screening filter that’s too aggressive, an interview process that runs too long, or an offer stage where compensation misaligns with market.
- McKinsey research on talent strategy identifies funnel conversion optimization as one of the highest-leverage points for reducing cost-per-hire without reducing candidate quality.
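To make the stage math concrete, here is a minimal sketch of funnel conversion analysis, using hypothetical stage counts in place of a real ATS export:

```python
# Funnel conversion analysis. Stage counts below are hypothetical;
# in practice they come from an ATS pipeline report.
FUNNEL_STAGES = ["Sourced", "Applied", "Screened", "Assessed",
                 "Interviewed", "Offered", "Hired"]

def conversion_rates(counts: dict[str, int]) -> dict[str, float]:
    """Return the conversion rate from each funnel stage to the next."""
    rates = {}
    for prev, curr in zip(FUNNEL_STAGES, FUNNEL_STAGES[1:]):
        rates[f"{prev} -> {curr}"] = counts[curr] / counts[prev] if counts[prev] else 0.0
    return rates

counts = {"Sourced": 1200, "Applied": 480, "Screened": 180,
          "Assessed": 90, "Interviewed": 45, "Offered": 12, "Hired": 10}
for transition, rate in conversion_rates(counts).items():
    print(f"{transition}: {rate:.0%}")
```

An unusually low rate at any single transition is the diagnostic signal: a weak Screened → Assessed rate points at the screening filter, a weak Offered → Hired rate points at the offer stage.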
Key Performance Indicators (KPIs)
KPIs are quantifiable measures that evaluate whether the talent acquisition function is achieving its objectives. These are the metrics your AI talent acquisition scorecard should track, with precise definitions to prevent measurement drift.
Time-to-Hire
Time-to-hire measures the elapsed calendar days from the moment a candidate enters the active pipeline (first application or sourced contact) to the moment that candidate accepts an offer. It reflects the speed of the screening and decision process.
- Do not confuse with time-to-fill: Time-to-fill starts at req approval and includes sourcing time. Time-to-hire starts when a specific candidate is in play.
- APQC benchmark: Top-quartile organizations achieve time-to-hire under 20 days; median is approximately 40 days.
- AI impact: Automated screening, scheduling automation, and AI-driven shortlisting eliminate the wait-state bottlenecks that inflate time-to-hire — the hours a candidate sits in “under review” while a recruiter’s inbox fills up.
- Caution: Optimizing time-to-hire in isolation produces faster bad hires. Pair it with quality of hire to ensure speed improvements are not coming at the expense of fit.
Time-to-Fill
Time-to-fill measures the elapsed days from requisition approval to offer acceptance. It includes sourcing time that time-to-hire excludes, making it the relevant metric for workforce planning and business operations impact.
- A role left unfilled costs the business in lost productivity and team strain. SHRM research documents this ongoing impact, and Forbes composite estimates put the cost of an unfilled professional role above $4,000 per month.
- AI sourcing tools and talent pipeline automation reduce time-to-fill at the front of the funnel by surfacing pre-qualified candidates before a req opens.
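Because the two clocks are so often confused, a small sketch with illustrative dates shows how time-to-fill and time-to-hire diverge for the same hire:

```python
from datetime import date

# Illustrative dates; in practice these come from your ATS event log.
req_approved   = date(2024, 3, 1)    # requisition approved -> time-to-fill clock starts
first_contact  = date(2024, 3, 18)   # candidate enters pipeline -> time-to-hire clock starts
offer_accepted = date(2024, 4, 12)   # both clocks stop at offer acceptance

time_to_fill = (offer_accepted - req_approved).days    # includes sourcing time
time_to_hire = (offer_accepted - first_contact).days   # candidate-in-pipeline only

print(f"Time-to-fill: {time_to_fill} days")   # 42
print(f"Time-to-hire: {time_to_hire} days")   # 25
```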
Cost-per-Hire
Cost-per-hire is the total recruiting expenditure divided by total hires in a period. It includes advertising, agency fees, recruiter compensation allocated to the role, assessment tool costs, and background check fees.
- Formula: (Internal recruiting costs + External recruiting costs) ÷ Total hires (see the sketch after this list)
- SHRM benchmark: Average cost-per-hire across industries is approximately $4,129 (SHRM/Forbes composite).
- AI impact: Automation reduces cost-per-hire by compressing recruiter time-per-requisition, reducing agency dependency, and improving source-of-hire data so spend concentrates on the most productive channels.
- Limitation: Cost-per-hire ignores quality. A low cost-per-hire achieved by cutting assessment rigor often produces high replacement costs within 12 months.
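A minimal implementation of the formula above, with hypothetical quarter figures:

```python
def cost_per_hire(internal_costs: float, external_costs: float, total_hires: int) -> float:
    """(Internal + external recruiting costs) / total hires for the period."""
    if total_hires == 0:
        raise ValueError("No hires in period; cost-per-hire is undefined.")
    return (internal_costs + external_costs) / total_hires

# Hypothetical quarter: recruiter time and tooling internally, agencies and ads externally.
print(f"${cost_per_hire(180_000, 95_000, 62):,.0f}")  # -> $4,435
```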
Quality of Hire
Quality of hire assesses the value a new employee delivers to the organization, typically measured by a composite of early performance ratings, retention at 90 days and 12 months, time-to-productivity, and hiring manager satisfaction scores.
- Gartner identifies quality of hire as the single most important TA metric and the one most organizations measure least rigorously.
- Why it’s hard to measure: It requires connecting ATS data to performance management system data — an integration most HR stacks lack.
- AI impact: AI screening tools that expand signal inputs beyond resume keywords — incorporating skills inference, assessment data, and structured interview scores — demonstrably improve quality of hire by reducing the false-positive rate in shortlisting.
- For more on evaluating whether an AI tool actually moves this metric, see the guide on how to evaluate AI resume parser performance.
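Quality of hire is a composite, so any implementation embeds weighting choices. The sketch below uses illustrative weights that your organization would need to calibrate; nothing about them is standard:

```python
# Illustrative quality-of-hire composite. The weights are assumptions,
# not an industry standard; calibrate them against your own outcome data.
WEIGHTS = {
    "performance_rating":   0.35,  # normalized 0-1 from first review cycle
    "retained_12_months":   0.30,  # 1.0 if still employed at 12 months
    "time_to_productivity": 0.20,  # 1.0 = at or ahead of ramp plan
    "manager_satisfaction": 0.15,  # normalized 0-1 survey score
}

def quality_of_hire(signals: dict[str, float]) -> float:
    """Weighted composite on a 0-100 scale."""
    return round(100 * sum(WEIGHTS[k] * signals[k] for k in WEIGHTS), 1)

print(quality_of_hire({
    "performance_rating":   0.8,
    "retained_12_months":   1.0,
    "time_to_productivity": 0.7,
    "manager_satisfaction": 0.9,
}))  # -> 85.5
```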
Offer Acceptance Rate
Offer acceptance rate is the percentage of job offers extended that candidates accept. A declining acceptance rate signals compensation misalignment, a poor candidate experience during the process, or a competing employer winning late-stage candidates.
- Formula: (Offers accepted ÷ Offers extended) × 100
- Top-quartile organizations maintain offer acceptance rates above 90% (APQC benchmarks).
- AI tools reduce offer rejection by enabling faster time-to-offer (less candidate drop-off during delays) and by surfacing compensation market data earlier in the process so offers land within the candidate’s range.
Source of Hire
Source of hire tracks which channel — job board, employee referral, direct sourcing, social, agency, career site — produced each successful hire. It enables recruiting spend optimization by identifying which sources deliver candidates that convert all the way to hire.
- Source of hire data is only as reliable as the attribution logic in your ATS. Multi-touch sourcing (candidate saw a LinkedIn post, then applied via Indeed, then was referred internally) requires explicit attribution rules or the data misleads.
- AI sourcing tools generate a new “AI-sourced” channel that should be tracked separately to measure platform ROI.
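Attribution rules must be explicit to be auditable. A minimal first-touch sketch, with hypothetical touchpoints:

```python
# Explicit first-touch attribution: the earliest recorded touchpoint wins.
# Swapping min() for max() implements last-touch instead; which rule you
# choose is a policy decision that should be documented.
from datetime import date

touchpoints = [
    ("linkedin_post", date(2024, 1, 5)),
    ("indeed_application", date(2024, 1, 20)),
    ("employee_referral", date(2024, 2, 2)),
]

source_of_hire = min(touchpoints, key=lambda t: t[1])[0]
print(source_of_hire)  # linkedin_post
```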
Candidate Net Promoter Score (cNPS)
Candidate Net Promoter Score measures how likely candidates — both hired and rejected — are to recommend your organization’s hiring process to others. It is a direct measure of candidate experience quality.
- Formula: % Promoters (score 9-10) − % Detractors (score 0-6)
- A negative cNPS from rejected candidates damages employer brand — rejected candidates are prospective customers and future applicants. Automated, timely rejection communications with specific feedback outperform silence or generic “we’ll keep your resume on file” responses.
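A short sketch of the cNPS calculation, with hypothetical survey responses:

```python
def candidate_nps(scores: list[int]) -> float:
    """cNPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters  = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical post-process survey responses from hired AND rejected candidates.
responses = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]
print(f"cNPS: {candidate_nps(responses):+.0f}")  # 4 promoters, 3 detractors -> +10
```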
Diversity Hiring Rate
Diversity hiring rate tracks the proportion of hires from underrepresented groups at each stage of the funnel — application, screen, shortlist, offer, hire. Measuring only at the final hire stage masks where in the funnel representation is lost.
- Funnel-stage diversity measurement exposes whether a screening tool is introducing disparate impact at the shortlisting stage.
- See the complete treatment of this issue in the guide to bias detection strategies for fair AI resume parsing.
AI and Automation Terms in Recruiting
These terms describe the technologies reshaping talent acquisition. Precision matters — vendors use these words inconsistently, and imprecise procurement leads to tools that underdeliver against stated requirements.
AI Resume Parser
An AI resume parser is a software component that converts an unstructured resume document — PDF, Word, plain text, HTML — into structured, queryable data fields: candidate name, contact information, work history entries, job titles, employment dates, skills, education, certifications, and languages.
- What a parser does not do: It does not score, rank, or filter candidates. Those are downstream functions that consume the parser’s structured output.
- Why AI vs. rules-based parsing matters: Rules-based parsers break on non-standard resume formats. AI parsers use natural language processing to infer meaning from context, handling non-linear formats, gaps, and non-English documents more reliably.
- Accuracy dimensions: Field extraction accuracy (did it pull the right data into the right field?), format handling (does performance degrade on creative or multi-column layouts?), and skills taxonomy depth (how granularly does it classify skills?).
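To make "structured, queryable data fields" concrete, here is an illustrative schema for parser output. The field names are assumptions for the sketch; every vendor defines its own:

```python
from dataclasses import dataclass, field

# Illustrative schema only; real parsers expose vendor-specific fields.
@dataclass
class WorkHistoryEntry:
    job_title: str
    employer: str
    start_date: str                 # ISO 8601 date string
    end_date: str | None = None     # None = current role

@dataclass
class ParsedResume:
    name: str
    email: str
    work_history: list[WorkHistoryEntry] = field(default_factory=list)
    skills: list[str] = field(default_factory=list)
    education: list[str] = field(default_factory=list)
    certifications: list[str] = field(default_factory=list)
    languages: list[str] = field(default_factory=list)
```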
Natural Language Processing (NLP)
Natural language processing is the branch of machine learning that enables software to understand, interpret, and generate human language. In recruiting, NLP powers resume parsing, job description analysis, chatbot candidate communication, and semantic skills matching.
- NLP-driven matching can recognize that “managed a team of eight engineers” implies leadership competency even when “leadership” does not appear as a keyword on the resume.
- NLP model quality varies significantly across vendors. The relevant evaluation question is: what training data was used, and does it reflect your industry’s terminology?
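As an illustration of semantic matching, the sketch below uses the open-source sentence-transformers library; the model choice and example strings are assumptions for the demo, not a vendor recipe:

```python
# One way to illustrate semantic matching, using the open-source
# sentence-transformers library (pip install sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose model

requirement = "leadership experience"
resume_line = "managed a team of eight engineers"

# Cosine similarity of the two embeddings. A keyword filter would score
# this pair 0; a semantic model scores it well above unrelated text.
emb = model.encode([requirement, resume_line])
print(float(util.cos_sim(emb[0], emb[1])))
```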
Skills-Based Matching
Skills-based matching evaluates candidates on demonstrated or AI-inferred competencies rather than on job titles, years of experience, or educational credentials. An AI system extracts skill signals from a resume and maps them to a skills ontology derived from the job description.
- Why it matters: Title-based matching excludes qualified candidates whose career paths don’t follow conventional progressions. Harvard Business Review research on skills-based hiring shows that restricting searches to credential proxies eliminates large pools of capable talent.
- Ontology depth: A shallow skills ontology treats “Python” and “Python 3 for data pipelines” as equivalent. A deep ontology distinguishes them. Ontology depth is a meaningful differentiator when evaluating AI matching platforms.
- For a full breakdown of how skills matching works in practice, see the guide on AI skills matching for precision hiring.
Automated Screening
Automated screening applies pre-configured rules to a candidate pool to advance, hold, or decline applications without manual recruiter review. It is a deterministic process — the same input always produces the same output based on the defined criteria.
- Automated screening ≠ AI screening: Automated screening executes rules. AI screening applies probabilistic models to infer fit beyond rule coverage.
- Common automated screening criteria: minimum years of experience, required certifications, geographic eligibility, compensation range acknowledgment.
- The risk: automated screening configured with overly restrictive criteria eliminates candidates a human recruiter would advance. Audit pass-through rates quarterly.
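A minimal sketch of deterministic screening; the criteria and field names are hypothetical:

```python
# Deterministic screening: the same candidate record always yields the
# same decision. Criteria below are hypothetical examples.
RULES = {
    "min_years_experience": 3,
    "required_certifications": {"PMP"},
    "eligible_countries": {"US", "CA"},
}

def screen(candidate: dict) -> str:
    if candidate["years_experience"] < RULES["min_years_experience"]:
        return "decline: experience below minimum"
    if not RULES["required_certifications"] <= set(candidate["certifications"]):
        return "decline: missing required certification"
    if candidate["country"] not in RULES["eligible_countries"]:
        return "decline: not geographically eligible"
    return "advance"

print(screen({"years_experience": 5, "certifications": ["PMP"], "country": "US"}))  # advance
```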
Predictive Analytics in Recruiting
Predictive analytics in recruiting uses historical hiring data and machine learning models to forecast future outcomes — which candidates are most likely to accept offers, which sourcing channels will yield quality hires next quarter, which roles are at risk of extended time-to-fill.
- Predictive models require historical data volume and quality to function. Organizations with fewer than 200 hires per year often lack sufficient data to train reliable predictive models without vendor-supplied benchmarks.
- Forrester research on AI in HR identifies predictive attrition risk modeling as one of the highest-ROI applications when connected to quality-of-hire outcome data.
Candidate Relationship Management (CRM)
A candidate relationship management system is a platform or module that manages ongoing communication and engagement with candidates not yet in an active hiring process — passive candidates, silver-medalists from previous searches, and early-stage pipeline contacts.
- CRM automation enables personalized nurture sequences at scale — a recruiter can maintain relationships with hundreds of candidates simultaneously through automated check-ins triggered by time or life events.
- The distinction from ATS: an ATS manages active applicants in a req process; a CRM manages pre-req and post-req relationships.
AI Ethics and Compliance Terms
These terms define the legal and ethical framework within which AI recruiting tools must operate. Understanding them is not optional — AI bias and compliance exposure represent the primary risk category in AI talent acquisition strategy.
Disparate Impact
Disparate impact occurs when an employment practice that appears neutral on its face produces statistically different outcomes across protected demographic groups — race, sex, national origin, religion, disability, age. It is a legal standard, not an intent standard: a tool causes disparate impact whether or not discrimination was intended.
- Legal basis: Disparate impact doctrine derives from Title VII of the Civil Rights Act and the EEOC’s Uniform Guidelines on Employee Selection Procedures (UGESP).
- The 80% rule: A selection rate for any protected group that is less than 80% of the selection rate for the highest-selected group triggers adverse impact scrutiny under UGESP.
- An AI screening tool trained on historical hiring data from a homogeneous workforce will encode that homogeneity into its outputs — producing disparate impact at scale, faster than any human process could.
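A short sketch of the 80% rule check described above, with hypothetical selection counts:

```python
# Four-fifths (80%) rule check on selection rates by group.
# Counts are hypothetical; use your own funnel-stage data by demographic group.
applicants = {"group_a": 400, "group_b": 250}
selected   = {"group_a": 120, "group_b": 45}

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```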
Bias Audit
A bias audit is a systematic, quantitative evaluation of whether an AI hiring tool produces differential outcomes across demographic groups at each stage of the screening funnel. It is both a risk management process and, in some jurisdictions (New York City Local Law 144), a legal requirement.
- Audit components: Define demographic groups; collect selection rate data by group at each funnel stage; apply the 80% rule; test for statistical significance; document findings and remediation steps.
- Audit frequency: An audit conducted once at implementation is insufficient. Model drift — the degradation of model performance over time as the applicant population changes — requires recurring audits at minimum annually.
- For practical implementation guidance, the complete how-to is covered in the piece on bias detection strategies for fair AI resume parsing.
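For the significance-testing step listed above, one common approach (an assumption here, not a UGESP mandate) is a chi-square test of independence on the advance/decline counts, for example with SciPy:

```python
# Chi-square test of independence on advance/decline counts by group,
# one common way to run the significance step in a bias audit (requires scipy).
from scipy.stats import chi2_contingency

#        advanced  declined
table = [[120, 280],   # group_a (hypothetical counts)
         [ 45, 205]]   # group_b

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Selection rate difference is statistically significant; document and remediate.")
```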
Algorithmic Transparency
Algorithmic transparency is the degree to which an AI system’s inputs, weighting logic, and decision outputs can be examined and explained. In hiring contexts, it determines whether a rejected candidate (or their attorney) can understand why the system advanced or declined their application.
- Explainability is not just a regulatory nicety — it enables internal quality control. A system whose logic cannot be examined cannot be debugged when it produces bad outputs.
- Gartner identifies explainable AI as a top priority for HR technology procurement decisions, particularly for screening and assessment tools.
Data Privacy in Recruiting
Data privacy in recruiting governs how candidate personal data — resume content, assessment scores, interview recordings, background check results — is collected, stored, processed, and deleted. Relevant frameworks include GDPR for European candidates and various US state laws including the California Consumer Privacy Act (CCPA).
- AI tools that store candidate data to train or improve their models may create data retention obligations your legal team needs to review before deployment.
- Candidate consent to AI-driven screening is becoming a regulatory requirement in an increasing number of jurisdictions.
Data Quality Terms
AI recruiting tools amplify whatever data they consume. These terms define the data quality concepts every TA team needs to understand before evaluating any AI platform.
1-10-100 Rule
The 1-10-100 rule is a data quality principle establishing that it costs $1 to verify data at entry, $10 to correct an error discovered downstream, and $100 (or more) to address the business consequences of a bad decision made on uncorrected data. The rule was formalized by Labovitz and Chang and is widely cited in MarTech and data governance literature.
- In talent acquisition, the “100” scenario is a bad hire driven by inaccurate candidate data or corrupted HRIS records. The replacement cost of a mis-hire reaches 50-200% of annual salary in fully-loaded terms.
- AI platforms do not fix upstream data quality. They scale the consequences of bad data faster than any manual process could. Audit your ATS data taxonomy before deployment.
Structured Data vs. Unstructured Data
Structured data is information organized in a predefined format — fields in a database, rows in a spreadsheet, ATS profile fields. Unstructured data lacks inherent organization — resume documents, cover letters, interview recordings, email threads.
- Most candidate information originates as unstructured data (resumes). AI parsers convert it to structured data so it can be queried, filtered, and analyzed at scale.
- The quality of structured output depends on parser accuracy. Errors in parsing propagate through every downstream analysis.
Skills Ontology
A skills ontology is a hierarchically organized, machine-readable taxonomy of skills, competencies, and their relationships. It defines how skills are classified (is “machine learning” a subcategory of “data science”?), synonymized (is “ML” the same as “machine learning”?), and mapped to job roles.
- The breadth and depth of a vendor’s skills ontology are among the most important and least-evaluated differentiators in AI resume parsing and matching platforms.
- A shallow ontology produces false negatives — qualified candidates excluded because their skills are labeled differently than the ontology expects.
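A toy sketch of ontology structure with synonym normalization; real ontologies hold thousands of nodes, and this illustrates only the shape:

```python
# Toy two-level skills ontology with synonym normalization.
# Node names and synonyms are illustrative assumptions.
ONTOLOGY = {
    "data science": {
        "machine learning": {"synonyms": {"ml"}},
        "python for data pipelines": {"synonyms": {"python etl"}},
    },
}

# Flatten to a lookup table mapping every synonym (and canonical name) to its node.
SYNONYMS = {
    syn: skill
    for domain in ONTOLOGY.values()
    for skill, meta in domain.items()
    for syn in meta["synonyms"] | {skill}
}

def normalize(raw_skill: str) -> str | None:
    """Map a raw resume skill string onto a canonical ontology node."""
    return SYNONYMS.get(raw_skill.strip().lower())

print(normalize("ML"))          # -> machine learning
print(normalize("Python ETL"))  # -> python for data pipelines
```

A raw skill that returns None here is the false-negative case described above: a qualified candidate whose terminology the ontology does not recognize.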
Related Terms and Distinctions
ATS vs. CRM vs. HCM
These three platform categories are frequently conflated but serve distinct functions:
- ATS (Applicant Tracking System): Manages active applicants through a specific req process — application receipt, status tracking, interview scheduling, offer management.
- CRM (Candidate Relationship Management): Manages pre-req and post-req candidate relationships — talent pipelines, passive candidates, silver-medalists.
- HCM (Human Capital Management): The broader HR platform managing the employee lifecycle post-hire — HRIS, payroll, performance management, learning and development.
- AI tools operate across all three but integrate differently with each. The most common integration failure point is the ATS-to-HRIS hand-off — the gap where David’s $103K-to-$130K transcription error occurred, producing a $27K payroll overpayment and an eventual attrition event.
Automation vs. AI — The Critical Distinction
Automation is deterministic: the same input always produces the same output based on defined rules. AI is probabilistic: outputs are based on pattern-matching against training data, with inherent uncertainty.
- Automate: Interview scheduling, status email triggers, application receipt confirmations, ATS-to-HRIS data transfer, offer letter generation.
- Apply AI: Skills inference from non-standard resumes, candidate quality scoring, attrition risk prediction, personalized candidate experience optimization.
- The sequencing principle — automate first, then apply AI — is developed in full in the HR AI strategy roadmap. Deploying AI on top of unautomated, manual processes produces amplified chaos, not amplified performance.
Passive vs. Active Candidates
An active candidate is actively searching for a new role — they are applying to job postings, updating profiles, and engaging with recruiters. A passive candidate is currently employed and not actively searching but may be open to the right opportunity.
- McKinsey talent research consistently finds that the highest-impact hires at senior and specialized levels come disproportionately from passive candidate pipelines.
- AI sourcing tools and CRM automation are specifically designed to scale passive candidate engagement — making it economically viable for recruiting teams to maintain relationships with hundreds of passive candidates simultaneously.
Common Misconceptions
These are the definitional errors most likely to produce bad technology decisions or bad benchmarking conclusions.
- “AI screening is objective.” False. AI screening reflects the patterns in its training data. If historical hires were demographically homogeneous, the model learns to replicate that homogeneity. Objectivity requires deliberate bias auditing, not assumed neutrality.
- “Time-to-hire and time-to-fill measure the same thing.” False. Time-to-hire measures candidate-in-pipeline to offer. Time-to-fill measures req-approval to offer. Confusing them produces inaccurate process diagnosis.
- “A lower cost-per-hire means better recruiting performance.” False. Cost-per-hire ignores quality. Reducing cost-per-hire by cutting screening rigor or assessment investment trades a visible metric improvement for invisible downstream replacement costs.
- “AI parsing reads resumes the way a recruiter does.” False. A recruiter reads holistically, making inferences from narrative context, formatting choices, and career trajectory patterns. A parser extracts fields. The two processes are complementary, not equivalent.
- “Once deployed, AI tools don’t need ongoing attention.” False. Model drift, changes in the applicant population, and changes in role requirements all degrade AI tool performance over time. Ongoing monitoring and periodic retraining are requirements, not optional add-ons.
Building Your Measurement Foundation
A glossary is only useful if it drives consistent measurement practice. The definitions above establish the shared language; the next step is connecting them to your operating reality.
Start with the hidden costs of manual screening vs AI to quantify the baseline your team is operating from. Then use the recruitment AI readiness assessment to evaluate whether your data infrastructure can support the AI tools these KPIs require. The metrics are only as reliable as the systems capturing them.
The goal is not a dashboard full of numbers. The goal is a small set of precisely defined metrics — time-to-hire, quality of hire, offer acceptance rate, diversity hire rate — measured consistently, connected to business outcomes, and used to make decisions about where AI tools belong in your workflow and where human judgment remains irreplaceable.