Methodology

How the AI Career Threat Index is calculated

A transparent, replicable scoring of 76 professions on AI displacement risk. Updated quarterly. Open data. Open methodology.

Last reviewed: 2026-05-06 · Version 2026.2 · License: MIT · 76 roles · 10 categories

The question this dataset answers

"How exposed is my profession to AI displacement right now, and where is that exposure heading?" Most discussion of AI and jobs is anecdotal. The Threat Index gives one numeric answer per role plus a structured breakdown of the tasks driving it.

Three-factor scoring

Every role's 0–100 score is a weighted composite of three independent factors. Higher composite = greater AI displacement risk.

  1. Task automation potential (50% weight). The role is decomposed into 8–12 representative tasks. Each task is graded on whether current general-purpose AI (LLMs, multimodal models, vertical AI tools) can perform it with ≥90% reliability for production-grade output. The percentage of tasks meeting that bar drives the factor.
  2. AI tool maturity in the field (30% weight). How developed and widely deployed are the relevant AI capabilities? A task that is automatable in theory but only demonstrated in experimental research is weighted lower than one served by a mature commercial tool.
  3. Industry adoption rate (20% weight). What percentage of employers in the field are actively integrating AI for those task categories? Sourced from labor-market signals, employer surveys, and job-posting analytics.

Each factor is normalized to 0–100, then combined with the weights above. A role scoring 100 on factor 1 with a mature toolset and high adoption sits in "Very High" risk; the same task-automation score with an immature toolset and zero adoption produces a much lower composite.
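The weighting above can be sketched in a few lines of Python (the factor values in the example are hypothetical, for illustration only):

```python
# Weights from the methodology: 50% task automation potential,
# 30% AI tool maturity, 20% industry adoption rate.
WEIGHTS = {"task_automation": 0.50, "tool_maturity": 0.30, "adoption": 0.20}

def composite_score(factors: dict[str, float]) -> float:
    """Combine three normalized 0-100 factor scores into the 0-100 composite."""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

# Full task automation, but immature tooling and zero adoption:
# the composite lands well below 100.
print(composite_score({"task_automation": 100, "tool_maturity": 20, "adoption": 0}))  # 56.0
```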

Risk bands

Composite scores collapse into four risk bands for human readability. The bands are designed to be practically meaningful: each corresponds to distinct career-strategy advice.

  • Low (0–35): Core role functions remain human-driven. AI is a productivity multiplier, not a replacement. Build AI fluency to compound advantage.
  • Moderate (36–50): Specific tasks within the role are automating, but the role's center of gravity holds. Pivot toward higher-judgment specializations.
  • High (51–75): A significant fraction of the role's tasks is already automatable. Career durability requires meaningful re-skilling within 12–24 months.
  • Very High (76–100): Role is structurally exposed. Most candidates should plan a transition to an adjacent or higher-value specialization.
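The band boundaries translate directly to code. A sketch (the function name is ours, not the dataset's):

```python
def risk_band(score: float) -> str:
    """Map a 0-100 composite score to its risk band, per the bands above."""
    if score <= 35:
        return "Low"
    if score <= 50:
        return "Moderate"
    if score <= 75:
        return "High"
    return "Very High"

print(risk_band(42))  # Moderate
```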

Source data

Each scoring component draws on a distinct set of sources reviewed quarterly:

  • Task decomposition: O*NET occupational task lists, real job postings sampled from major job boards, expert input from talent-acquisition leaders.
  • AI capability: Public benchmarks (HumanEval, MMLU, custom domain-specific evals), product launches, primary testing of major LLMs and vertical tools against representative tasks.
  • Industry adoption: Job posting language analysis (% of postings mentioning specific AI tools or skills), employer surveys (BCG, Gallup, World Economic Forum, McKinsey AI adoption reports), public earnings transcripts referencing AI deployments.
  • Salary data: BLS Occupational Employment and Wage Statistics (OEWS), public Glassdoor and Levels.fyi aggregates, employer job postings.
  • Historical scores: Quarterly snapshots maintained from Q1 2025 forward — see the historical trend on each role's page.

Industry modifiers

The headline score for a role is a global composite. Industry modifiers adjust the score for the context where someone actually works. A software engineer's exposure differs in tech (highest AI tooling adoption) vs. government (slowest). Industry modifiers are stored as point adjustments per role × industry pair (e.g., Software Engineer in Tech: +5; in Government: -10).

Industries currently modeled: Tech, Finance, Healthcare, Government, Retail, Manufacturing. The modifier set is published in the dataset's industryModifiers field per role.
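Applying a modifier is simple point addition. A minimal sketch, assuming out-of-range results are clamped to the 0–100 scale (the clamping rule is our assumption, not stated by the methodology):

```python
def adjusted_score(base: float, industry_modifiers: dict[str, int], industry: str) -> float:
    """Apply the per-industry point adjustment to a role's base composite.

    Clamping to 0-100 is an assumption; the published dataset may handle
    out-of-range adjustments differently.
    """
    modifier = industry_modifiers.get(industry, 0)  # unmodeled industries: no change
    return max(0.0, min(100.0, base + modifier))

# Example pairs from the text: Software Engineer in Tech (+5) vs. Government (-10).
mods = {"Tech": 5, "Government": -10}
print(adjusted_score(60, mods, "Tech"))        # 65.0
print(adjusted_score(60, mods, "Government"))  # 50.0
```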

Review cadence

Every role is reviewed quarterly. A review covers:

  1. Re-grading task automation against new model releases (e.g., LLM and vertical-tool launches shipped during the quarter)
  2. Refreshing adoption-rate signals from job posting analytics and surveys
  3. Re-checking salary trend direction
  4. Adjusting risk-band assignments only if composite shifts ≥5 points

Score changes of fewer than 5 points between quarters are absorbed without a band change to avoid noise. Notable mid-quarter movements are documented in the dataset changelog.
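The ≥5-point hysteresis rule can be sketched as follows; this is a simplified illustration, and the production rule may differ:

```python
def band_of(score: float) -> str:
    # Band boundaries from the methodology's risk bands.
    if score <= 35:
        return "Low"
    if score <= 50:
        return "Moderate"
    if score <= 75:
        return "High"
    return "Very High"

def quarterly_band(prev_band: str, prev_score: float, new_score: float) -> str:
    """Re-assign the risk band only if the composite shifted >= 5 points."""
    if abs(new_score - prev_score) < 5:
        return prev_band  # sub-5-point drift is absorbed as noise
    return band_of(new_score)

# A 49 -> 52 move (3 points) keeps "Moderate"; a 49 -> 56 move re-bands to "High".
print(quarterly_band("Moderate", 49, 52))  # Moderate
print(quarterly_band("Moderate", 49, 56))  # High
```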

Defense skills

For each role we publish three defense skills — the highest-leverage capabilities to build given the role's specific exposure pattern. Defense skills are chosen by:

  • Mapping the role's growing tasks to required skills
  • Cross-referencing with current job posting frequency for that skill
  • Filtering for skills with a clear training path (course, certification, or measurable practice)

Each defense skill links to a deeper guide on the site. We deliberately avoid recommending generic "learn AI prompting" — every defense skill is tied to a specific practice or credential.
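The selection criteria above amount to a filter-and-rank step. A sketch with hypothetical skill records (the field names are illustrative, not the dataset's schema):

```python
# Hypothetical skill records for one role; values are invented for illustration.
skills = [
    {"name": "RAG pipeline evaluation", "posting_frequency": 0.12, "training_path": True},
    {"name": "Generic AI prompting", "posting_frequency": 0.30, "training_path": False},
    {"name": "Contract-risk review", "posting_frequency": 0.08, "training_path": True},
]

# Keep only skills with a clear training path, rank by job-posting frequency,
# take the top three -- mirroring the selection criteria above.
defense = sorted(
    (s for s in skills if s["training_path"]),
    key=lambda s: s["posting_frequency"],
    reverse=True,
)[:3]
print([s["name"] for s in defense])  # ['RAG pipeline evaluation', 'Contract-risk review']
```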

Limitations and what this dataset does not claim

  1. This is task-level, not job-level. A high score does not mean the role disappears. It means the role's task mix shifts, and most working professionals must adapt.
  2. It's a leading indicator, not a labor forecast. Hiring slowdowns lag automation potential by 12–36 months as employers, contracts, and regulations adjust.
  3. Geography varies. Adoption rates differ between US, EU, and emerging markets. The headline score reflects global signal; cited industry modifiers help, but truly local data is out of scope.
  4. Sub-specialization matters. "Software Engineer" covers everyone from a junior frontend developer to a principal distributed-systems architect. The score is a center-of-mass; tail roles will be more or less exposed.
  5. It's not a regression model. The methodology is structured-expert-judgment with clear sources and weights — closer to the way the World Economic Forum's Future of Jobs report or PwC's automation studies work than to a black-box ML model. We chose this for transparency over false precision.

Reproducibility and access

The full dataset, including methodology weights, task lists, historical snapshots, and industry modifiers, is open-source under the MIT license.

Citing this work

APA format:

Otterson, J. (2026). AI Career Threat Index. MeritForge AI. https://www.meritforgeai.com/data/ai-career-threat-index/

For other citation formats, brand assets, and press contact, see the press kit.

Questions, corrections, or custom data

Spotted an error? Disagree with a score? Need data sliced a different way for a piece you're writing? Email jeff_otterson@yahoo.com. Methodology improvements get incorporated into the next quarterly review with attribution where appropriate.