Which Jobs Are Most Exposed to AI, and Which Are Least Exposed?
This is the second in a series exploring what the research actually says about AI and your career. Last week: AI displacement is real. This week: where exactly it's hitting.
When most people think about AI taking jobs, they picture physical automation: robots on factory floors, self-checkout kiosks, driverless taxis. That picture is real, and it's accelerating. Tesla, Figure, and several others are pushing the boundaries of humanoid robots that understand language and act in the physical world.
But there's another front that's been getting significant attention due to recent layoffs: AI displacing cognitive work in knowledge-worker roles. And the roles most affected are not the ones people expect. The research shows that AI exposure correlates with higher wages and more education, not less. The displacement frontier has moved upmarket, into professional roles that most people assumed were safe.
Here's how we know, and what it means.
1. The exposure map is upside-down
The most comprehensive assessment of AI's reach across the economy comes from Eloundou et al. (2024). Their team took the entire US occupational taxonomy: 923 roles, 19,265 individual work tasks from the Department of Labor's O*NET database. Human researchers rated every task on a simple question:
“Can an LLM (or software built on top of one) cut the time to complete this task by at least 50% without sacrificing quality?”
The results split into three buckets:
🟢 E0 — Not exposed (55% of all tasks) AI can't meaningfully speed up this task. Physical work, in-person judgment, hands-on care.
🟡 E1 — Directly exposed (28% of all tasks) A chatbot-style interface alone cuts the time in half. Writing, translating, summarizing, drafting emails.
🔴 E2 — Exposed via applications (16% of all tasks) An LLM alone isn't enough, but software built on top of it could. Searching knowledge bases, data-driven recommendations, database maintenance.
About 44% of all work tasks have some level of AI exposure (E1 + E2).
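The bucket arithmetic is simple: shares of labeled tasks, with "any exposure" as the sum of E1 and E2. A minimal sketch (the task list here is illustrative, mirroring the reported percentages, not the real 19,265-task O*NET dataset):

```python
from collections import Counter

# Illustrative labels in the style of the E0/E1/E2 rubric,
# constructed to mirror the reported 55/28/16 split.
task_labels = ["E0"] * 55 + ["E1"] * 28 + ["E2"] * 16

def exposure_shares(labels):
    """Return the fraction of tasks in each exposure bucket."""
    counts = Counter(labels)
    total = len(labels)
    return {bucket: counts[bucket] / total for bucket in ("E0", "E1", "E2")}

shares = exposure_shares(task_labels)
any_exposure = shares["E1"] + shares["E2"]  # tasks with some AI exposure
```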
This is the opposite of every prior automation wave. Industrial robots hit factory workers. Computerization hit clerical workers. LLMs hit upward.
The most exposed roles (by automation score): correspondence clerks (0.86), interpreters and translators (0.80), court clerks (0.76), medical transcriptionists (0.74), telemarketers (0.73), word processors (0.72), payroll clerks (0.70). All dominated by structured, codifiable cognitive tasks.
The least exposed: roles requiring physical dexterity, manual labor, or real-time interpersonal interaction with a human being physically present.
Takeaway
Unlike every prior automation wave, LLMs hit upward — the most exposed roles are higher-paid and higher-educated. But exposure alone doesn't tell you what happens next.
2. Exposure alone doesn't determine your fate. The mix does.
A Harvard Business School team led by Suraj Srinivasan took the same Eloundou task data and asked:
“Does high exposure actually lead to fewer jobs? Or does it depend on how the exposure is distributed within a role?”
They derived two separate scores from the same dataset. The automation score measures what share of a role's tasks AI can simply do. The augmentation score measures whether a role has a productive mix of AI-doable and AI-proof tasks (roughly 50/50 between exposed and non-exposed). A role with high augmentation is one where AI handles the routine cognitive work while the human focuses on judgment, creativity, or relationships. The role doesn't shrink. It gets restructured.
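The paper's exact scoring isn't reproduced here, but the two-score idea can be sketched. Assume the automation score is the importance-weighted share of AI-doable tasks, and take 4p(1-p) as one plausible balance term for augmentation (it peaks at 1.0 when the mix is exactly 50/50 and falls to 0 at either extreme):

```python
def automation_score(tasks):
    """Importance-weighted share of a role's tasks that AI can do outright."""
    total = sum(t["importance"] for t in tasks)
    exposed = sum(t["importance"] for t in tasks if t["ai_doable"])
    return exposed / total

def augmentation_score(tasks):
    """Peaks when AI-doable and AI-proof work is balanced (~50/50)."""
    p = automation_score(tasks)
    return 4 * p * (1 - p)  # assumed balance term: 1.0 at p=0.5, 0 at the extremes

# Hypothetical role: half of ten equally important tasks are AI-doable.
role = [{"importance": 1.0, "ai_doable": i % 2 == 0} for i in range(10)]
```

Two roles with the same automation score of, say, 0.5 would both max out this augmentation measure, while a role at 0.9 automation scores low on augmentation despite high exposure.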
To test this, they analyzed nearly the entire universe of US job postings (2019 through March 2025) using a causal design built around ChatGPT's launch in November 2022. Before that date, hiring for both groups followed parallel trends. After that date, they diverged: postings for automation-prone roles contracted while augmentation-prone roles grew.
The skill requirements diverged too. In automation-prone roles, total required skills per posting dropped 24% and new skills dropped 38%. Jobs are getting simpler. In augmentation-prone roles, required skills grew 15% and new skills grew 17%. Jobs are getting more complex, demanding AI literacy alongside existing domain expertise.
What does "augmentation-prone" actually look like? Clinical neuropsychologists, agricultural engineers, microbiologists, first-line supervisors of police, arbitrators and mediators. These are roles with a rich task mix: some work AI handles well (data analysis, report drafting), other work that requires physical presence, expert judgment, or interpersonal engagement that AI can't touch.
“Two roles with the same total AI exposure can have opposite outcomes. What matters is whether your exposure is concentrated or mixed.”
Takeaway
The question isn't how much of your work AI can do. It's whether your role has enough non-automatable work to restructure around. Concentrated exposure leads to contraction. Mixed exposure leads to growth.
Explore the full table below. Search for any role, click to see the task-level breakdown. A few things to note:
- Role is the O*NET occupation category, not a specific job title. Many titles map to the same category (e.g., "Machine Learning Engineer" and "Data Engineer" both map to "Data Scientists").
- Automation is the share of tasks in this role that AI can perform, weighted by task importance. Higher = more replaceable.
- Augmentation measures how balanced the mix is between AI-doable and AI-proof tasks. Higher = more likely to be reshaped rather than replaced.
- Resilience is a 0-100 score (higher = more protected). The label shows the risk tier: Exposed (≤35), Moderate (36-60), or Resilient (61+).
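The tier thresholds described above map directly to code:

```python
def risk_tier(resilience):
    """Map a 0-100 resilience score (higher = more protected) to a risk label."""
    if resilience <= 35:
        return "Exposed"
    elif resilience <= 60:
        return "Moderate"
    return "Resilient"
```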
3. Agentic AI is raising the stakes
Everything above reflects the 2023-2024 picture: AI as a single-task tool. You give it a task, it does the task, it hands control back. The human stays in the loop, deciding what's next, coordinating across steps, handling exceptions.
Starting in 2025, the picture is shifting. Agentic AI systems don't receive a task. They receive a goal. They select tools, execute a multi-step plan, check their own output, and deliver a finished work product without intermediate human handoff.
This matters because prior task-level analyses share a quiet assumption: automation picks off tasks one at a time, and the occupation survives as a coordination shell around the remaining human work. A paralegal loses the document review task, but the role persists because coordinating the work still requires human judgment: which clause to research first, when to escalate, how to handle a client who changes the fact pattern mid-memo.
An agentic system collapses that coordination. It receives the research goal, retrieves case law, drafts the memo, cross-checks citations, and delivers the finished product. The coordination that protected the role is now inside the agent.
Gupta and Kumar (2026) formalized this with the Agentic Task Exposure (ATE) score, which adds a workflow coverage factor to the standard task-level analysis.
“Not just "can AI do this task?" but "can AI do the entire workflow without human handoff?"”
Four things that reduce agentic displacement risk
- Interpersonal engagement (negotiation, counseling, rapport). You can't automate a relationship.
- Regulatory accountability (legal liability, certification, diagnosis). Governance frameworks don't yet support autonomous AI here.
- Physical presence (hands-on work, site inspection). No software completes a physical task.
- Exception handling (crisis response, novel situations, ambiguous judgment). The things outside any standard workflow.
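Gupta and Kumar's formula isn't given here, but the shape of an ATE-style score can be sketched: task-level exposure scaled by workflow coverage, then discounted for each protective factor a role retains. The discount weights below are invented for illustration, not taken from the paper:

```python
# Assumed discount weights for the four protective factors -- illustrative only.
DISCOUNTS = {
    "interpersonal": 0.3,
    "regulatory": 0.3,
    "physical": 0.5,
    "exception_handling": 0.2,
}

def agentic_exposure(task_exposure, workflow_coverage, protections):
    """
    Sketch of an ATE-style score: task exposure scaled by how much of the
    workflow an agent can run end-to-end, discounted per protective factor.
    """
    score = task_exposure * workflow_coverage
    for factor in protections:
        score *= (1 - DISCOUNTS[factor])
    return score

# A credit-analyst-like role (digital, well-bounded workflow, regulatory checks)
# versus a counselor-like role (low coverage, strong interpersonal protection).
analyst = agentic_exposure(0.7, 0.9, ["regulatory"])
counselor = agentic_exposure(0.5, 0.3, ["interpersonal", "exception_handling"])
```

Under these assumed weights the analyst-style role scores several times higher than the counselor-style role, which matches the paper's qualitative finding: well-bounded digital workflows carry the exposure, and protective factors pull it back below the high-risk threshold.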
Applied across 236 roles in five US tech metros, the results shift the displacement frontier further upmarket. The highest-scoring roles (most exposed due to agentic AI) aren't clerical workers. They're credit analysts, market research analysts, financial examiners, sustainability specialists, insurance underwriters, and personal financial advisors. Professional roles with well-bounded digital workflows where every step can be digitized, chained, and executed end-to-end.
But there's an important caveat: no occupation reaches the "high risk" threshold, even by 2030. The top 20 all cluster in a moderate-risk range. Even the most exposed professional roles retain interpersonal, regulatory, or exception-handling tasks that agents can't complete autonomously. In other words, through 2030, expect widespread moderate pressure on professional roles, not mass displacement.
Geography creates a time lag. In the SF Bay Area, 71% of analyzed roles cross the moderate-risk threshold by 2027. In Seattle, Austin, and Boston: 0% by 2027, but they converge to the Bay Area's 2027 levels by 2030. Same roles, same exposure. The only difference is timing, and remote work is compressing that gap.
Takeaway
Through 2030, expect widespread moderate pressure on professional roles, not mass displacement. Agentic AI raises the stakes for roles with well-bounded digital workflows, but interpersonal, regulatory, physical, and exception-handling work remains protected.
4. So where does your role stand?
The research across these three studies points to a consistent picture.
Most exposed: roles dominated by structured cognitive tasks with well-bounded digital workflows. Correspondence clerks, translators, payroll clerks, and increasingly professional roles like credit analysts, market researchers, and financial examiners. As AI evolves from task-level tools to agentic systems that handle entire workflows, the exposure frontier keeps moving upmarket.
Least exposed: roles anchored in physical presence, deep interpersonal engagement, regulatory accountability, or constant exception handling. Healthcare support, trades, first responders, counselors, roles requiring hands-on work or real-time judgment in unpredictable situations.
The middle (where most knowledge workers sit): mixed profiles. Some tasks AI handles well. Others require your judgment, your relationships, your domain expertise. For these roles, the question isn't "will AI affect my job?" It will. The question is: which side of the mix dominates your day?
Takeaway
Your job title doesn't determine your risk. Your task mix does. Most knowledge workers sit in the middle.
Most people don't know their own mix. That's part of why I built Alignment Check. It maps the 19,265 tasks from this research into eight work categories and shows you how your role's time splits across them, each color-coded by AI exposure level. Free, no sign-up, 10 seconds.
Studies referenced:
- Eloundou, Manning, Mishkin & Rock, "GPTs are GPTs: Labor Market Impact Potential of LLMs," Science, 2024
- Chen, Srinivasan & Zakerinia, "Displacement or Complementarity? The Labor Market Impact of Generative AI," Harvard Business School Working Paper, August 2025
- Gupta & Kumar, "Agentic AI and Occupational Displacement: A Multi-Regional Task Exposure Analysis," arXiv, March 2026