The work (to do the work)

Reshape, not replace: what AI is actually changing about our work today.

Michael Eggleton

February 2026

AI's most immediate impact isn't replacing people - it's eating the work beneath the work they were actually hired to do.

Many professionals are rationally tuning it out[1]: the AI conversation is all 'AGI revolution' and 2035 timelines, while their day-to-day reality is that not much has changed. The predictable, repeatable work moved to production lines and software decades ago.

Within the work that remains uniquely human - messy, contextual, shaped by judgment - there are layers. The layer just below the surface, the preparation, the documentation, the process work that enables it, is where the shift is actually happening.

The line: Above and Below

Every role has a version of this. There's the thing you were hired to do: the primary judgment, expertise, or outcome that justifies your seat. And then there's everything underneath it: the preparation, the documentation, the sub-tasks and process work required to get there.

Above the line, and below the line.

Above the line - judgment · expertise
Primary Care Physician: clinical judgment · patient relationship · treatment decisions · reading the room
Software Developer: architecture decisions · product thinking · problem selection · trade-off judgment

Below the line - sub-tasks · process (where the shift starts)
Primary Care Physician: review intake forms · pull up medical history · document the visit · write referral letters · follow-up notes
Software Developer: writing boilerplate · wiring up patterns · applying design consistently · writing tests · formatting docs

The line isn't a permission slip to automate everything beneath it - it's a map for where AI effort makes sense first.

A Primary Care Physician’s above-the-line work is delivering care - listening to a patient, exercising clinical judgment, making decisions about treatment. Below the line is reviewing the intake form, pulling up medical history, documenting the visit, writing referral letters, handling follow-up notes. All necessary, but none of it is the reason they went to medical school.

In my interviews with primary care physicians, the situation was stark. Many referenced "pyjama time": preparing or catching up on documentation at home, unpaid, just to keep up. In their compressed appointments, a significant portion of each visit was spent looking at a screen rather than the patient. It's no wonder the industry has massive burnout and supply problems: long wait times, clinician shortages, and in some regions no available doctors at all. The below-the-line work is crushing the people who do the above-the-line work.

A software developer's above-the-line work is solving problems through technology - understanding what to build, designing architecture, making trade-offs about systems. Below the line is writing boilerplate, wiring up standard patterns, applying design consistently, writing tests, formatting documentation.

Where the time goes: today vs. rebalanced

Above the line: more time for judgment, care, decisions, thinking.
Below the line: administration, surfacing information, documentation, formatting - faster, cheaper, offloaded.

The role doesn't disappear - it changes shape.

Below the line is where AI fits today. Below-the-line work naturally decomposes into specific, bounded sub-tasks: review this, summarise that, generate this. It's exactly what agents are built to do - complete bounded tasks with clear, focused goals. Above-the-line work is the opposite - ambiguous, contextual, shaped by judgment - and it typically rests on a stack of below-the-line tasks to be effective.

What this actually looks like: the clinical scribe

In healthcare, the most visible AI deployment right now is the ambient scribe - a tool that listens to the patient encounter and generates a clinical note. That's a textbook below-the-line task. The documentation was never the point; the care was the point.

The scribe produces a note; the clinician reviews and accepts it. That review-and-accept step is firmly above-the-line work: it requires knowing whether the model captured the right clinical intent, whether it hallucinated a medication that was discussed but not prescribed, whether it correctly attributed a symptom to the presenting complaint rather than a passing mention of a family member's condition. Ambient audio is messy: patients talk over clinicians, context shifts mid-sentence, and the system has to decide what counts as clinically relevant.

The failure mode isn't dramatic - it's subtle and full of individual nuance: a note that looks correct but misrepresents the clinical reasoning. The clinician who rubber-stamps it creates risk, but a care system that doesn't give clinicians enough time to review properly has just moved the below-the-line problem from 'write the note' to 'catch the model's mistakes'.

When it works, though, the shift is real. A clinician who isn't spending twenty minutes per patient on documentation can spend that time on the judgment calls that actually determine outcomes. Our roles don't disappear: they change shape.

Our inflection point: Where does the time go?

The line sits in a different place for every role, seniority, and context. But the question is the same: when below-the-line work gets automated, where does the freed-up capacity go?

When below-the-line work gets automated and nobody rethinks how the time gets spent, it disappears: into another meeting, another administrative task, another layer of process. The documentation got faster, but nothing changed. Or worse, the goalposts move: same people, more expected output, no room to breathe. The efficiency gain gets absorbed by organisational entropy before anyone notices it happened.

This is the default outcome we are seeing today. Most organisations treat this as either a cost saving or a productivity gain: fewer people for the same output, or the same people expected to produce more. Both optimise for the same work, done faster. The better question is: what high-value, above-the-line work wasn't getting done before because there was never enough time? And below the line, what's now possible at a skill and cost threshold that didn't exist six months ago?

The teams getting this right are asking a simple question: where is the line, and are we deliberately moving people above it? From what I've seen building these systems, the difference isn't the technology, it's whether someone sat down and made that decision intentionally. That's not a five-year strategy - it's an essential conversation today.

What else?

Do more, or spend less? When you make below-the-line work cheaper, do you do the same amount of work with fewer people, or do you do more work with the same people? In healthcare: fewer staff, or more patients? In engineering: smaller team, or more features? There isn't a single right answer - but there is a wrong one: not choosing at all.

The line moves. What sits above the line today may be below it in eighteen months. In software engineering, writing code is already crossing the line (but the code was never the job anyway). As models improve, more of what sits above the line will start to become assistable, then automatable. This isn't a reason to panic, it's a reason to keep reassessing where your line is and investing in the capabilities that remain above it. Dario Amodei's conversation on this progression[2] is worth the watch.

The economics are becoming measurable. Craig Hepburn's The Price of Thinking and the accompanying humanortoken.com project are putting real cost data against professional tasks - and the ratios on below-the-line work are already striking. Understanding the gap between what AI costs and what human work costs is going to be a core operational skill, and it's already starting to show up as a new line item that finance teams are having to make sense of.
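The arithmetic behind that new line item is simple to sketch. Here is a minimal illustration of a per-task cost comparison; every number below is a hypothetical placeholder, not a figure from this article or from humanortoken.com:

```python
# Illustrative cost comparison for one below-the-line task
# (e.g. drafting a clinical note). All prices, times, and token
# counts are hypothetical placeholders, not real data.

def human_cost(minutes_per_task: float, hourly_rate: float) -> float:
    """Cost of a human doing the task once."""
    return (minutes_per_task / 60) * hourly_rate

def model_cost(tokens_per_task: int, price_per_million_tokens: float) -> float:
    """Cost of a model doing the task once, before human review."""
    return tokens_per_task / 1_000_000 * price_per_million_tokens

# Hypothetical inputs: 20 minutes of clinician time at $120/hour,
# vs. ~6,000 tokens at $10 per million tokens plus 2 minutes of
# above-the-line review (the review stays human).
human = human_cost(20, 120.0)
model = model_cost(6_000, 10.0) + human_cost(2, 120.0)

ratio = human / model
print(f"human: ${human:.2f}, model+review: ${model:.2f}, ratio: {ratio:.1f}x")
```

The structure matters more than the placeholder numbers: the model's share of the cost is often negligible next to the human review time, which is why where the review burden lands - and whether clinicians get time to do it properly - dominates the real economics.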

Just one thing about 2035... When below-the-line work also serves as the training ground for above-the-line judgment (think: a junior developer who's never written boilerplate, or a medical resident who's never synthesized and hand-written a clinical note), automating it without replacing that learning path hollows out the pipeline. Whether these are skills you learn once or muscles that need continual use is an open question - but it's one worth asking now, not in ten years.

Efficiency doesn't have a great track record. Every major productivity technology promised to free up time for higher-order work - but almost none did. The gains got absorbed into doing the same thing cheaper, not doing something better. The promise of technology was never "same work, fewer people." It's "work that wasn't possible before." (A complex care appointment that never made economic sense; an engineering team that can take on problems previously out of scope.) Cutting headcount is a one-time gain; unlocking work that didn't exist before is a compounding one.

[1] https://www.ben-evans.com/benedictevans/2025/5/25/genais-adoption-puzzle

[2] https://youtu.be/n1E9IZfvGMA?si=NWAh077IG_2hio8Q&t=1013
