So much of the AI conversation is painting aspirational pictures of five or ten years from now. Autonomous agents running businesses. Models that out-reason humans. AGI timelines. It makes for great keynotes. It's not very useful if you're trying to understand the impact today, make decisions this quarter, or work out what's noise and what's signal.
For the vast majority of people, AI registers as little more than annoying background noise they can ignore[1] - after all, they are busy doing their jobs. That's a reasonable response when the headlines are all about revolution while your day-to-day work hasn't changed.
They're not wrong. The reality for most people is that their work is messy, contextual, and human. It's why people are so good at it, and why machines struggle with it. The predictable, repeatable, structured tasks that were easy to systematise were moved to production lines and software decades ago. Factories in the physical world. RPA in the digital world. What's left is the hard stuff.
There's a quieter shift happening underneath the noise. The past few years of proximity to the coalface of AI have shaped how I think about this: watching how large health enterprises and early-stage startups design systems and deploy tools, and what actually happens when AI meets real workflows. Not the pitch deck version - the version where people are trying to get work done on a Tuesday afternoon.
Here's what I keep seeing: AI's most immediate impact isn't replacing people. It's eating the work beneath the work they were actually hired to do.
The line: above and below
Every role has a version of this. There's the thing you were hired to do: the primary judgment, expertise, or outcome that justifies your seat. And then there's everything underneath it: the preparation, the documentation, the sub-tasks and process work required to get there.
I think of this as above the line and below the line.
A primary care physician's above-the-line work is delivering care - listening to a patient, exercising clinical judgment, making decisions about treatment. Below the line is reviewing the intake form, pulling up medical history, documenting the visit, writing referral letters, handling follow-up notes. All necessary. None of it is the reason they went to medical school.
A software developer's above-the-line work is solving problems through technology - understanding what to build, designing architecture, making trade-offs about systems. Below the line is writing boilerplate, wiring up standard patterns, applying design consistently, writing tests, formatting documentation.
The line isn't universal. It depends on role, seniority, and context. But within any given role, most people can draw it instinctively. They know which parts of their day are the real work, and which parts are the work around the work.
Below the line is where AI fits today - there are already strong applications across industries. Below-the-line work naturally decomposes into specific, bounded sub-tasks: review this, summarise that, generate this. That's exactly what agents are built to do - complete bounded tasks with clear, focused goals. Above-the-line work is the opposite: ambiguous, contextual, shaped by judgment - and it typically rests on a mix of below-the-line tasks to be done well.
What this looks like in practice
In healthcare, the most visible AI deployment right now is the scribe - a model that listens to the patient encounter and generates a clinical note. That's a below-the-line task. The documentation was never the point; the care was the point.
But the interesting move isn't just automating the note. It's what happens next. When a clinician isn't spending twenty minutes per patient on documentation, they can spend more time on clinical reasoning, on the patient relationship, on the judgment calls that actually determine outcomes. AI didn't replace the doctor - it freed them to do more medicine.
In product development, the same pattern is playing out. AI tools are writing more and more code - and people keep confusing that with replacing engineers. As Anthropic's Dario Amodei put it recently: writing the code was never the job[2]. The job is knowing what to build, how to architect it, whether it solves the problem, and what trade-offs are worth making. You still need someone who can look at what the model produced and know whether it's right. And when the cost of building drops, things that were never worth building start to make sense: more testing, more variants, more experiments that the roadmap could never justify before.
So what does this mean for us? Below-the-line work gets faster and cheaper. Above-the-line work becomes a larger share of what humans actually do. Our roles don't disappear - they change shape.
Our inflection point
The freed-up capacity is only valuable if it goes somewhere. When below-the-line work gets automated and nobody rethinks how the time gets spent, it disappears - into another meeting, another administrative task, another layer of process. The documentation got faster, but nothing changed. Or the goalposts move - same people, more expected output, no room to breathe.
The teams getting this right are asking a simple question: where is the line, and are we deliberately moving people above it? From what I've seen building these systems, the difference isn't the technology - it's whether someone sat down and made that decision intentionally. That's not a five-year strategy, it's a today conversation.
The promise of technology has always been to help people do their best work. Those of us closest to it right now get to shape whether it does.
[2] https://youtu.be/n1E9IZfvGMA?si=NWAh077IG_2hio8Q&t=1013
What else?
Do more, or spend less? When you make below-the-line work cheaper and faster, do you do the same amount of work with fewer people, or do you do more work with the same people? In healthcare: fewer staff, or more patients? In engineering: smaller team, or more features? There isn't a single right answer... but there is a wrong one, which is not choosing at all.
The line moves. What sits above the line today may be below it in eighteen months. As models improve, some of that judgment work will become assistable, then automatable. This isn't a reason to panic - it's a reason to keep reassessing where your line is and to invest in the capabilities that remain above it.
The economics are becoming measurable. Craig Hepburn's recent piece The Price of Thinking and the accompanying humanortoken.com project are putting real cost data against professional tasks - and the ratios on below-the-line work are already striking. Understanding the gap between what AI costs and what human work costs is going to be a core operational skill, and it's already starting to show up as a new line item that finance teams are having to make sense of.
The human value lives above the line - but it's hard to measure. A doctor reading the room. A builder knowing which problem is actually worth solving. These don't show up in cost comparisons. The organisations that understand this will invest in their people's above-the-line capabilities. The ones that only see the cost ratios below the line will optimise themselves into trouble.