The Price of Speed

AI didn't take away the hard work. It took away the easy work and left only the hard part behind.

There's a promise that runs through every AI keynote, every product video, every LinkedIn post: AI will lighten your load. Less routine work. Less boilerplate. Less mindless grinding. More time for what matters.

That sounds good. It sounds logical. And it's not wrong.

But it's only half the truth.

The other half: Anyone who seriously integrates AI into their daily work doesn't work less. They work differently. And this "differently" is more exhausting than most are willing to admit.

More Output, More Load

The math goes like this: AI handles the implementation. You review, evaluate, steer. The feature that used to take three days is done in half a day. Great. But what happens to the remaining two and a half days?

You fill them. Not with leisure. With the next feature. The next review. The next evaluation.

Before AI, a developer might have worked on two or three topics a day. Implement something, do a code review, have a brief conversation about an architecture decision. In between: stretches of typing where the mind could stay on the same topic. Monotonous, but stable.

With AI, that's gone. Instead: review of a generated module. New topic. Evaluation of an architecture proposal. New topic. Review of a test strategy. New topic. Assessment of a refactoring approach. New topic.

At the end of the day, you've produced ten times the output. And you're exhausted in a way that feels different from a day of intense programming. Not the good exhaustion after eight hours in flow. But the fragmented fatigue after a hundred small evaluations, each of which seems trivial on its own, but in sum empties your head completely.

When Every Answer Creates a New Question

Previously, a developer decided how to implement a feature. One big decision, then execution. Today, the AI generates three variants – and you choose. Then it generates tests – and you check whether the scenarios are correct. Then it suggests a refactoring – and you weigh whether it's worth the effort.

Every answer from the AI opens a new decision. And every decision costs energy. Not much individually. But by four in the afternoon, you've made more micro-decisions than you used to make in a week.

What falls by the wayside isn't the obvious: it's the subtle. The quiet moment where you wonder if the data model really makes sense. The pause where you realize the feature solves the wrong problem. The slow thought that comes precisely because you're not evaluating anything, just dwelling on the problem.

The Loss of Natural Idle Time

Before AI, there were natural downtimes in a developer's day. Waiting for a build. Waiting for a colleague's response. A test running for fifteen minutes. Staring at the code while thinking.

These weren't wasted time. They were a buffer: space for the brain to consolidate what had happened. Many of the best architectural ideas didn't come during implementation but in the gaps.

AI eliminates these gaps. Not maliciously. But effectively. Because when you can keep going, you do. When the next module is only one prompt away, there's no natural break.

Paradoxically, the time we've gained through AI is exactly the time we've lost: the time to think.

The Conversation We're Not Having

In most teams I know, AI is framed as a productivity tool. More output, same time. The metrics confirm this. PRs per week: up. Features per sprint: up. Time-to-deploy: down.

What the metrics don't show: how often someone stares at a screen at the end of the day, unable to form a clear thought. How noticeably the quality of decisions drops in the afternoon. How architecturally relevant questions get waved through because the head is already empty.

We're not having this conversation. Because whoever brings it up sounds like they can't keep up. Like they're not "AI-native" enough. Like the problem sits between the chair and the keyboard.

But the problem isn't individual capacity. The problem is that we've increased the pace without adjusting the rhythm.

What Could Help

I don't have a simple solution. But I have observations about what works:

Deliberate blocks. Not everything the AI generates needs to be reviewed immediately. Those who batch their work – generating in one block, reviewing in the next – reduce context switching.

Less is more. The ability to say "we're not doing that this sprint" becomes more important when everything is technically feasible. Focus isn't a limitation, it's a survival strategy.

Honest retrospectives. Not "how much did we deliver?" but "how sustainable is our pace?" When a team delivers more every sprint but the quality of decisions decreases, something is off.

Time for thinking, not just doing. An hour a day without tools, without prompts, without an IDE. Just thinking about the problem. Feels unproductive. Is the most productive thing you can do.


AI has made us faster. But it hasn't made us more resilient. It has shifted the bottleneck – away from implementation, toward judgment. And judgment doesn't scale like code.

The real price of speed isn't burnout. It's the slow erosion of the ability that matters most in software development: thinking clearly under pressure.

We should talk about this. Before the metrics look great but the people behind them don't.