Wednesday, December 03, 2025

When Every Day Feels Like a Week: The Human Cost of Extreme AI Pace



Lately, my workdays feel like week-long sprints compressed into eight hours.

I’ve got a small “AI studio” running on my laptop – Product Manager, Architect, Developer, Tester – each as a separate worker with its own branch, worktree and backlog slice. They’re fast, polite, and relentlessly curious.
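
For the curious, here’s a minimal sketch of what that per-role setup can look like, assuming plain git worktrees; the role names and paths are illustrative, not my exact configuration:

    # Sketch: one branch + one worktree per worker, so each can edit
    # files without stepping on the others. Names are placeholders.
    import subprocess

    ROLES = ["product-manager", "architect", "developer", "tester"]

    for role in ROLES:
        branch = f"worker/{role}"
        # -b creates the branch and checks it out in a fresh worktree
        subprocess.run(
            ["git", "worktree", "add", "-b", branch, f"../studio/{role}", "main"],
            check=True,
        )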

And they all have questions.

Every few minutes, one of them needs context, a decision, or a trade-off clarified:

  • Should we prioritise member onboarding or payment reconciliation next?

  • Is this validation rule correct for family memberships?

  • Do you want to pay the complexity cost for this abstraction now, or defer it?

  • This test is flaky; do we stabilise it or delete it?

So I’m hat-switching constantly: PM hat, Architect hat, Dev hat, Test lead hat. It’s exhilarating. It’s focused. I get a ridiculous amount done.

And it feels… unsustainable.

This isn’t a technical post. It’s a human one.


The joy of going very, very fast

The pace can feel amazing.

There’s almost always a worker ready with a next step.

The backlog is never “stuck”; someone is always pushing a card forward.

I’m in flow more often – fewer blank-page moments, more “yes, build that”.

It’s easy to tell yourself: “This is my new normal. Look how much I can do when the bots are humming.”

We’re not just shipping a feature; we’re building a whole new way of working: AI handles a lot of the toil and grunt work, I hold the vision, the context, the quality bar and the values, and together we move at a tempo that would once have taken a whole team.

That’s the seductive bit. It feels like cheating time.


The invisible cost: you are the single human bottleneck

Underneath the buzz, there’s a structural issue:

Every AI worker scales horizontally. I don’t.

Spinning up another worker is cheap. Spinning up another “me” is not an option.

All of these workers depend on one scarce resource: my judgement, my attention, my emotional and cognitive bandwidth.

Every question they ask is a tiny context switch:

  • from architecture trade-offs to membership pricing,

  • from UX questions to indexing strategies,

  • from test coverage gaps to roadmap priorities.

None of these are hard on their own. But stacked together, at high frequency, they quietly tax my nervous system.

I end up living in a kind of permanent stand-up meeting, where I’m the only human and the whole team is waiting on my signal:

  • What matters most right now?

  • Is this good enough?

  • Does this align with what I’m trying to create in the world?

AI accelerates the work. It also accelerates the rate at which my own limits get tested.


From engineering manager to manager-of-managers

This is the part that has surprised me the most.

As a manager, I know what it’s like to stay close to the work of a team of, say, ten engineers. At that scale, I can skim most of the PRs, follow key design discussions, have a rough mental model of who is touching what, and catch small issues before they snowball. I’m close enough to the detail that I can “smell” when something is off.

With twenty or thirty engineers, that level of detailed awareness becomes impossible. I have to change roles. I stop being the person who directly tracks all the work, and I start being the person who designs and manages the system that tracks and shapes the work.

That’s exactly what the AI coding experience feels like.

I’m suddenly in a world that looks more like “thirty juniors are all coding at once”:

  • Workers can ship multiple changes in parallel.

  • The volume of diffs, tests and decisions ramps up very quickly.

  • There’s no way I can personally understand every single line they touch.

So the mindset has to shift:

From “I’ll understand all the detail”

to “I’ll understand and improve the checks and balances that govern the detail.”

Concretely, that means thinking like a manager-of-managers:

  • designing guardrails (tests, style, patterns, constraints),

  • deciding which things must be automatic and non-negotiable,

  • deciding where human review is mandatory,

  • accepting that some fraction of change will always be “black box” to me, sampled rather than fully inspected.

The challenge is familiar and very human: how do I manage a huge volume of change, while introducing enough checks to keep things sane, but not so many that the whole delivery machine grinds to a halt?

Classic manager balancing act. Just with AI this time.


The daily improvement loop: tuning the system, not inspecting the gears

There’s another twist: every day has its own improvement cycle.

A typical iteration isn’t just: start workers, answer questions, merge branches, repeat.

There’s also a deliberate meta-layer at the end of the day:

  • looking at what the workers produced,

  • noticing where they got stuck or went off-track,

  • adjusting the context docs and scripts that shape their behaviour.

Crucially, I’m not sitting there poring over every line of code, every prompt, every script. Most of the time, I’m looking only at the outputs of the process:

  • Does this feature actually do what the story promised?

  • Is this test suite giving me the confidence I want?

  • Does this backlog change make the next worker’s life clearer or fuzzier?

Based on that, I tweak the environment:

  • tighten up a bit of system context,

  • change how tasks are picked and locked,

  • refine a worker’s instructions,

  • add or adjust a helper script.
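
As a concrete example of that “picked and locked” tweak, here’s a hypothetical sketch of the kind of protocol involved, assuming a file-per-card backlog; the layout and names are my assumptions, not the studio’s actual mechanism:

    # Hypothetical: each backlog card is a file; a worker claims one by
    # atomically creating a matching .lock file, so exactly one worker wins.
    import os
    from pathlib import Path

    BACKLOG = Path("backlog")  # e.g. backlog/0042-member-onboarding.md

    def claim_next_task(worker: str) -> Path | None:
        for card in sorted(BACKLOG.glob("*.md")):
            lock = card.with_suffix(".lock")
            try:
                # O_CREAT | O_EXCL makes creation atomic across processes
                fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            except FileExistsError:
                continue  # another worker already holds this card
            os.write(fd, worker.encode())
            os.close(fd)
            return card
        return None  # backlog fully claimed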

It feels less like code review and more like operating and tuning a machine. I’m not watching every gear turn. I’m watching how the machine behaves in the world, then nudging its rules, prompts and protocols so tomorrow’s run is a little saner, a little safer, a little more aligned with what I actually care about.

That daily improvement loop is powerful – and it’s another reason the pace feels so intense. I’m doing the work and redesigning the system that does the work, at the same time.


Running fast because it’s joyful… and the risk in that

There’s a particular danger when the pace is driven by joy.

If the workload were obviously miserable, I’d notice the harm quickly.

But this is different. The work is interesting. The tools feel magical. The feedback loop is tight. The sense of progress is real.

That joy can make it harder to see the early warning signs:

  • I start deferring breaks because “I’ll just answer these two more worker questions.”

  • I notice I’m more irritable with actual humans at the end of the day.

  • My attention outside work is still subtly locked on what the workers are doing.

  • The “fun surge” of high pace begins to feel more like compulsion than choice.

You can burn out doing work you love, especially when your collaborators never get tired, never log off, and never stop asking for guidance.

The humanist question here isn’t “How much can I get done?”

It’s “What kind of life does this way of working create for me and the people around me?”


Remember: you are more than throughput

AI workers are built to optimise throughput and efficiency. Humans aren’t.

We’re built for connection and meaning, craft and care, play, rest and recovery, and a sense of “enough for today”.

If I’m not careful, my AI studio can nudge me into a more mechanical version of myself:

  • saying yes to every interesting thread,

  • maximising output over rest,

  • treating my own limits as an obstacle to be engineered around.

That’s the subtle shift from humanist to industrial. Instead of AI serving my life, my values and my relationships, I start unconsciously shaping my life to serve the AI-amplified pace.

That’s not the future I want.


Designing pace as a first-class constraint

If AI gives us leverage, we have to consciously design pace as a constraint, not an afterthought.

Some practices I’m experimenting with:

Daily “speed budget”. Deciding up front: “Today I’ll run at high tempo for this many hours, then intentionally slow down.” When the budget is used, I’m done. Workers can wait until tomorrow.

Hat-switching limits. Grouping decisions by role: a block of time in “Product hat” answering backlog and priorities, then a block in “Architecture hat” reviewing structure and debt, then a hard stop before switching again. AI can queue questions; not everything needs an immediate answer.

Quiet hours from the bots. Times of day where I don’t start new workers, don’t open their logs and don’t answer questions. Humans get this wrong with Slack; it’s even easier to get wrong with AI.

A clear “done for today”. The moment I notice myself thinking, “Just one more small decision for the dev worker,” is often the moment I need to step away.

Values checks, not just velocity checks. Not just “Did we ship?” but “Did I show up today in a way I’m proud of?” and “Did this pace leave room for love, fun and actual magic in my life, not just my codebase?”
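
To make the queueing and quiet-hours ideas concrete, here’s a hypothetical sketch; the hat names and hours are placeholders, not a real protocol from my setup:

    # Hypothetical: workers enqueue questions under a "hat" label instead of
    # interrupting; nothing is surfaced outside the human's answering window.
    from collections import defaultdict
    from datetime import datetime, time

    QUIET_BEFORE = time(9, 0)   # no questions surfaced before 9am...
    QUIET_AFTER = time(17, 0)   # ...or after 5pm

    queue: dict[str, list[str]] = defaultdict(list)

    def ask(hat: str, question: str) -> None:
        queue[hat].append(question)  # workers queue instead of interrupting

    def answering_block(hat: str) -> list[str]:
        now = datetime.now().time()
        if not (QUIET_BEFORE <= now < QUIET_AFTER):
            return []  # quiet hours: the workers can wait until tomorrow
        questions, queue[hat] = queue[hat], []
        return questions  # drain one hat at a time, then a hard stop

    ask("product", "Prioritise member onboarding or payment reconciliation?")
    ask("architecture", "Pay the abstraction's complexity cost now, or defer?")
    print(answering_block("product"))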


A more human future for AI work

I’m still excited about this AI studio model. Multiple workers, each in their own worktree and branch, collaborating on a shared codebase – it’s a glimpse of a new way of building software.

But the next frontier isn’t just more automation or more workers.

It’s more humanity in how we design the work:

  • AI that respects human cadence, not just compute.

  • Protocols that enforce rest and reflection as much as they enforce test coverage.

  • Workflows that scale our creativity without quietly draining our capacity to care.


Running fast because it’s joyful is one of the best feelings in the world. 

The art now is learning how to keep that joy – and the humans – intact over the long run.

From Codex CLI to OpenAI API: Building a Smarter AI Worker in 24 Hours How throttling led to a complete rewrite, cost optimization, and a mo...