AI code is fun until the complexity eats your face
It’s fun, for sure. It’s addictive, for sure. But it’s non-deterministic, and it adds complexity at a rate we’re not equipped to handle.
If you’re using AI coding tools seriously and trying to keep up with the pace of change, you’ll likely recognise a pattern:
- There’s a strong FOMO element driving adoption
- Small things, side quests, are almost free
- The big things start off fun and easy. Then the pain increases. Complexity eats your face.
- Someone online claims there’s a “secret way”. It’s a skill issue. The secret sauce is out there, and you’ll have none of it.
What are we talking about here?
I’m talking about heavily agentic coding work.
The kind where you might have very good system design, very nice requirements, but you’re not writing any of the code. None.
Give the AI the specs and you go into another window. Every now and again you look over, check, adjust a few things. You do that again, and again, and again.
More on the loop than in the loop.
Not my code
You do that “agentic loop” enough times, you end up with something. Something that works.
Most of the time, the coding agents will give you back something roughly in the shape you asked for.
One problem though.
It’s not my code. This ends up being a big problem on multiple levels.
Ownership
One thing I value when working with people is freedom. Having freedom and giving freedom. To do things, to try things, to experiment.
With that freedom comes the responsibility to fix it when things go wrong. I don’t mind people making mistakes as long as they own the outcome.
With AI, that breaks down.
It can’t fix what it broke. Not in the way a developer would. You end up spending mental energy on problems you didn’t create, debugging decisions you didn’t make.
The fast track to productivity becomes a fast track into tech debt. On choices you didn’t even make directly.
The common retort is to use more AI to fix it. That’s mostly pointless: today’s models can’t handle the complexity they generate. The total amount overwhelms them.
Better to just start from scratch.
The promise of “better models” stands as a counterpoint. We’ll see.
I would like this to work; I’m trying to make it work. It isn’t working, though.
Review loops and iteration help with some problems. They don’t fix the problem of massive changesets across large codebases.
How did the system change? Why is this page loading in 2 seconds now? With only a couple of hundred records?
I think this remains the unsolved problem of the AI coding paradigm. It breaks down after a certain level of complexity.
If you don’t know on an instinctive level what complexity is, what it feels like, this point won’t land. And that’s fine.
Mental Models
Even if the previous point gets resolved, even if a new model “solves” programming again: you still didn’t build the system. You can’t reason about it.
Read the Naur paper (Peter Naur, Programming as Theory Building). The code is secondary: an artefact of the mental model of what the system is.
When you haven’t written a single line in the codebase, how can you answer questions about it? How it behaves? Why it behaves the way it does? What will happen if you change X, Y, or Z?
How do you reason about something you have no mental framework for?
This is the simplest, most foundational observation about software development. And yet it’s surprisingly hard for some to accept, especially those for whom the implications are inconvenient.
Even if AI labs build coding agents that can construct complete systems and answer questions about them, then what? You have a rented black box as your business. You know nothing about your own system and are completely at their mercy.
Most companies don’t want to end up there. But the pressure to reduce engineering costs may push many in that direction anyway.
Possible solutions
Honestly, I’m not sure.
Company-wise, fully outsourcing your software development to AI is a risky bet.
For individual engineers? I’d say go deep instead of wide.
It’s popular to celebrate the age of the generalist, and there is some truth to that; as a chronic generalist myself, I’d like to think our time has come. It helps to have a broad base of knowledge.
But what about those massive changesets? The mental model building where you need to do it fast and at scale? That looks like depth to me. Only by going deep can you hope to pull off that mental feat.
The other possible solution: Go even smaller in the size of things. More micro, more serverless, more distributed. The smaller the chunks of work, the better the AI can handle them, and the more you can reason about the results.
I know monoliths were making a comeback. But I’m thinking in the opposite direction now. You’ll need to pay the distributed systems tax. But this complexity was always coming, one way or another. Pick your poison.
The adoption pressure
The push towards full agentic coding isn’t purely organic; it’s often top-down. Decision-makers attend workshops, hear about startups with extraordinary productivity gains, and set mandates. The claims are rarely backed by public, verifiable evidence at scale.
We could take the productivity gains where they naturally exist and leave it at that. But that’s unlikely to happen.
Everyone is side-questing
On the brighter side, I think the complexity problem is exactly why so many developers are focused on side projects and tooling right now.
That’s the fun part of using AI.
Lots of people are building their own workflows (myself included). Focusing on dev tooling. Making tools to make tools. Building the factory to build factories.
It makes sense. The current tools fall apart after a certain point. Developers feel the pain, so they go build better tools.
The result is an explosion of custom setups. Things move so fast that everyone thinks everyone else is behind, so they build their own thing.
Don’t get me wrong, I’m guilty too. It’s fun.
Final thoughts
- There’s a way forward, but it won’t be smooth.
- Keep building your skills.
- Stay grounded.