
AI is now writing a non-trivial percentage of the world’s software.
That part is no longer controversial.
What is controversial is what comes next: a growing share of production systems now include code that nobody has meaningfully read, reasoned about, or truly understands.
This is not a failure of discipline or intelligence. It is a structural shift in how software is built.
And if we misread this shift, the future of software will not be faster or more creative. It will be brittle, opaque, and quietly dangerous.
This essay is not anti-AI.
It is anti-complacency.
For most of software history, development followed a stable cognitive loop:
1. Understand the problem
2. Design a solution
3. Write code
4. Read it back
5. Refactor until it makes sense
AI breaks this loop.
Today, the workflow increasingly looks like this:
1. Describe intent in natural language
2. Receive large blocks of plausible code
3. Run it
4. Patch errors until it works
The center of gravity has shifted from writing to assembling.
This feels like progress. Iteration is faster. Friction is lower. Output looks polished earlier.
But speed hides tradeoffs.
When you do not write something, you are less likely to question its structure.
When you do not read something carefully, you are less likely to understand how it fails.
AI does not remove work. It displaces it.
And most of that work shows up later, when change becomes necessary.
Bad code is not new. The industry has always shipped bad code.
Unread code is new at scale.
AI-generated code often:
- Compiles cleanly
- Looks idiomatic
- Passes basic tests
- Fails silently under edge conditions
- Encodes assumptions no one remembers approving
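To make that concrete, here is a hypothetical sketch. The helper and its flaw are illustrative, not taken from any real model output. It runs, reads idiomatically, and passes a happy-path test, yet one edge case slips through silently:

```python
import time

def fetch_with_retry(fetch, retries=3):
    """Call fetch() until it succeeds, backing off between attempts."""
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:             # assumption: every error is retryable
            time.sleep(2 ** attempt)  # assumption: exponential backoff is wanted
    # After exhausting retries, execution falls through and returns None.
    # Callers expecting data get None and fail far from here -- silently.
```

Nothing here is "wrong" in a way a compiler or a smoke test would catch. The danger is that nobody decided to swallow every exception, or to return None on exhaustion. Those choices simply appeared.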
This is not dangerous because the code is “wrong.”
It is dangerous because it is unowned.
Ownership used to come from authorship. Someone wrote it. Someone remembered why it existed.
Now authorship is diffuse. Responsibility is abstracted away.
When a system fails and no one can confidently explain why it behaves the way it does, you do not have a bug. You have technical fog.
Fog does not cause explosions.
It causes slow, compounding failure.
It is tempting to say this problem hits junior developers first. That is partially true, but incomplete.
The real fault line is incentive alignment.
People under pressure to:
- Ship faster
- Reduce cognitive load
- Demonstrate momentum
- Outsource thinking to tools that appear smarter
are the most likely to accumulate unread code.
That includes juniors, yes.
It also includes startups, understaffed teams, and even experienced engineers operating under growth pressure.
The problem does not appear immediately.
It appears later:
- When onboarding becomes slow or impossible
- When refactors stall because nothing feels safe to touch
- When a small change breaks unrelated systems
- When no one knows what can be deleted without consequences
Unread code does not fail loudly.
It fails when change is required.
Here is the most accurate mental model available today.
AI:
- Writes extremely fast
- Knows syntax perfectly
- Recognizes patterns statistically
- Does not understand intent
- Does not remember past decisions
- Does not feel the cost of complexity
That makes it closer to a junior engineer with infinite stamina than to a staff engineer with judgment.
You would not merge a junior’s work without review.
You should not do that with AI either.
The uncomfortable truth is that as AI improves, its mistakes become more convincing. The code looks right. The structure feels familiar. The confidence is misplaced.
We are past the era where writing code is the slowest part of development.
Verification is.
Reading, understanding, and validating AI-generated code requires focus and experience. In many cases, it takes longer than writing the code manually would have.
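The retry sketch above makes the cost concrete. Verifying even a ten-line helper means probing the paths a demo never exercises. A minimal test (assuming `fetch_with_retry` from the earlier sketch is in scope) shows where the time actually goes:

```python
def test_retry_exhaustion():
    calls = []

    def always_fails():
        calls.append(1)
        raise ConnectionError("upstream down")

    result = fetch_with_retry(always_fails, retries=2)

    assert len(calls) == 2       # the loop really retried
    assert result is not None    # fails: exposes the silent None return
```

Writing that test takes longer than prompting for the helper did. That is the point: the slow part moved.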
This creates a dangerous feedback loop:
1. Teams adopt AI to move faster
2. Verification is rushed to preserve velocity
3. Risk accumulates invisibly
Technical debt is no longer written intentionally.
It is generated automatically.
Understanding code does not mean memorizing every line.
It means:
- Knowing what assumptions are embedded
- Knowing which parts are safe to change
- Knowing where complexity actually lives
- Knowing how the system behaves under stress
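One concrete habit follows from that list. A sketch (the function, names, and policy are illustrative, not prescriptive) of what it looks like to surface embedded assumptions where the next reader will actually see them:

```python
def apply_discount(price_cents: int, percent: float) -> int:
    """Apply a percentage discount to a price held in integer cents."""
    # The assumptions this function encodes, stated as checks
    # rather than left implicit:
    assert price_cents >= 0, "prices are never negative upstream"
    assert 0.0 <= percent <= 100.0, "discounts are bounded by policy"
    return round(price_cents * (1 - percent / 100.0))
```

The checks matter less than the habit: code that states its assumptions can be read, challenged, and safely changed.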
If you cannot explain a system’s behavior without re-prompting an AI, you do not understand the system.
You are renting it.
The claim that “developers will not need to understand code anymore” misunderstands engineering.
Engineering is not transcription.
It is constraint management.
AI removes mechanical effort.
It does not remove responsibility.
In fact, responsibility increases:
- Systems grow larger, faster
- Abstraction layers stack invisibly
- Failure modes multiply
The most valuable developers in the AI era will not be the fastest typists or the best prompters.
They will be the ones who:
- Can reason across layers
- Can simplify aggressively
- Can say no to unnecessary code
- Can read unfamiliar systems quickly
Understanding becomes rarer.
That makes it more valuable, not less.
The industry will not split into “AI users” and “non-AI users.”
Everyone will use AI.
The real divide will be between:
- People who use AI to think better
- People who use AI to avoid thinking
One group will build systems that survive contact with reality.
The other will build impressive demos that collapse under scale.
AI is here to stay. That is not the debate.
The real question is whether we treat AI as:
A force multiplier for understanding
or
A substitute for it
Code has always been a liability masquerading as an asset.
Unread code simply hides that liability more effectively.
The future does not belong to developers who generate the most code.
It belongs to those who can explain what their systems do, why they do it, and what will break when they change them.
AI can help with that.
But only if we keep reading.