The real cost of vibe coding
Vibe coding has a good story. You describe what you want, the AI writes the code, and you ship fast. For individual projects and prototypes, that story often holds. For production systems under pressure, it tends not to.
Jon Wingard's piece in Forbes put the issue directly. Developers using AI to generate code are increasingly working with output they do not fully understand. They can read it, run it, deploy it. But when it breaks in conditions the original prompt did not anticipate, the understanding needed to fix it quickly is not there.
The speed is real. The debt is also real.
The appeal is understandable
Vibe coding is not carelessness. It is a rational response to pressure. Development teams are asked to move fast. AI tools make moving fast easier than it has ever been. The temptation to treat generated code as understood code follows naturally.
The problem is that speed and understanding are different things. A team can ship quickly and accumulate a significant amount of code that no one on the team could explain under examination. That code sits in production. It runs. Until it does not.
Reactive Culture and the speed trap
The Stillness Dividend identifies a pattern called Reactive Culture: organisations that have come to treat speed as the primary measure of performance. In a Reactive Culture, the question asked most often is "how quickly can we get this done?" The question asked least often is "how well do we understand what we are building?"
Both questions matter. In practice, speed pressure consistently crowds out the second one.
Vibe coding in a Reactive Culture is a specific version of this problem. AI makes it possible to generate output faster than understanding can keep pace. The gap accumulates. Technical debt builds. At some point, usually under the wrong kind of pressure, the debt comes due.
The productivity formula in The Stillness Dividend treats speed as a multiplier within a larger equation: Productivity = Quality × Speed × Sustainability. Speed with low quality or low sustainability does not produce high productivity. It produces output that has to be redone.
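The multiplicative shape of the formula is the point: one weak factor drags the whole product down regardless of how strong the others are. A minimal sketch of that arithmetic, using an assumed 0-to-1 scale for each factor (the scale is an illustration, not from the book):

```python
def productivity(quality: float, speed: float, sustainability: float) -> float:
    """Productivity = Quality x Speed x Sustainability.

    Each factor is scored on an assumed 0.0-1.0 scale. Because the
    relationship is multiplicative, a low score on any one factor
    collapses the overall result, no matter how high the others are.
    """
    for name, value in (("quality", quality), ("speed", speed),
                        ("sustainability", sustainability)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be between 0 and 1, got {value}")
    return quality * speed * sustainability

# A fast team shipping code it does not fully understand:
fast_but_fragile = productivity(quality=0.3, speed=0.9, sustainability=0.4)
# A slower team whose work holds up in production:
steady = productivity(quality=0.8, speed=0.6, sustainability=0.8)
```

Here the slower team comes out well ahead (0.384 against 0.108): tripling speed cannot compensate for halving quality and sustainability.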
The question is who decides
One capability AI cannot replace is judgement about what to build in the first place. The Stillness Dividend identifies four human capabilities that become more important as AI takes on more of the technical work: Critical Thinking, Creativity, Empathy, and Judgement.
Judgement is the relevant one here. AI is good at generating solutions to well-defined problems. The decision about which problems are worth solving, and which technical approaches carry acceptable risk, requires human judgement. In vibe coding workflows, that judgement is often compressed or skipped entirely.
The question for any team using AI in development is not whether to use it. It is whether the governance around its use is keeping pace with the speed it enables. Who reviews AI-generated code before it ships? Who is accountable when it fails? Where does human understanding have to exist before output is deployed?
These are not bureaucratic questions. They are the questions that determine whether speed becomes an advantage or a liability.
Three questions worth asking now
Can at least one person on the team explain how the AI-generated code works, without reading from the code itself?
Is there a defined point in your process where human review is required before deployment?
Is there a clear owner for production incidents involving AI-generated code?
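One way to make the second and third questions concrete is a deployment gate that refuses to proceed unless a human review and an incident owner are on record. A minimal sketch, with hypothetical field names (`reviewed_by`, `ai_generated`, `incident_owner`), not any specific CI product's API:

```python
from dataclasses import dataclass

@dataclass
class Change:
    """A changeset awaiting deployment (hypothetical shape)."""
    ai_generated: bool
    reviewed_by: str = ""      # empty means no human has signed off
    incident_owner: str = ""   # who is accountable if it fails in production

def governance_gaps(change: Change) -> list[str]:
    """Return the list of governance gaps; an empty list means the gate passes."""
    gaps = []
    if change.ai_generated and not change.reviewed_by:
        gaps.append("AI-generated code has no human reviewer on record")
    if not change.incident_owner:
        gaps.append("no named owner for production incidents")
    return gaps

# An AI-generated change with no review and no owner fails the gate:
gaps = governance_gaps(Change(ai_generated=True))
```

The design choice worth noting: the gate returns the list of gaps rather than a bare yes/no, so the team sees exactly which governance question it has not answered.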
If the answer to any of these is uncertain, your governance is behind your speed. That gap is manageable now and expensive later.
For a broader look at why leadership readiness determines AI performance, read Why Your Leadership Team Is the Biggest Variable in Your AI Investment.
Stillness Partners works with leadership teams building the governance structures that make AI adoption sustainable. Get in touch.