The Zero Wound
Most AI investments don't fail loudly: they fail quietly, expensively, and in slow motion. Companies buy the tools, integrate LLMs, and hire consultants with decks full of transformation frameworks, yet nothing compounds. The pilots end, the dashboards go dark, and six months later the organization is back where it started: lighter in budget and heavier in cynicism.
The uncomfortable truth is that this isn't a technology problem; it is a knowledge infrastructure problem.
The Trap: System 0 in the Enterprise
In cognitive science, if System 1 is fast, intuitive thinking and System 2 is slow, deliberate reasoning, then System 0 is the unexamined infrastructure of data, patterns, and externalized knowledge that shapes everything downstream. Individuals live inside their System 0, and so do organizations.
The epistemological trap is this: companies assume they understand their own cognitive infrastructure. They assume they know where their real knowledge lives, which workflows actually matter, and how decisions are truly made. When they deploy an AI agent or build a RAG system on top of unverified data, they aren't building on solid ground: they are building on the assumption of solid ground. That isn't a technical failure; it's a System 0 failure.
(The concept of System 0 as an external digital infrastructure that processes information before we even begin to think was formalized by Chiriatti, Riva, et al. in Nature Human Behaviour. What follows is an extension of that framework into organizational strategy.)
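To make that failure mode concrete, here is a minimal sketch of the difference in code. Everything in it is illustrative: the SourceDoc type, the verified flag, and the two toy indexes are assumptions standing in for whatever ingestion pipeline a real RAG stack actually uses.

```python
from dataclasses import dataclass, field

@dataclass
class SourceDoc:
    """A document headed for the retrieval index, with provenance attached."""
    text: str
    owner: str       # who is accountable for this knowledge
    verified: bool   # has a human confirmed it is current and correct?

@dataclass
class NaiveIndex:
    """The System 0 failure mode: index everything, trust everything."""
    docs: list[SourceDoc] = field(default_factory=list)

    def ingest(self, doc: SourceDoc) -> None:
        self.docs.append(doc)  # unverified content silently becomes "ground truth"

@dataclass
class AuditedIndex:
    """The alternative: provenance is checked before a doc can ground answers."""
    docs: list[SourceDoc] = field(default_factory=list)
    quarantine: list[SourceDoc] = field(default_factory=list)

    def ingest(self, doc: SourceDoc) -> None:
        # Unverified knowledge is held for review, not silently retrieved.
        (self.docs if doc.verified else self.quarantine).append(doc)

index = AuditedIndex()
index.ingest(SourceDoc("Refund policy v3 (2024)", owner="legal", verified=True))
index.ingest(SourceDoc("Refund policy draft (2019)", owner="unknown", verified=False))
print(len(index.docs), "grounded;", len(index.quarantine), "awaiting audit")
```

The design point is not the quarantine list itself; it is that verification becomes an explicit, inspectable step instead of an unexamined default.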
What It Actually Costs
We need to quantify what is usually ignored: failed pilots that never scale erode competitiveness and waste budget, even before accounting for the opportunity cost of burned-out talent. Implementations that do not stick create a second wave of hidden costs, including shadow processes that undermine the very efficiency AI was supposed to deliver.
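A back-of-the-envelope model makes the quiet bleed visible. The function and every number below are placeholder assumptions, not benchmarks; the point is the shape of the sum, which most post-mortems cut off after the first term.

```python
def failed_pilot_cost(
    direct_spend: float,          # licenses, consultants, integration work
    team_hours: float,            # internal hours diverted to the pilot
    loaded_rate: float,           # fully loaded hourly cost of that team
    shadow_process_hours: float,  # weekly hours spent working around the output
    weeks_running: int,           # how long the shadow process persisted
) -> float:
    """Sum the visible and the hidden costs of a pilot that never scaled."""
    opportunity_cost = team_hours * loaded_rate
    shadow_cost = shadow_process_hours * loaded_rate * weeks_running
    return direct_spend + opportunity_cost + shadow_cost

# Placeholder figures, purely illustrative:
print(failed_pilot_cost(
    direct_spend=150_000, team_hours=800, loaded_rate=95.0,
    shadow_process_hours=12, weeks_running=26,
))  # -> 255640.0: the direct spend is well under two thirds of the real cost
```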
The wound does not bleed all at once, which is exactly why no one staunches it until the gap with a more cognitively aware competitor has become hard to close.
The Fix: From Tech Audit to Cognitive Audit
Before technology comes diagnosis. This isn't an AI audit in the traditional IT sense, but a cognitive audit: an honest mapping of what an organization actually knows and where that knowledge resides. Is it in structured documents, in people's heads, or in those undocumented "shadow workflows" that exist only because a veteran employee has been doing it that way for a decade?
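One hedged sketch of how such an inventory can start. The Location categories, the feeds_a_decision flag, and the sample entries below are all hypothetical, not a standard audit schema.

```python
from dataclasses import dataclass
from enum import Enum

class Location(Enum):
    STRUCTURED_DOC = "structured document"
    PERSONS_HEAD = "person's head"
    SHADOW_WORKFLOW = "undocumented shadow workflow"

@dataclass
class KnowledgeAsset:
    name: str
    location: Location
    owner: str               # the human who actually holds or maintains it
    feeds_a_decision: bool   # does an AI system or key decision rely on it?

def audit(assets: list[KnowledgeAsset]) -> list[KnowledgeAsset]:
    """Flag knowledge an AI build would silently depend on but cannot see."""
    return [a for a in assets
            if a.feeds_a_decision and a.location is not Location.STRUCTURED_DOC]

inventory = [
    KnowledgeAsset("pricing rules", Location.STRUCTURED_DOC, "sales ops", True),
    KnowledgeAsset("client escalation norms", Location.PERSONS_HEAD, "Dana", True),
    KnowledgeAsset("panel layout heuristics", Location.SHADOW_WORKFLOW, "Marco", True),
]
for asset in audit(inventory):
    print(f"RISK: '{asset.name}' lives in {asset.location.value} ({asset.owner})")
```

The flagged rows are exactly the knowledge an agent or RAG system would depend on without ever being able to retrieve it.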
In recent implementations I supervised, ranging from a design studio to a photovoltaic company, this shift in focus changed the outcome. Before a single line of code was written, we surfaced the implicit knowledge that usually stays hidden. The resulting AI agents aren't "smarter" because of better algorithms: they are more effective because their foundation is honest.
This is the definitive line between AI that works and AI that merely looks like it works.
A New Responsibility for AI Leaders
As AI professionals, self-styled experts included, our primary responsibility is to help organizations see what they actually possess before they build what they think they need. This requires going beneath the surface, past the org charts and the tech stack, to map the Corporate System 0.
Unlike the individual System 0, whose contents operate largely without our conscious consent, a corporate version can be made deliberate. We have the chance to turn an invisible reflex into a transparent, architected advantage.
If your pilots keep stalling, the problem likely isn't the tools: it's that the right questions haven't been asked yet. Strategy isn't about the AI you buy, but about the System 0 you build.