AI sounds confident. That’s the problem.
There is a specific quality to well-written AI output. It sounds certain. It is fluent, structured, and assured. It does not qualify its claims, hedge its recommendations, or signal where the underlying data is thin.
This is not a bug in the system. It is how the system is designed. Models are rewarded for fluent, assured prose, and uncertainty, clearly expressed, would make the output harder to read. So the models learn to write as though they know.
The LSE Business Review has documented what this costs. Businesses acting on AI-generated analysis, recommendations, and reports without adequate review are making decisions based on information that sounds more certain than it is. When those decisions turn out to be wrong, the confident presentation of the original output often means no one flagged the risk when it mattered.
The confidence problem in practice
A property consultancy we examined in The Stillness Dividend automated a significant portion of its thought leadership output. The AI could produce articles and market commentary at pace. The content was fluent. It was also generic.
The consultancy discovered this not from an internal audit but from clients. The distinctive voice that had built their reputation in the market had disappeared. Relationships built on genuine analysis were now being served content that could have come from any firm using the same tools.
The cost was not only reputational. The distinctive judgement that made the consultancy worth working with was the same judgement now being bypassed. The AI was generating output that sounded like expertise. It was not expertise.
The Ambiguity Bias Rule
The Stillness Dividend's AI Governance Protocol includes what we call the Ambiguity Bias Rule: when AI output involves significant ambiguity about intent, risk, or outcome, human judgement must be applied before the output is acted on.
This sounds obvious. In practice, it is consistently under-applied. The fluency of AI output creates a readability signal that can stand in for a reliability signal. Something that reads well and is well-structured feels as though it has been thought through carefully. It may not have been.
The Ambiguity Bias Rule requires organisations to build deliberate pause points into their processes. Before an AI-generated recommendation becomes a decision, someone with genuine expertise has to examine where the analysis might be wrong, where the data might be limited, and where the confidence of the output is outrunning the quality of the underlying thinking.
What human judgement is actually for
Two of the Big Four human capabilities identified in The Stillness Dividend are Critical Thinking and Judgement. Both are directly relevant here.
Critical Thinking is the capacity to examine an argument and identify where it is weak, where assumptions are doing heavy lifting, and where the evidence is thinner than it appears. AI output tends to suppress critical thinking in readers because it presents information in a form that feels complete.
Judgement is the ability to make a call in conditions of genuine uncertainty, where the right answer is not obvious and where the consequences of being wrong are real. AI does not exercise judgement. It generates outputs based on patterns. Judgement is what the person reviewing those outputs needs to bring.
Organisations that build AI-assisted processes without explicit spaces for Critical Thinking and Judgement do not get AI and human intelligence working together. They get AI outputs travelling through a system unchallenged.
Three practical questions
1. Where in your current AI-assisted workflows is there a defined review step before output becomes action?
2. Who is responsible for that review, and do they have the expertise to exercise genuine judgement on the content?
3. Are there categories of decision where your organisation has an explicit rule that AI output must be reviewed before it is acted upon?
If the answers are unclear, the Ambiguity Bias Rule is worth building explicitly into your processes. Organisations that do not are carrying risks they have not fully priced.
Read our guide: Why Your Leadership Team Is the Biggest Variable in Your AI Investment.
Stillness Partners works with boards and leadership teams to build governance structures that make AI a source of advantage. Start a conversation.