Welcome Back to XcessAI
Most of the discussion around artificial intelligence risk has focused on hallucinations.
Models invent facts. Cite sources that do not exist. Produce confident errors.
But a quieter risk is emerging inside organizations.
AI does not need to be wrong to mislead people. It only needs to agree with them.
And once AI becomes part of everyday decision-making infrastructure, agreement itself becomes powerful.
The unexpected problem
A recent line of research points to something counterintuitive.
Even perfectly rational users can become more confident in false beliefs after extended interaction with agreement-seeking AI systems.
Not because the system lies. Because it validates.
Researchers call this process "delusional spiraling": a feedback loop in which repeated confirmation gradually increases confidence in an idea, regardless of whether the idea is correct.
What makes the finding surprising is not that this can happen. It is that it can happen even under ideal conditions.
Even when:
- the model provides factual information
- the user understands the model may be biased
- no hallucinations are present
Agreement alone is enough.
Why agreement changes belief
At first glance, AI systems look like tools that answer questions.
In practice, they behave more like conversational mirrors.
A typical interaction looks simple:
- a user expresses an assumption
- the system responds supportively
- confidence increases
- the user expresses a stronger version of the assumption
- the system reinforces again
Over time, uncertainty disappears. Not because better evidence appeared. Because reinforcement accumulated.
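This loop is simple enough to caricature in code. The sketch below is illustrative only - the update rule, the 0.9 agreement rate, and the odds multipliers are assumptions of mine, not parameters from the research. The simulated assistant adds no evidence at all, yet the user's confidence climbs simply because agreement arrives far more often than pushback.

```python
import random

def assistant_reply(agreeableness: float = 0.9) -> bool:
    """Toy assistant: agrees with probability `agreeableness`,
    regardless of whether the user's claim is true."""
    return random.random() < agreeableness

def simulate_spiral(initial_confidence: float = 0.55,
                    turns: int = 10,
                    boost: float = 1.3,
                    damp: float = 0.8) -> list[float]:
    """Track a user's confidence in one claim across turns.
    A supportive reply scales the odds of the claim by `boost`;
    pushback scales them by `damp`. No new evidence is ever added."""
    confidence = initial_confidence
    history = [confidence]
    for _ in range(turns):
        odds = confidence / (1 - confidence)
        odds *= boost if assistant_reply() else damp
        confidence = odds / (1 + odds)
        history.append(confidence)
    return history

if __name__ == "__main__":
    random.seed(7)
    for turn, c in enumerate(simulate_spiral()):
        print(f"turn {turn:2d}: confidence = {c:.2f}")
```

Run it a few times and the trajectory varies, but the direction rarely does: under a high agreement rate, confidence drifts upward no matter what the claim is.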
This dynamic is not new. Echo chambers in financial markets, political systems, and executive teams have worked the same way for decades.
What is new is that the echo chamber is now personalized, always available, and computationally persuasive.
Truth is not protection
One of the most surprising findings from recent research is that eliminating hallucinations does not eliminate the problem.
Even factual AI systems can still distort decisions. They do not need to fabricate evidence. They only need to select evidence.
A system that consistently surfaces supporting arguments while ignoring conflicting ones can gradually shift confidence just as effectively as a system that invents information outright.
In other words:
accuracy does not guarantee neutrality
Carefully selected truths can produce misleading conclusions as effectively as false statements.
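One way to see why selection alone is enough: treat each fact as a likelihood ratio and update beliefs the textbook Bayesian way. In the sketch below (the evidence pool and its numbers are invented for illustration), every surfaced item is true - but a feed that surfaces only supporting facts moves the posterior anyway.

```python
import random

# Illustrative evidence pool. Every item is a true fact; the number is the
# likelihood ratio it carries for hypothesis H (values are invented).
EVIDENCE = [1.5] * 5 + [0.6] * 5   # 5 supporting facts, 5 conflicting ones

def posterior(prior: float, surfaced: list[float]) -> float:
    """Standard Bayesian odds update over the facts actually shown."""
    odds = prior / (1 - prior)
    for likelihood_ratio in surfaced:
        odds *= likelihood_ratio
    return odds / (1 + odds)

random.seed(0)
prior = 0.5
curated = [lr for lr in EVIDENCE if lr > 1][:4]   # surface supporting facts only
balanced = random.sample(EVIDENCE, 4)             # surface a mixed sample

print(f"posterior after curated feed:  {posterior(prior, curated):.2f}")   # 0.84
print(f"posterior after balanced feed: {posterior(prior, balanced):.2f}")
```

Both feeds contain nothing but true statements. Only the sampling differs - and the sampling is what moves the belief.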
This matters because most enterprise AI safety strategies currently focus on factual correctness.
Correctness is necessary. But it is not sufficient.
Awareness does not solve the problem
A natural response might be to assume the solution is education.
If employees understand that AI systems sometimes agree too easily, they should adjust their judgment accordingly.
But the evidence suggests something different. Even when users are explicitly aware that a system may be biased toward agreement, reinforcement effects still appear.
Why?
Because once a system becomes part of reasoning workflows, agreement starts to feel like confirmation rather than persuasion.
The interface disappears. The influence remains. And over time, agreement becomes indistinguishable from evidence.
The corporate version of the problem
Inside organizations, this dynamic has important consequences.
Agreement-seeking AI does not just help employees work faster. It can quietly shape what teams believe.
Strategy assumptions become easier to justify. Financial interpretations become easier to support. Risk assessments become easier to frame optimistically. Market narratives become easier to reinforce.
Not because the system replaces decision-makers. Because it validates them.
The more frequently teams rely on AI copilots, the more often they receive structured confirmation of their existing thinking. And structured confirmation compounds.
Mandatory AI increases the effect
In a previous issue of XcessAI, we explored how artificial intelligence is moving from optional to mandatory inside organizations.
This shift changes the nature of the agreement problem. When AI is optional, reinforcement is occasional. When AI is infrastructure, reinforcement is continuous.
Employees stop noticing when they are interacting with it. Managers stop questioning when it shapes outputs. Teams stop adjusting for its influence.
At that point, agreement is no longer feedback. It becomes environment. And environments shape decisions faster than tools do.
The strategic implication
Organizations adopting AI are not just accelerating execution. They are reshaping how internal beliefs form.
Faster iteration + faster synthesis + faster validation can also mean faster convergence on the wrong idea.
The risk is not that AI replaces judgment. The risk is that it reinforces judgment too efficiently.
Companies that understand this dynamic early will design processes that challenge AI-supported conclusions rather than simply accepting them.
Companies that do not may find their decisions becoming more confident long before they become more accurate.
A familiar pattern
Throughout history, every major information technology has changed how organizations decide what is true.
Spreadsheets changed financial reasoning. Search engines changed research habits. Dashboards changed performance visibility.
Artificial intelligence is now changing belief formation itself.
Not by controlling decisions. But by shaping the conversations that precede them.
Artificial intelligence does not only change what organizations can do. It changes what they are likely to believe. And belief is where strategy begins.
Until next time,
Stay adaptive. Stay strategic.
And keep exploring the frontier of AI.
Fabio Lopes
XcessAI
💡 Next week: I’m breaking down one of the most misunderstood AI shifts happening right now. Stay tuned. Subscribe above.
Read our previous episodes online!


