Missing Context
Context Is the Real Bottleneck in AI

Welcome Back to XcessAI
By now, most organisations have seen impressive AI demos.
Models summarise documents, write code, answer questions, generate plans. In controlled settings, the outputs look intelligent.
And yet, when these same systems are deployed inside real organisations, something breaks.
Outputs become inconsistent.
Recommendations feel naïve.
Edge cases multiply.
Trust erodes.
The instinctive explanation is that the models aren’t good enough yet.
That diagnosis is wrong.
The real constraint is not intelligence.
It is context.
The illusion of intelligence
Modern AI systems perform exceptionally well in environments where the problem is clearly defined, the inputs are clean, and the objective is unambiguous.
That’s why benchmarks look strong.
According to Stanford’s AI Index, model performance on standardised tasks has improved dramatically year after year, with error rates collapsing across language, vision, and reasoning benchmarks.
But those benchmarks test capability, not deployment.
They measure whether a model can answer a question, not whether it understands the environment in which that answer will be used.
Intelligence without context does not feel smart.
It feels unreliable.
What people call “AI failure”
When executives describe AI initiatives that disappoint, the complaints are familiar:
“It hallucinated.”
“It didn’t understand the situation.”
“The answer changed when we phrased the question differently.”
“It worked in the pilot, then broke at scale.”
These are rarely intelligence failures.
They are context failures.
The model is responding correctly to the information it has, but the information it needs was never encoded.
Context is not data
This distinction is critical, and often misunderstood.
Data is:
documents
records
transactions
historical facts
Context is:
constraints
incentives
risk tolerance
prior decisions
ownership and accountability
what must not happen
Context answers questions like:
What has already been agreed?
What is politically or legally constrained?
What outcome is acceptable but suboptimal?
Who bears the downside if this goes wrong?
AI systems ingest data.
Organisations operate on context.
And context is rarely explicit.
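To make the distinction concrete, here is a minimal illustrative sketch in Python. The class and field names are hypothetical, invented only to show the shape of the difference, not a real schema.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the field names below are invented
# to show the shape of the distinction, not a real schema.

@dataclass
class Data:
    # What AI systems ingest: facts and records.
    documents: list[str]
    transactions: list[dict]

@dataclass
class Context:
    # What organisations operate on: constraints, incentives, accountability.
    budget_ceiling: float           # a hard limit already agreed
    prohibited_actions: list[str]   # what must not happen
    risk_tolerance: str             # e.g. "no customer-facing errors"
    decision_owner: str             # who bears the downside if this goes wrong
    prior_commitments: list[str]    # what has already been agreed
```

Almost every organisation has the first object in abundance. Very few have ever written the second one down.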
Why context is so hard to encode
Context resists formalisation for structural reasons.
It is:
distributed across systems
embedded in processes
held in people’s heads
shaped by incentives and power, not documentation
Most of the context that governs real decisions is informal:
budget realities, historical scars, unspoken trade-offs, and institutional memory.
Prompting tries to compress this into language.
That works for a while, but poorly at scale.
Prompting is not a solution to missing context.
It is a workaround.
Why pilots work and deployments fail
This explains a pattern many organisations recognise.
Pilots succeed because:
scope is narrow
risk is low
humans actively supervise
context is manually supplied
Deployments fail because:
context fragments across teams
accountability becomes unclear
edge cases explode
incentives collide
McKinsey’s most recent AI surveys reflect this gap clearly: while over half of large organisations report experimenting with AI, only a minority report material, enterprise-wide impact.
The problem is not ambition.
It is execution under real organisational complexity.
AI works in controlled environments.
Organisations are not controlled environments.
Where context actually lives
If context isn’t in prompts or datasets, where is it?
It lives in:
governance frameworks
approval processes
budget constraints
regulatory boundaries
escalation paths
informal norms
In other words, context lives in the operating model.
And operating models are slow to change.
This is why simply “adding AI” to existing workflows rarely works. The intelligence layer improves, but the contextual substrate does not.
Why CFOs feel the problem early
This is also why CFOs often become sceptical before enthusiasm fades elsewhere.
From a finance perspective, the pattern is familiar:
AI spend increases
coordination costs rise
productivity gains lag
payback periods extend
According to PwC’s global surveys, fewer than one in three executives say their AI investments have so far delivered measurable financial benefits at scale.
Without context, AI increases activity before it increases output.
And increased activity without output is a cost problem, not a technology problem.
Context is now the scaling constraint
As models continue to improve, intelligence becomes cheaper, faster, and more accessible.
That shifts the bottleneck.
The limiting factor is no longer:
model capability
benchmark performance
prompt quality
It is:
integration
governance
ownership
feedback loops
constraint definition
In other words, execution.
The bottleneck has moved from intelligence to context.
What changes when organisations recognise this
Organisations that make progress with AI don’t obsess over tools.
They focus on:
where decisions actually live
who owns outcomes when something goes wrong
how constraints are enforced, not just documented
how feedback from real use flows back into the system
In practice, this often shows up in small but telling shifts.
For example, instead of asking an AI system what the best decision is, teams define which decisions the system is allowed to support, and which remain human-owned.
Instead of feeding models more data, they surface hard constraints explicitly: budget ceilings, regulatory boundaries, approval thresholds, and risk tolerances that previously lived only in people’s heads.
Instead of treating errors as model failures, they track where context was missing: a prior commitment the system didn’t know about, a downstream dependency it couldn’t see, or an incentive it wasn’t aware of.
None of this requires better models.
It requires organisations to make their own decision logic visible.
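What that can look like in practice is sketched below. This is purely illustrative: the constraint names, thresholds, and the check_proposal helper are hypothetical, not a description of any real system. The idea is simply that an AI-generated proposal gets checked against explicit constraints before anyone acts on it.

```python
# Hypothetical sketch: constraint names and thresholds are invented for illustration.
HARD_CONSTRAINTS = {
    "budget_ceiling_eur": 250_000,        # spend the system may never exceed
    "human_approval_above_eur": 50_000,   # escalation threshold
    "prohibited": ["commit to new vendors", "change contract terms"],
}

def check_proposal(proposal: dict) -> list[str]:
    """Return the flags a human reviewer should see for an AI-generated proposal."""
    flags = []
    cost = proposal.get("cost_eur", 0)
    if cost > HARD_CONSTRAINTS["budget_ceiling_eur"]:
        flags.append("exceeds budget ceiling")
    elif cost > HARD_CONSTRAINTS["human_approval_above_eur"]:
        flags.append("requires human approval")
    for action in HARD_CONSTRAINTS["prohibited"]:
        if action in proposal.get("description", ""):
            flags.append(f"prohibited: {action}")
    return flags

# Example: a proposal that is within budget but crosses two other boundaries.
print(check_proposal({
    "cost_eur": 80_000,
    "description": "commit to new vendors for the regional rollout",
}))
# -> ['requires human approval', 'prohibited: commit to new vendors']
```

The point is not the code. It is that constraints which previously lived only in people’s heads now exist somewhere a system, and an auditor, can see them.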
This is not a technology shift.
It is an organisational one.
AI does not fail because it lacks intelligence.
It fails because it is deployed into systems that do not surface their own context.
Naming the phase
We are at the beginning of a new phase: execution reality.
Intelligence scales quickly.
Context does not.
And in complex organisations, that difference determines whether AI becomes leverage, or just another layer of noise.
Until next time,
Stay adaptive. Stay strategic.
And keep exploring the frontier of AI.
Fabio Lopes
XcessAI
💡Next week: I’m breaking down one of the most misunderstood AI shifts happening right now. Stay tuned.
Read our previous episodes online!