Incentive Asymmetry
AI adds a new lens to professional advice

Welcome Back to XcessAI
When we seek advice from professionals (consultants, lawyers, doctors, psychologists) we usually assume one thing:
That the guidance is optimised for our best outcome.
In reality, advice is rarely optimisation.
It is navigation.
Not because professionals are incompetent or unethical, but because human advice is delivered inside real constraints:
liability
reputation
regulation
career risk
job security
professional norms
institutional incentives
These constraints don’t invalidate the advice.
But they do shape it, often invisibly.
AI changes the decision-making stack.
For the first time, individuals and organisations can add a second “advisor” that has no ego, no reputation to protect, and no need to manage relationships, while still being highly capable at analysis, scenario-building, and option generation.
Used properly, this does not replace professional judgment.
It makes professional advice stronger — more explicit, more stress-tested, and more aligned with the client’s true objective.
Let’s be precise.
When a lawyer advises you, they must simultaneously:
help you win
stay within ethical and regulatory rules
avoid avoidable liability
protect their licence and professional standing
avoid creating a paper trail that could later be used against you or them
When a psychologist advises you, they operate within:
therapeutic boundaries
client safety and emotional stability
malpractice risk
professional standards that discourage directive instruction
When a doctor advises you, they practise inside:
clinical guidelines
litigation and insurance frameworks
institutional protocols
time and information constraints
When a consultant advises you, the advice is often influenced by:
relationship preservation
internal politics and stakeholder sensitivities
scope boundaries
future fees and long-term account dynamics
This does not make them bad actors.
It makes them constrained actors. These constraints are the price of operating responsibly inside regulated, high-trust professions.
Human advice is often a blend of:
what is best for the client
what is defensible for the advisor
That blend is rational.
It is also rarely made explicit.
These constraints are part of the professional world we all operate in. They are not a flaw, but a reality that AI can now help optimise around.
What AI Adds (When Used Correctly)
AI does not have a career, a licence, or a reputation. It has no personal downside exposure — only the objectives and constraints we explicitly give it.
It does not need to protect itself socially.
As a result, it can assist in ways that humans often cannot — not because it is “braver”, but because it operates under a different constraint set.
AI can:
expand the option set rapidly
surface second- and third-order consequences
model trade-offs and expected value
challenge assumptions without social friction
produce a “cold” read that is not shaped by professional self-protection
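To make "modelling trade-offs and expected value" concrete, here is a minimal sketch of the kind of probability-weighted comparison an AI advisor can run instantly. The options, probabilities, and payoffs below are invented for illustration, not drawn from any real case:

```python
# Toy expected-value comparison of decision options.
# All options, probabilities, and payoffs are illustrative assumptions.

options = {
    "settle early":   [(1.00, 50_000)],                    # certain, modest outcome
    "negotiate hard": [(0.60, 120_000), (0.40, 10_000)],   # riskier, higher upside
    "litigate":       [(0.30, 400_000), (0.70, -80_000)],  # high variance, real downside
}

def expected_value(outcomes):
    """Probability-weighted payoff across possible outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# Rank options by expected value, best first.
ranked = sorted(options.items(), key=lambda kv: expected_value(kv[1]), reverse=True)
for name, outcomes in ranked:
    print(f"{name}: EV = {expected_value(outcomes):,.0f}")
```

Note that a pure expected-value ranking ignores downside tolerance, non-monetary costs, and feasibility. Those are exactly the factors the human professional layer supplies.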
That said, AI is not accountable. It also inherits bias from data, framing, and objectives — which is why human judgment remains essential.
It can be wrong, incomplete, or misleading if used carelessly.
So the right framing is not “AI replaces professionals.”
It is:
AI strengthens professionals — by separating optimisation from accountability.
A Simple Way to See the Difference
Ask a human professional:
“What is the safest, most defensible path?”
Then ask an AI:
“If my objective is X, what are the highest expected-value options — and what are the failure modes?”
The answers will often diverge.
Not because the human is “worse”, but because the human’s risk function includes:
their licence
their liability
their regulatory exposure
their reputational downside
Your risk function may be different.
AI exposes that gap cleanly — which is useful, not subversive.
The New Model: Two-Layer Advice
This is where the shift becomes constructive.
The future of high-quality advice will increasingly follow a two-layer model:
Layer 1 — AI as the optimisation engine
option generation
scenario analysis
decision trees and trade-offs
base rates and counterfactuals
“what would it take to win?” thinking
Layer 2 — the professional as the reality engine
feasibility, legality, and ethics
stakeholder and human dynamics
sequencing and implementation
risk containment and accountability
AI expands the decision space.
Professionals collapse it into an executable plan.
Combined, they produce something rare:
strategy that is both ambitious and defensible.
The Psychological Shift That’s Coming
As clients increasingly consult AI alongside human professionals, something subtle will occur.
They will start noticing when advice is:
conservative rather than optimal
shaped by defensibility rather than outcomes
vague where clarity is possible
more “best practice” than decision-specific
This creates tension — but also opportunity.
Clients will ask:
“Why didn’t you mention this option?”
And the honest answer may be:
“Because it wasn’t appropriate for me to recommend directly — but it’s worth discussing.”
The professionals who thrive will be those who can work in the open:
explicit about constraints
comfortable debating trade-offs
willing to engage with AI-generated options
able to translate raw optimisation into real-world execution
What This Means for Business and Leadership
For executives, this matters now.
AI will increasingly be used as:
an unfiltered second opinion
a check against internal consensus
a way to stress-test strategy before socialising it
a tool to surface uncomfortable risks early
a counterweight to politically filtered advice
Executives who rely on human advice alone will increasingly operate with an incomplete view of their decision space.
Boards and CEOs will not ask:
“What does the AI decide?”
They will ask:
“What does the AI surface — and what does management conclude?”
The strongest leaders will use AI not to automate decisions, but to de-bias and harden them.
AI Will Not Replace Professionals, But It Will Change the Standard
This is the key point.
AI will not replace lawyers, doctors, psychologists, or consultants.
But it will raise the standard by making constraints visible and options searchable.
It introduces a new category of input:
analysis without professional self-protection.
Not “truth”.
Not “authority”.
But a powerful counterweight — especially when decisions are high-stakes and incentives are messy.
Closing Thoughts
For centuries, advice has been delivered inside constraints, even when intentions were good.
AI does not remove constraints from the world. It removes them from one input into your decision. That does not make AI infallible, but it makes it uniquely useful.
Because once you can separate optimisation from accountability, you can combine both and make better decisions than either could produce alone.
Until next time,
Stay adaptive. Stay strategic.
And keep exploring the frontier of AI.
Fabio Lopes
XcessAI
💡Next week: I’m breaking down one of the most misunderstood AI shifts happening right now. Stay tuned. Subscribe above.
Read our previous episodes online!