Synthetic People

The moment identity becomes software

Welcome Back to XcessAI

For a long time, AI imitated outputs.

Text. Images. Audio. Code. Impressive, sometimes unsettling, but still clearly tools producing artefacts. The boundary between human and machine remained intact.

Recently, that boundary shifted.

AI systems are no longer just generating content. They are beginning to replicate people: tone, cadence, memory, reasoning style, and continuity. Not a face in a video or a voice in a clip, but something closer to presence: a system that can stand in, respond, and persist over time.

This is not a typical technological milestone.
It is a threshold.

Because once identity becomes reproducible, the question is no longer whether something is fake, but whether presence itself can be trusted.

From content to agents

Most public discussion still treats this as an extension of deepfakes: better audio, better video, better impersonation.

That framing is incomplete.

Deepfakes imitate moments.
Synthetic people imitate agents.

They are persistent rather than episodic. They don’t appear once and disappear. They converse, adapt, remember, and evolve. They are trained not just on what someone said, but on how they think, how they respond, how they behave under uncertainty.

The distinction matters:

  • Synthetic content imitates outputs.

  • Synthetic people imitate identity.

Once that shift occurs, AI stops being a tool that produces things and becomes something that can act on behalf of someone.

Why this is not “just deepfakes”

Deepfakes rely on spectacle.
Synthetic people rely on plausibility.

They don’t need to deceive everyone. They only need to be credible often enough that certainty erodes. The risk is not mass deception, but persistent ambiguity.

Was that message actually from her?
Did he really say that?
Is this a person, or their model?

The system doesn’t fail because people believe everything.
It degrades because people can no longer believe anything with confidence.

Truth becomes negotiable, not because lies win, but because verification no longer scales.

When identity breaks, assumptions follow

Once identity becomes software, several long-standing assumptions quietly fracture.

Authorship
If an AI can speak and write in your voice, what does it mean to have authored something? Where does delegation end and authorship begin?

Accountability
If a synthetic agent communicates on your behalf, who is responsible for its words — the individual, the organisation, or the system?

Presence
What does it mean to “be there” — in a meeting, in a conversation, in a moment — when proxies can attend indefinitely?

Consent
Where does a person end and their trained model begin? Can identity be licensed, transferred, revoked?

Ownership
Who controls, modifies, or revokes a software version of a person? This is the unresolved question beneath all of the others.

These questions are not hypothetical. They simply lack stable answers.

Trust doesn’t collapse; it reorganises

There is a concept in misinformation research known as the liar’s dividend: when manipulation becomes easy, even genuine evidence can be dismissed as fake.

With synthetic people, this dynamic becomes personal.

Not “that video might be fake,” but
“that might not even be him or her.”

Yet trust does not disappear. It fragments.

It becomes:

  • local rather than global

  • relational rather than broadcast

  • contextual rather than absolute

People trust networks, not platforms.
Provenance matters more than realism.
Verification outweighs authenticity.

This is not a collapse of trust.
It is a reconfiguration.
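
What verification-over-realism could look like in practice: a minimal sketch of message provenance using digital signatures, assuming the Python `cryptography` package. The keys, helper names, and messages are illustrative, not a real protocol; the point is that trust attaches to a known key, not to how convincing the words sound.

```python
# Sketch: a message is trusted because it verifies against a known
# public key, not because it reads like the sender. Assumes the
# third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The person holds the private key; their contacts hold the public key.
private_key = Ed25519PrivateKey.generate()
public_key: Ed25519PublicKey = private_key.public_key()

def sign(key: Ed25519PrivateKey, text: str) -> bytes:
    """Attach provenance: only the private-key holder can produce this."""
    return key.sign(text.encode("utf-8"))

def is_authentic(key: Ed25519PublicKey, text: str, sig: bytes) -> bool:
    """Verification, not realism: does the signature match a trusted key?"""
    try:
        key.verify(sig, text.encode("utf-8"))
        return True
    except InvalidSignature:
        return False

message = "Yes, send the memo tonight."
sig = sign(private_key, message)

print(is_authentic(public_key, message, sig))             # True
print(is_authentic(public_key, "Different words.", sig))  # False
```

Notice how this maps onto the fragmentation above: the trust is local (one key pair), relational (shared between known parties), and contextual (valid for this message only).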

Why synthetic people will be built

It would be easy to stop here and frame synthetic people as a threat to trust.

That would miss the other half of the story.

Most modern organisations are not constrained by ideas, products, or demand. They are constrained by attention — by the limited availability of the people who hold institutional knowledge, context, and judgment.

Synthetic people address that constraint directly.

Consider Investor Relations.

Public companies rely on a small number of professionals to answer questions about strategy, performance, risks, and disclosures. These interactions are repetitive, time-bound, and unevenly distributed across time zones. Access depends less on relevance than on timing.

A well-designed synthetic IR professional — trained on public filings, earnings calls, disclosures, and historical Q&A — could answer factual and contextual questions 24 hours a day. Not to replace human judgment, but to absorb the long tail of predictable, repetitive queries. Escalation, interpretation, and judgment would still sit with humans.
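
As a sketch of the shape (not a real implementation): a minimal agent that answers only from an approved corpus of public material and escalates anything it cannot ground. The corpus entries, overlap scoring, and threshold below are illustrative assumptions.

```python
# Illustrative "synthetic IR professional": answer only from approved
# public material, cite the source, escalate anything ungrounded.
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    source: str          # provenance: which document the answer came from
    escalated: bool = False

# Approved, already-public material only (filings, calls, historical Q&A).
CORPUS = {
    "2024 10-K, Liquidity": "Cash and equivalents were 412m at year end.",
    "Q3 earnings call": "Full-year revenue guidance was reaffirmed.",
}

def overlap(question: str, passage: str) -> float:
    """Crude lexical overlap as a stand-in for real retrieval."""
    q, p = set(question.lower().split()), set(passage.lower().split())
    return len(q & p) / max(len(q), 1)

def answer(question: str, threshold: float = 0.2) -> Answer:
    source, passage = max(CORPUS.items(), key=lambda kv: overlap(question, kv[1]))
    if overlap(question, passage) < threshold:
        # Interpretation and judgment stay with humans.
        return Answer("Routing this to the IR team.", "none", escalated=True)
    return Answer(passage, source)

print(answer("What was cash at year end?"))          # grounded answer
print(answer("How do you feel about the lawsuit?"))  # escalates
```

The design choice that matters is the escalation path: the agent absorbs the predictable long tail and hands everything else back to a person.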

The productivity gains are obvious:

  • faster access to information

  • fewer interruptions for senior management

  • more consistent messaging

  • lower friction for analysts and investors

The same logic applies elsewhere.

Internal knowledge agents reduce dependence on a handful of experts.
Customer representatives provide continuity instead of queues.
Technical specialists preserve institutional memory as teams turn over.

In these cases, synthetic people are not impersonation.
They are interfaces.

Synthetic people as leverage, not deception

The difference between productivity and danger is not the technology.

It is disclosure, scope, and intent.

Synthetic people become destabilising when they pretend to be human.
They become powerful when they are clearly framed as representatives.

Used well, they extend expertise without obscuring accountability.
Used poorly, they dissolve trust by blurring identity.
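
Disclosure and scope can be properties of the system rather than promises about it. A minimal sketch, with invented names, of a wrapper that never lets a synthetic representative speak without identifying itself or stray outside its mandate:

```python
# Sketch: "clearly framed as a representative" enforced in code.
# All names are illustrative, not a real framework.
from dataclasses import dataclass, field

@dataclass
class RepresentativePolicy:
    principal: str                                 # whom the agent speaks for
    allowed_topics: set[str] = field(default_factory=set)
    disclosure: str = "automated assistant"

    def frame(self, topic: str, draft: str) -> str:
        label = f"[{self.disclosure} for {self.principal}]"
        if topic not in self.allowed_topics:
            # Out of scope: decline rather than improvise in someone's name.
            return f"{label} That is outside my mandate; a human will follow up."
        # In scope: answer, but never without disclosure.
        return f"{label} {draft}"

policy = RepresentativePolicy(
    principal="Investor Relations",
    allowed_topics={"filings", "guidance"},
)

print(policy.frame("guidance", "Guidance was reaffirmed last quarter."))
print(policy.frame("legal strategy", "It will settle."))  # declined
```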

This is not a moral dilemma.
It is a leverage problem.

And leverage, unmanaged, always amplifies failure.

The new rules of presence and authenticity

As identity becomes software, several rules quietly change.

Presence becomes delegable.
Authenticity becomes contextual.
Verification becomes more important than realism.

The future is not one where nothing is trusted — but one where trust is narrower, conditional, and harder to scale.

Institutions, markets, and organisations will need to adapt not just technologically, but socially.

Naming the moment

We will likely look back on this period as a transition point.

Not when AI crossed benchmarks.
Not when models became larger.

But when identity stopped being singular. And presence stopped being scarce.

Synthetic people do not replace humans.
They redefine what it means to be one — and how knowledge, expertise, and presence scale.

Once identity becomes software, there is no returning to the assumptions that came before.

Until next time,
Stay adaptive. Stay strategic.
And keep exploring the frontier of AI.

Fabio Lopes
XcessAI

💡 Next week: I’m breaking down one of the most misunderstood AI shifts happening right now. Stay tuned. Subscribe above.

Read our previous episodes online!
