
A few days ago I came across two images that had the internet in a bit of a frenzy. They were created using the exact same prompt. Nothing about the wording changed. Nothing about the intent changed. The only difference was the model.
The first image was produced a few weeks ago using Gemini Nano Banana. It looked good and creative, but had the familiar softness we now associate with AI: a little too smooth, a little too uniform. Impressive, but still recognisably synthetic.
The second image, generated using the new Nano Banana Pro, was something else entirely. Suddenly the lighting looked naturally diffused. The skin texture was photoreal. The reflections in the glasses behaved exactly as they would through a high-end camera. Even the bartender in the background looked like an ordinary person going about his job.
Nothing changed except the model. The prompt was identical. Yet the output made the internet genuinely unsure whether the woman existed. For many, it crossed the line from "AI-generated" to "this must be real".
That is the point.
We are no longer watching gradual progress. We are witnessing step-changes: advances that compress years of capability into weeks, leaps that collapse the distance between synthetic and real so quickly that society has no time to adjust.
And if a single prompt can now generate a person so convincing that thousands debate her existence, then we are entering a world where authenticity is no longer obvious and perception is no longer reliable.
This is no longer about aesthetics. It is about trust, governance, identity and the future structure of society.
We Are Already Behind
For more than a century, our legal, social and cultural systems have operated on a simple assumption: if you see something with your own eyes, you can form a reasonable judgement about whether it is real.
That assumption is collapsing.
In the past year alone, we have seen:
- Hyper-real face generation indistinguishable from professional photography
- Voice models capable of emotional micro-inflections
- Video synthesis that bypasses the uncanny valley entirely
- Synthetic influencers amassing real communities
- Companies testing AI-generated staff for customer service
- Automated content pipelines replacing entire production teams
The shift is no longer theoretical or experimental. It is active.
Yet the public conversation — and much of policy — remains rooted in a 2021 understanding of AI, where "spotting the fake" was still a plausible skill. In 2025, that skill is rapidly becoming obsolete.
When synthetic people become indistinguishable from real ones, everything built on the assumption of visual trust becomes unstable.
The Emerging Categories of Risk
1. The Collapse of Visual Trust
If every image can be manufactured, every image becomes questionable. This strikes at the heart of journalism, law, elections, policing, court evidence, institutional communication and even interpersonal relationships.
We shift from "is this real?" to "assume it is fake unless proven otherwise". That is a dangerous direction for any society.
2. Identity and Consent Become Fragile
For the first time in history, a person's likeness can be duplicated at scale without their knowledge. This matters for models, actors, creators, influencers, ordinary citizens, vulnerable people, and children.
When identity becomes copyable, consent becomes structural, not situational: it can no longer be negotiated case by case, but must be built into the systems that create and distribute likenesses.
3. Synthetic Media as a Weapon
The concern is not simply deception. It is manipulation.
Deepfakes plus autonomous distribution systems open new frontiers for:
- Harassment
- Political destabilisation
- Financial fraud
- Targeted misinformation
- Reputational damage
- State-sponsored influence campaigns
We are moving into a world where illusions can have real-world consequences.
Regulation Is Behind by Design
Technology is global. Regulation is local. And that mismatch is widening.
What we currently have:
- Early AI Acts
- Watermarking proposals (a toy illustration follows this list)
- Synthetic media disclosure rules (limited)
- Basic guidelines from industry bodies
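What does "watermarking" actually mean at the pixel level? Here is a minimal sketch, assuming a toy least-significant-bit scheme and hypothetical helper names (`embed_watermark`, `extract_watermark`). It is an illustration of the principle only; production watermarks are statistical and designed to survive editing, which this toy does not.

```python
# Toy least-significant-bit (LSB) watermark: a hypothetical illustration
# of the principle, not any real proposal or product.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the lowest bit of the first pixel values."""
    flat = pixels.flatten()
    # Clear each target pixel's lowest bit, then write the watermark bit.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the lowest bit plane."""
    return pixels.flatten()[:n_bits] & 1

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

marked = embed_watermark(image, mark)
assert np.array_equal(extract_watermark(marked, mark.size), mark)
```

Because a single resize, screenshot or re-compression wipes a mark like this out, naive watermarking is closer to a gesture than a guarantee. That fragility is exactly why the list below matters.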
What we urgently need:
- Global provenance standards (see the sketch after this list)
- Legal rights over digital likeness
- Frameworks for AI identity misuse
- Cross-border enforcement
- Clear liability for AI-caused harm
- Robust governance for embodied AI systems
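To make "provenance" concrete: the principle behind standards efforts such as C2PA is that the capture device or publisher signs content at creation, so anyone can later verify it has not been altered. Below is a minimal sketch of that signed-hash principle, assuming Python's `cryptography` package and hypothetical helpers (`sign_image`, `verify_image`); real standards additionally bind signed metadata about who created the content and what edits were made.

```python
# Minimal illustration of signed-content provenance. This is not a real
# standard such as C2PA, only a sketch of the underlying principle.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# The publisher (e.g. a camera or newsroom) holds a signing key.
publisher_key = Ed25519PrivateKey.generate()

def sign_image(image_bytes: bytes) -> bytes:
    """Sign a digest of the content at the moment of creation."""
    digest = hashlib.sha256(image_bytes).digest()
    return publisher_key.sign(digest)

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    """Anyone with the public key can check the content is unaltered."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        publisher_key.public_key().verify(signature, digest)
        return True
    except InvalidSignature:
        return False

original = b"...raw image bytes..."
sig = sign_image(original)
print(verify_image(original, sig))            # True: provenance intact
print(verify_image(original + b"edit", sig))  # False: content was altered
```

The shift this enables is structural: authenticity becomes something institutions can verify cryptographically rather than something individuals try to eyeball. The hard parts, and the reason this needs global standards rather than library code, are key distribution, metadata and cross-border trust.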
We are not simply dealing with a technical challenge. We are dealing with a governance vacuum.
The synthetic era requires regulatory thinking that assumes hyper-real AI exists today, not in the future.
From Synthetic Humans to Synthetic Actors: The Embodied AI Problem
The realism we see in static images is only the beginning. The physical frontier — embodied AI — is advancing in parallel.
- Humanoid robots are now in live industrial pilots
- AI-operated labs execute experiments without human hands
- Automated logistics systems orchestrate warehouses autonomously
- Robotics companies train models on data shared across robot fleets
Once AI can act in the world — not just depict it — the risks move from perceptual to physical.
Errors have consequences. Misidentification has consequences. Design flaws have consequences.
This is why the hyper-real debate cannot stay confined to images. It must expand to include autonomous systems, robot safety, and AI-driven physical infrastructure.
The Coming Labour Shift: Hardware Skills Become Critical
An overlooked part of this story is the workforce.
While synthetic media sparks fascination, the deeper shift is happening underneath: companies deploying AI into physical systems cannot find enough people who understand hardware, robotics and embedded intelligence.
The fastest-growing roles in 2025 include:
- Robotics engineers
- Embedded AI specialists
- Mechatronics engineers
- Sensor design experts
- Autonomous systems operators
- Safety engineers
- AI hardware integrators
For a decade, the industry told people "learn to code". In the coming decade, the message will expand to: "Learn to build."
The professionals who thrive will be those who understand both how AI thinks and how the physical world behaves.
The Bigger Issue: Society Was Not Built for Synthetic Reality
These images are not just an upgrade. They are a warning.
They expose a deep mismatch between:
- The speed of technological capability
- The speed of human adaptation
- The speed of policy formation
- The speed of cultural understanding
We have created a world where:
- Reality is editable
- Identity is duplicable
- Perception is manipulable
- Trust is negotiable
- Authenticity is uncertain
But we have not built the institutions to survive that world.
The danger is not AI itself, but our unpreparedness.
As AI-generated people, voices and realities become indistinguishable from the real, what values, rights and safeguards will you prioritise to protect trust inside your organisation and society at large?