An Interactive Explainer
Seeing Is No Longer Believing
The line between real and synthetic media has all but disappeared. Here's what that means for the future of online media.
The challenge
Some of these images are real. Some are AI-generated.
Can you tell which is which? No labels, no hints. Just your eyes and your instincts. Scroll through and make your guesses — then test yourself for real at the end.
Defining the terms
Synthetic media is not the same as a deepfake
The terms are often used interchangeably, but they describe different things. Understanding the distinction matters — because the solutions are different too.
Synthetic media
Any media — image, audio, video, or text — that has been generated or substantially modified using artificial intelligence. This is the broad category.
- AI-generated stock photos
- Text-to-speech voiceovers
- AI-written articles and code
- Generative art and music
- AI avatars for training videos
Deepfake
A specific subset of synthetic media that uses AI to create or manipulate content depicting real people doing or saying things they never did. The intent is typically to deceive.
- Face-swapped video of politicians
- Cloned voice of a CEO for fraud
- Nonconsensual explicit imagery
- "Nudification" of real photographs
- Fabricated audio of public figures
The same AI models that generate stock imagery can be weaponised to create deepfakes. The technology is neutral — the application determines the harm.
Leaving the uncanny valley
How AI images became indistinguishable from reality
In just two years, AI image generation has gone from "impressive but obviously fake" to "virtually undetectable." The shift happened faster than most people realise.
ChatGPT / DALL-E
ChatGPT's image outputs are still ludicrously stereotypical. Portraits tend toward a glossy, stock-photo aesthetic — attractive people in perfect lighting with flawless skin. They scream "AI" to anyone who's been paying attention.
Midjourney v7
Midjourney has led the pack for a while, but its images have a distinctive "look" — glossy, overly perfect, and obviously artificial. While more detailed prompts can produce less polished results, the default output still leans heavily into staged portrait territory.
Google Nano Banana Pro
In late 2025, Google surprised everyone with Nano Banana Pro. It broke from the trend of glossy AI images, producing far more realistic and natural compositions. Google has clearly shifted its aim from images that merely look realistic to images that pass as real. The uncanny valley has been crossed.
The gap has closed
This image was generated with a single text prompt. No editing. No reference photos. It looks like a candid shot from someone's phone. The average internet user can no longer reliably identify fake images by eye. The telltale signs — distorted fingers, thousand-yard stares, garbled text — have been resolved.
Where they intersect
The same models power both creation and deception
Synthetic media and deepfakes share the same technological foundations. The image generators that produce art also produce weaponised content. Open-source models mean the technology is free and increasingly ubiquitous.
The first deepfakes (2017–2018)
Face-swapping technology emerges on Reddit. The Obama/Buzzfeed deepfake demonstrates the potential for political manipulation. Creating convincing deepfakes still requires significant technical expertise and compute.
Generative AI goes mainstream (2022)
DALL-E 2, Midjourney, and Stable Diffusion launch publicly. Image generation becomes accessible to anyone. Early outputs have obvious tells — extra fingers, melting faces, garbled text.
Video and voice follow (2023–2024)
Tools like Runway, HeyGen, and ElevenLabs make video deepfakes and voice clones trivially easy. A single webcam photo is enough to generate a convincing deepfake. South Korea is engulfed in organised deepfake crime scandals affecting schools and universities.
The uncanny valley collapses (2025)
Google's Nano Banana Pro and similar models produce images indistinguishable from photographs. Google Veo 3 generates realistic video with synchronised audio. xAI's Grok begins generating thousands of "nudified" images of real people per hour — including minors.
Where we are now
The average viewer cannot reliably distinguish AI-generated images from real ones. The same technology that creates harmless stock imagery is used to produce nonconsensual explicit content, election misinformation, and financial fraud.
The human cost
The four faces of deepfake harm
Deepfakes are not an abstract technology problem. They cause real damage to real people — and the harms are escalating as the tools become more accessible.
Nonconsensual Intimate Imagery
The most prevalent harm. AI tools can "undress" real photographs or generate explicit content using someone's likeness from just a few social media photos. The majority of victims are women and girls.
Blackmail & Sextortion
Fabricated intimate imagery is used to extort victims — demanding money, further real images, or silence. Young people are particularly vulnerable to these schemes.
Misinformation
Fabricated images and video of political figures, manufactured evidence, and AI-generated "news" footage can sway elections and erode public trust. Elon Musk's X platform has already been used to spread election deepfakes.
Reputational Destruction
A single convincing deepfake can destroy a career, a relationship, or a life. Even when debunked, the damage persists — the internet doesn't forget, and the "liar's dividend" means any real evidence can now be dismissed as AI.
Since late December 2025, xAI's chatbot Grok has responded to user requests to undress real people by turning photos into sexually explicit material — posting thousands of "nudified" images per hour, including sexualised images of minors.
— Leon Furze, "Can You Spot an AI Generated Image?", January 2026
Fighting back
Technology and law are racing to catch up
Two parallel tracks are emerging: technical standards that prove where content came from, and legislation that punishes those who weaponise it.
Technical solutions
Content Credentials (C2PA)
A coalition led by Adobe, Microsoft, OpenAI, Meta, and the BBC has produced an open standard. Content Credentials attach a tamper-evident digital "receipt" to media files, recording their creation history using cryptographic signatures.
Think of it like a notary stamp for digital content. If anyone modifies the image, the hash won't match. Adopted by ChatGPT, Adobe Firefly, and others — but notably not Midjourney.
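To make the notary-stamp analogy concrete, here is a minimal sketch of the idea underneath Content Credentials: hash the file's bytes, sign the digest, and verify both later. This is not the C2PA implementation itself (real manifests use X.509 certificate chains and are embedded inside the media file); the sign_asset and verify_asset helpers are illustrative names, and the example assumes the third-party Python cryptography package.

```python
# A toy tamper-evident "receipt": hash the asset, then sign the digest.
# Not C2PA itself; real manifests use X.509 certificate chains and live
# inside the media file. Requires the 'cryptography' package.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_asset(image_bytes: bytes, key: Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of the asset: the notary stamp."""
    return key.sign(hashlib.sha256(image_bytes).digest())


def verify_asset(
    image_bytes: bytes, signature: bytes, public_key: Ed25519PublicKey
) -> bool:
    """Recompute the digest; any changed byte breaks the signature."""
    try:
        public_key.verify(signature, hashlib.sha256(image_bytes).digest())
        return True
    except InvalidSignature:
        return False


key = Ed25519PrivateKey.generate()
original = b"...raw image bytes..."
receipt = sign_asset(original, key)

print(verify_asset(original, receipt, key.public_key()))            # True
print(verify_asset(original + b"\x00", receipt, key.public_key()))  # False
```

Verification fails on the altered bytes not because anything compared pixels, but because the recomputed SHA-256 digest no longer matches the one that was signed: the same property that makes a Content Credentials receipt tamper-evident.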
Google SynthID
An invisible watermark applied to image and video data, detectable by Google's tools. In theory, it could identify AI-generated content even after editing.
In practice, it simply does not work when the image has been altered in any way — a limitation that remains unresolved.
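To see why edits are so destructive, consider the simplest possible invisible watermark: a least-significant-bit scheme. SynthID's actual algorithm is undisclosed and far more sophisticated than this toy, so the sketch below (pure Python, made-up pixel values) only illustrates the general failure mode: the hidden signal lives in exactly the bits that re-encoding, resizing, or brightness changes destroy.

```python
# Toy least-significant-bit watermark, to illustrate why naive invisible
# watermarks are fragile. This is NOT SynthID's scheme; it only shows the
# failure mode: tiny edits wipe out the embedded signal.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # 8-bit payload


def embed(pixels: list[int]) -> list[int]:
    """Hide one payload bit in the least significant bit of each pixel."""
    return [
        (p & ~1) | WATERMARK[i % len(WATERMARK)]
        for i, p in enumerate(pixels)
    ]


def extract(pixels: list[int]) -> list[int]:
    """Read the low bit of the first eight pixels back out."""
    return [p & 1 for p in pixels[: len(WATERMARK)]]


image = [200, 117, 54, 89, 230, 12, 77, 141]
marked = embed(image)
print(extract(marked) == WATERMARK)  # True: watermark detected

# A mild "edit" (brightness +1, standing in for a crop or re-encode)
# flips every low bit, and detection fails.
edited = [min(p + 1, 255) for p in marked]
print(extract(edited) == WATERMARK)  # False: watermark destroyed
```

Production schemes spread the signal more robustly than this, but the underlying tension is the same: the watermark must survive normal image handling while remaining invisible, and everyday transformations sit squarely in between.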
Content Credentials allow users to legitimise the use of AI images — "yes, I used AI, and here's the workflow." They're not designed to stop misuse, but to enable appropriate use. The key limitation: adoption is voluntary, and bad actors simply won't opt in.
Legislative responses around the world
United States
Federal law prohibits nonconsensual explicit deepfakes. 46 states have enacted legislation targeting AI-generated media. Platforms must remove flagged content within 48 hours.
European Union
The EU AI Act entered into force in August 2024, with full enforcement from August 2026. Mandatory labelling and risk assessments for AI systems.
Australia
Up to seven years in prison for creating and sharing nonconsensual deepfake sexual material under the Criminal Code Amendment 2024.
South Korea
Criminalised creation, distribution, possession, and viewing of sexually explicit deepfakes — up to seven years. Launched a 24/7 national response centre.
France
Criminalised nonconsensual sexual deepfakes in 2024 — up to two years' imprisonment and €60,000 in fines.
China
Mandatory labelling of all AI-generated synthetic content under March 2025 rules, effective September 2025.
The crossroads
Two paths forward
Path one
Technical standards like Content Credentials become widely adopted. Major platforms implement robust safeguards. Legal frameworks are actually enforced. AI-generated media carries clear provenance, and viewers can make informed decisions about what they're seeing.
Path two
A race to the bottom. Platforms compete on how few restrictions they impose. Watermarks stripped as easily as cropping an image. Laws that exist only on paper. The Grok scandal suggests we're currently heading down this path.
Whether we course-correct depends not just on regulators and technology companies, but on users and educators — through the platforms we choose to use, the content we choose to share, and the standards we demand from the tools that increasingly shape our perception of reality.
— Leon Furze
What you can do
Test your own eyes
Think you spotted the fakes above? Play the full Real or Fake game to find out — and sign up to stay informed as this technology evolves.