
A political ad shows a candidate dramatically reading his own embarrassing social media posts. Security footage captures a dog pushing a baby out of the path of a falling chandelier. A video clip shows Superman facing off against The Boys’ Homelander in a crossover that never happened.
You’re scrolling through TikTok, but something feels wrong. None of this content is real.
America is entering a new reality where anyone can create convincing fake videos and images with a single click. These deepfakes spread rapidly across social media platforms, and even real-time voice calls might not be authentic. The technology threatens privacy, political stability, and democratic norms by making everything we see online questionable.
The growing threat of synthetic media
Photo manipulation isn’t new. Airbrushing and video propaganda have existed for decades. But today’s deepfakes can appear anywhere online with unprecedented ease and quality. As reported by New America, deepfake fraud and scam attempts have surged, with vulnerable groups facing the highest risks.
The financial impact is staggering. One report found deepfake fraud caused over $200 million in losses during the first quarter of 2025 alone. The technology is also used to create non-consensual intimate imagery, which makes up most deepfakes online and disproportionately targets women.
Social media feeds are increasingly clogged with AI-generated content. Most people don’t believe they can distinguish between real and cloned voices, and research shows people struggle to identify deepfaked content in practice. As the technology improves, visual tells that once exposed fakes are disappearing.
Look for visual inconsistencies
While flawless deepfakes are becoming more common, many still contain obvious signs of manipulation. MIT researchers suggest watching for these warning signs:
- Odd lighting without clear sources
- Unusual shine or reflections
- Poor hair and skin texture details
- Blurry or distorted facial features
- Strange lip movements or unnatural blinking
- Audio that doesn’t match video timing
- Nonsensical backgrounds or unintelligible text
- Physics-defying imagery or impossible camera angles
Research shows that while AI detectors outperform humans at identifying fake images, humans outperform the detectors at spotting fake videos. Detection skills improve when people know what to look for, making practice with deepfake detection tools valuable.
Trust your instincts and verify context
Healthy skepticism remains your best defense. Unexpected phone calls from unknown numbers deserve extra caution. Use reverse image searches for suspicious photos. If something feels too good to be true, it probably is.
Political content and controversial topics require special attention. During the current conflict with Iran, cinematic AI videos showing destroyed U.S. forces have been used as propaganda tools. Ask yourself: what emotional response is this content trying to trigger? Scams often exploit fear and pressure you to make quick decisions.
Cross-check sources and establish verification systems
No single source should be trusted in isolation. When you see shocking political footage, check whether credible news outlets are reporting the same story and trace their sources. For personal communications:
- Establish code words with family and friends
- Verify urgent financial requests through multiple channels
- Contact senders on different platforms when messages seem suspicious
- Never make major purchases without in-person verification
Before responding to an urgent voicemail or text asking for money, confirm the sender’s identity through a separate channel, such as text, email, or face-to-face contact.
Know your options if you become a target
Deepfakes are designed to be convincing, and some attacks like non-consensual intimate imagery can happen without any interaction. If you become a victim:
- File cybercrime reports with the FBI’s Internet Crime Complaint Center (IC3)
- Contact platform owners to remove deepfaked intimate content
- Report crimes to local law enforcement
- Seek legal advice from attorneys or legal aid organizations
- Contact support organizations like the Cyber Civil Rights Initiative
As the technology advances, the potential for fraud only grows. We must adapt to a reality where visual and audio evidence can’t always prove authenticity. That means using good judgment, practicing strong digital security habits, and maintaining skepticism in a world where truth can be convincingly faked.