Five ways to spot AI-generated videos


WHAT you see online may not be true; in the age of AI, even your own eyes can fool you.

With artificial intelligence evolving rapidly, the internet is flooded with videos that look real but aren’t. From fake interviews to synthetic news reports, deepfakes have become harder to detect: they no longer give themselves away with awkward smiles or stiff gestures, and spotting them now takes more than noticing a twitchy eye.

Tools such as DALL·E and Veo 3 can generate stunning visuals and cinematic videos with just a prompt. While this technology opens up new creative possibilities, it also poses significant risks.

The challenge? These videos often look indistinguishably real.

ALSO READ: Five tools to detect audio deepfakes

For instance, a close observation of a video posted by an X user in June, which showed a female journalist reporting on a flood in Abeokuta, reveals subtle but telling signs of AI generation. Despite the human-like movements and natural-sounding speech, some inconsistencies gave it away: during a quick turn by the presenter, an umbrella suddenly appears in her hand, though it wasn’t there moments earlier.

Screenshot of the video from X

Additionally, her emotionless reaction to someone falling nearby, an incident that would typically prompt concern, further suggests inauthentic human behaviour. Several such videos have gone viral.

Another example is a video showing a collapsing flyover bridge. While the scenery and structure appear realistic, the behaviour of the white car driving toward the collapsed section is implausible; it is not how a driver would react in the face of danger.

Screenshot of the video from Facebook

As a viewer, journalist, or active digital citizen, spotting AI-generated content isn’t just a helpful skill; it’s a critical one.

This tutorial will walk you through how to recognise videos created with tools like DALL·E (which generates images often edited into video sequences) and Veo (which produces entire video scenes from a prompt).

Look for visual anomalies

AI-generated videos, though increasingly sophisticated, often reveal their artificial nature through subtle glitches and inconsistencies. Watch for odd lighting or mismatched shadows, distorted hands or fingers (common when still images, such as DALL·E outputs, are animated into video), unnatural eye movements or blinking, and warped or melting objects in the background.

In tools like Veo 3, robotic body movements, overly smooth transitions, altered or incoherent subtitles, and audio that sounds too perfect or lacks ambient noise can all signal synthetic content. Even complex scenes may feature objects that are inconsistently placed or lit, making careful observation essential.

Flawless, yet fake

Veo, especially in its latest version (Veo 3), can generate cinematic scenes with hyperrealism, but the results often come with subtle giveaways. Look out for over-perfect environments like spotless skies, mirror-smooth water, or unnaturally fluid motion.

Transitions between scenes or camera angles may also feel jarring or robotic. The actors themselves might not blink, breathe, or move quite naturally, giving off an uncanny effect. Veo 3 can even create scenarios that defy logic or physics: think improbable events, flawless characters, or backgrounds with strange, repeating patterns that don’t belong in real-world settings.

Context is equally important. Veo 3, for instance, is a paid tool, so a sudden flood of short, high-quality videos around breaking news could signal AI involvement.

A recent example is the flyover bridge accident in Nasarawa, where AI-generated images were circulated to support the claim. However, The FactCheckHub found these visuals to be misleading.

Similarly, if a video claims to show a major event but isn’t reported by any reputable outlet, that might be a red flag.

Scrutinise metadata

Tools like InVID can help verify video authenticity by extracting metadata such as timestamps, GPS coordinates, and device information. AI-generated videos often lack this data or include vague, generic tags. If you have access to the original video file, inspect its metadata for signs like “Veo,” “Gemini,” or “Google AI.”

Keep in mind that metadata can be stripped or altered, so it shouldn’t be your only line of verification.
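As a rough illustration of this step, the sketch below scans a video file’s raw bytes for marker strings that AI tools sometimes leave behind in metadata. The marker list here is an assumption for illustration, not an exhaustive or authoritative set, and finding no markers proves nothing on its own; dedicated tools like InVID or ExifTool remain the proper way to inspect metadata.

```python
from pathlib import Path

# Illustrative marker strings; real metadata tags vary by tool and version.
AI_MARKERS = [b"Veo", b"Gemini", b"Google AI", b"SynthID", b"DALL-E"]

def find_ai_markers(video_path):
    """Return any known AI-tool marker strings found in the file's raw bytes.

    This is a crude heuristic: markers can be stripped, and their absence
    does not mean the video is authentic.
    """
    data = Path(video_path).read_bytes()
    return [m.decode() for m in AI_MARKERS if m in data]
```

For example, `find_ai_markers("clip.mp4")` would return `["Gemini"]` if that string appears anywhere in the file’s bytes. Treat a hit as a prompt for further verification, not proof.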

READ: How to protect yourself from AI voice cloning scams, audio deepfakes – Experts

Use detection tools

Several tools can help detect AI-generated or manipulated media. Deepware Scanner is designed to spot deepfake facial alterations, while Hive Moderation uses AI detection to flag likely synthetic images or video frames. InVID is especially useful for verifying videos, allowing frame-by-frame analysis and metadata extraction. Another handy tool is AI or Not, which helps determine whether an image or video was created using artificial intelligence.

The FactCheckHub has used one or more of these tools to verify images or videos in our previous fact-checks, as can be seen here and here.

Look for disclosures

Some creators voluntarily tag or watermark AI-generated content, so it’s worth looking closely. Check for phrases like “generated with AI,” “created using DALL·E,” or “Veo3” in captions, file names, or project descriptions, especially if they include prompt-style language.
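The caption check described above can be automated in a simple way. The sketch below searches a caption or description for common disclosure phrases; the phrase patterns are illustrative assumptions, since real-world tags vary widely across platforms and creators.

```python
import re

# Illustrative disclosure patterns; not an exhaustive list.
DISCLOSURE_PATTERNS = [
    r"generated with ai",
    r"created using dall[·\-\s]?e",
    r"\bveo\s?3\b",
    r"ai[- ]generated",
]

def find_disclosures(text):
    """Return the disclosure patterns matched in a caption or description."""
    lowered = text.lower()
    return [p for p in DISCLOSURE_PATTERNS if re.search(p, lowered)]
```

A caption like “This clip was generated with AI using Veo 3” would match two of these patterns, while an undisclosed AI video would of course match none, so this check only helps when creators have been honest.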

Google embeds both visible and invisible watermarks in every Veo 3 video. You might spot small logos or “AI-generated” text within the frame. More subtly, Veo 3 videos contain SynthID, an invisible digital signature detectable only with specialised tools. While the general public can’t access SynthID scanners, certain fact-checkers and organisations can use them to verify a video’s origin.

In conclusion, detecting AI-generated content is becoming more complex as the technology continues to advance. However, by combining keen observation with the right tools and questioning content that lacks credible context or sources, we can stay a step ahead of AI-driven misinformation. Remember, staying informed isn’t just about consuming content; it’s about verifying it.


Seasoned fact-checker and researcher Fatimah Quadri has written numerous fact-checks, explainers, and media literacy pieces for The FactCheckHub in an effort to combat information disorder. She can be reached at sunmibola_q on X or fquadri@icirnigeria.org.
