Last week, Google quietly announced that it would be adding a visible watermark to AI-generated videos made using its new Veo 3 model.
And if you look really closely while scrolling through your social feeds, you might be able to see it.
The watermark can be seen in videos released by Google to promote the launch of Veo 3 in the UK and other countries.
Credit: Screenshot: Google
Google announced the change in an X thread from Josh Woodward, VP of Google Labs and Google Gemini.
According to Woodward’s post, the company added the watermark to all Veo videos except those generated in Google’s Flow tool by users on a Google AI Ultra plan. The visible mark complements the invisible SynthID watermark already embedded in all of Google’s AI-generated content, as well as the SynthID Detector tool, which recently rolled out to early testers but is not yet broadly available.
The visible watermark “is a first step as we work to make our SynthID Detector available to more people in parallel,” Woodward wrote in his X post.
In the weeks since Google introduced Veo 3 at Google I/O 2025, the AI video model has garnered lots of attention for its strikingly realistic videos, especially because it can also generate realistic audio and dialogue. The clips posted online aren’t just fantastical renderings of animals acting like humans, although there’s plenty of that, too. Veo 3 has also been used to generate more mundane footage, including man-on-the-street interviews, influencer ads, fake news segments, and unboxing videos.
If you look closely, you can spot telltale signs of AI, like overly smooth skin and erroneous artifacts in the background. But if you’re passively doomscrolling, you might not think to double-check whether the emotional support kangaroo casually holding a plane ticket is real or fake. People being duped by an AI-generated kangaroo is a relatively harmless example. But Veo 3’s widespread availability and realism introduce a new level of risk for the spread of misinformation, according to AI experts interviewed by Mashable for this story.
The new watermark should reduce those risks, in theory. The problem is that the visible watermark isn’t all that visible. In a video Mashable generated using Veo 3, you can see a “Veo” watermark in a pale shade of white in the bottom right-hand corner of the frame. See it?

A Veo 3 video generated by Mashable includes the new watermark.
Credit: Screenshot: Mashable
How about now?

Google’s Veo watermark.
Credit: Screenshot: Mashable
“This small watermark is unlikely to be apparent to most consumers who are moving through their social media feed at a break-neck clip,” said digital forensics expert Hany Farid. Indeed, it took us a few seconds to find it, and we were looking for it. Unless users know to look for the watermark, they may not see it, especially if viewing content on their mobile devices.
A Google spokesperson told Mashable by email, “We’re committed to developing AI responsibly and we have clear policies to protect users from harm and governing the use of our AI tools. Any content generated with Google AI has a SynthID watermark embedded and we also add a visible watermark to Veo videos too.”
“People are familiar with prominent watermarks like Getty Images, but this one is very small,” said Negar Kamali, a researcher studying people’s ability to detect AI-generated content at Kellogg School of Management. “So either the watermark needs to be more noticeable, or platforms that host images could include a note beside the image — something like ‘Check for a watermark to verify whether the image is AI-generated,'” said Kamali. “Over time, people could learn to look for it.”
However, visible watermarks aren’t a perfect remedy. Both Farid and Kamali told us that videos with watermarks can easily be cropped or edited. “None of these small — visible — watermarks in images or video are sufficient because they are easy to remove,” said Farid, who is also a professor at UC Berkeley School of Information.
But he noted that Google’s invisible SynthID watermark “is quite resilient and difficult to remove.” Farid added, “The downside is that the average user can’t see this [SynthID watermark] without a watermark reader, so the goal now is to make it easier for the consumer to know if a piece of content contains this type of watermark.”