Simeon Spencer

Google Spearheading Detection of Gen AI Content in Text, Video, Images, and Audio



As the digital world floods with AI-generated content on social media and other platforms, the lines between human-made and machine-edited material are becoming blurry. This blurring creates a perfect playground for scammers and opportunists to spin out misleading content—deepfakes, fake images, and voice clones designed to deceive.


However, the real concern isn’t just fraud and misinformation. The overwhelming surge of AI-generated content creates an entirely different challenge: future AI models will be trained on this content, including material generated by other AIs, or even by earlier versions of themselves. This recursive training, as we discussed in a previous article, leads to what researchers call "model collapse." It's a bit like asking a photocopier to copy its own copies until the image is barely recognizable. The result? Degraded models that grow increasingly prone to hallucinations and errors.


To avoid this spiral of digital decay, we’ll need robust tools for detecting AI-generated content. Such tools will be crucial for filtering the data used to train future AI systems, keeping them accurate and out of the trap of self-replicating nonsense.


Google’s SynthID for Detection of AI-Generated Content


Google has jumped into the fray to tackle the growing issue of AI content transparency, partnering with the Coalition for Content Provenance and Authenticity (C2PA), a global group dedicated to making clear where our digital content comes from. Google also sits on the C2PA’s steering committee, because, of course, it does.


The highlight of this initiative is a tool called SynthID, which digitally watermarks AI-generated content without perceptibly changing how it looks, reads, or sounds. SynthID doesn’t stop at images and videos; it can watermark text and audio, too. And here’s the clever bit: the watermarks are embedded so subtly that you won’t even notice they’re there.


Text Watermarking


An LLM (large language model) generates text one piece at a time; each piece, called a "token," can be a character, a word, or part of a phrase. At every step, the model assigns probability scores to candidate next tokens and picks one based on the text so far. SynthID steps in here, subtly adjusting those scores to favor certain tokens without degrading the quality of the output. Repeated token after token, this pattern of adjusted choices becomes the watermark, and the longer the text, the more reliably SynthID can detect it.
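To make that concrete, here’s a minimal sketch of the "green list" watermarking idea from published LLM research. This is a simplified scheme in the spirit of what’s described above, not Google’s actual SynthID algorithm, and the vocabulary, key, and bias value are all illustrative assumptions. A secret key pseudo-randomly marks half the vocabulary as "green" at each step, generation gives green tokens a small score boost, and detection simply counts how often the text lands on green.

```python
import hashlib
import random

# Toy vocabulary and secret key: illustrative stand-ins, not SynthID's.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast"]
SECRET_KEY = "demo-key"

def green_list(prev_token: str) -> set:
    """Pseudo-randomly mark half the vocabulary as 'green', seeded by
    the previous token plus a secret key only the watermarker knows."""
    seed = hashlib.sha256((SECRET_KEY + prev_token).encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, k=len(VOCAB) // 2))

def pick_token(prev_token: str, scores: dict, bias: float = 2.0) -> str:
    """Nudge the model's probability scores so green tokens are slightly
    favored, then take the top-scoring token."""
    greens = green_list(prev_token)
    adjusted = {t: s + (bias if t in greens else 0.0) for t, s in scores.items()}
    return max(adjusted, key=adjusted.get)

def green_fraction(text: str) -> float:
    """Detector: what fraction of tokens fall on the green list? Ordinary
    text hovers near 0.5; watermarked text scores well above that."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

Because unwatermarked text lands on the green list only about half the time by chance, the statistical signal strengthens with every extra token, which is why longer passages are easier to call.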


Example of a SynthID watermark in text generated by Google Gemini


Audio and Music Watermarking


SynthID doesn’t stop at text. It can embed a digital watermark into AI-generated audio or music that humans can’t hear but that shows up clearly on a spectrogram. These watermarks are remarkably tough; they stick around even after the audio is compressed or edited.
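Google hasn’t published SynthID’s audio internals, but the core idea, a signal too faint to hear yet plainly visible in the frequency domain, is easy to demo. The sketch below is purely illustrative: the 17.5 kHz carrier, amplitudes, and 50 ms windows are all assumptions for the demo, and a real system would survive compression far better. It hides bits as a whisper-quiet tone and reads them back by checking energy at that frequency.

```python
import numpy as np

SR = 44_100           # sample rate in Hz (assumption for the demo)
MARK_FREQ = 17_500    # carrier near the top of human hearing (assumption)
MARK_LEVEL = 0.002    # far quieter than the music itself
SEG = int(0.05 * SR)  # each bit occupies a 50 ms window

def embed_mark(audio: np.ndarray, bits: str) -> np.ndarray:
    """On/off-key a very quiet high-frequency tone: hard to hear, but
    it draws a visible dotted line on a spectrogram."""
    out = audio.astype(np.float64).copy()
    t = np.arange(SEG) / SR
    tone = MARK_LEVEL * np.sin(2 * np.pi * MARK_FREQ * t)
    for i, bit in enumerate(bits):
        if bit == "1" and (i + 1) * SEG <= len(out):
            out[i * SEG:(i + 1) * SEG] += tone
    return out

def read_mark(audio: np.ndarray, n_bits: int) -> str:
    """Recover the bits by measuring energy at the carrier frequency in
    each window (a single-bin DFT); the threshold here is naive."""
    t = np.arange(SEG) / SR
    ref = np.exp(-2j * np.pi * MARK_FREQ * t)
    bits = []
    for i in range(n_bits):
        win = audio[i * SEG:(i + 1) * SEG]
        if len(win) < SEG:
            break
        energy = abs(np.dot(win, ref)) / SEG
        bits.append("1" if energy > MARK_LEVEL / 4 else "0")
    return "".join(bits)

# Quick check on two seconds of quiet noise standing in for music:
music = 0.005 * np.random.default_rng(0).standard_normal(2 * SR)
print(read_mark(embed_mark(music, "10110010"), 8))  # expect "10110010"
```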


Example of a SynthID watermark in audio generated by Google Lyria




Image and Video Watermarking


And, of course, there’s image and video watermarking. SynthID places an invisible watermark on AI-generated images and on every frame of AI-generated video. Try finding one in a watermarked image; you won’t, but trust the tech: it’s there.
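SynthID’s image watermark is a learned, neural-network affair whose details Google keeps to itself, but classic spread-spectrum watermarking shows how a mark can be invisible yet reliably detectable. In this illustrative sketch, where the key and strength are made-up values, every pixel is nudged up or down a couple of intensity steps by a keyed pseudo-random pattern; the eye can’t see the change, but correlating against the same pattern makes it obvious.

```python
import numpy as np

KEY = 1234  # illustrative secret key, not anything from SynthID

def keyed_pattern(shape) -> np.ndarray:
    """A pseudo-random +/-1 pattern derived from the secret key."""
    rng = np.random.default_rng(KEY)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(img: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Shift each pixel up or down by a couple of intensity steps
    according to the keyed pattern: invisible to the eye."""
    marked = img.astype(np.float64) + strength * keyed_pattern(img.shape)
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect(img: np.ndarray) -> float:
    """Correlate against the keyed pattern: unmarked images score near
    zero, marked ones near the embedding strength."""
    centered = img.astype(np.float64) - img.mean()
    return float((centered * keyed_pattern(img.shape)).mean())

# Quick check on random pixels standing in for a photo:
photo = np.random.default_rng(0).integers(0, 256, (512, 512), dtype=np.uint8)
print(detect(photo), detect(embed(photo)))  # roughly 0.0 vs roughly 2.0
```

For video, the same idea applies frame by frame, which matches the article’s point that SynthID marks every frame individually.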


Example of a SynthID watermark in an image generated by Google Imagen 3


A Unified Platform for Identifying AI-Generated Content Is Essential for AI to Advance


While SynthID is still in its early days, there’s a predictable hurdle looming on the horizon. At the moment, SynthID is a Google-exclusive tool, integrated only into Google’s own AI models. And let’s be honest—other major AI companies are not exactly lining up to plug Google’s tech into their systems, no matter how noble the cause.


What’s likely to happen is that OpenAI, Meta, and the other AI heavyweights will roll out their own versions of tools like SynthID. Each will want to make sure its future models don’t collapse under the weight of content generated by its earlier ones, while also needing detection tools that work on content produced by rival systems.


The problem? This fragmented approach, with each company building its own AI identification system, could slow the entire industry’s progress. Eventually, we’ll need a unified solution—one tool to rule them all, so to speak—that can watermark and identify AI-generated content across all models. Only then will we be able to confidently train future AIs without risking model collapse, while also curbing the spread of AI-fueled misinformation.

 
