Key Points
- Google’s Gemini app can detect a SynthID watermark in images generated by Google’s AI tools.
- The tool confirms AI‑generated content quickly when the watermark is present.
- For images without SynthID, Gemini provides only a general analysis, not a definitive answer.
- Testing shows inconsistent detection across Gemini’s browser version, Gemini 3, Gemini 2.5 Flash, ChatGPT, and Claude.
- The variability underscores the need for universal detection methods that work across all AI generators.
- Industry initiatives like C2PA aim to create standards for content provenance and authentication.
Google’s New Image Verification Tool
Google has added a feature to the Gemini app that lets users submit an image and ask whether it is real. The system looks for a digital watermark called SynthID, which is embedded in images created by Google’s AI models. When the watermark is present, Gemini quickly confirms the image as AI‑generated. In testing, the tool even identified a screenshot containing a watermarked image.
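The in-app SynthID check is not documented as a public API, but the underlying question can be posed to a Gemini model programmatically. Below is a minimal sketch, assuming the google-genai Python SDK, an API key in the GEMINI_API_KEY environment variable, and a hypothetical local file cat.png; the model's reply reflects its visual judgment of the image and is not guaranteed to be a SynthID watermark check.

```python
# Sketch: asking a Gemini model whether an image is AI-generated.
# Assumes the google-genai SDK (pip install google-genai) and an API key
# in the GEMINI_API_KEY environment variable; "cat.png" is a placeholder.
from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment

image = Image.open("cat.png")
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[image, "Was this image generated or edited with AI?"],
)
print(response.text)  # a textual judgment, not a verified SynthID result
```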
Limitations with Non‑Google Content
For images that lack a SynthID watermark, Gemini provides only a general analysis, noting typical signs of artificial creation without reaching a definitive verdict. When asked about an infographic that Google itself generated (which included metadata indicating it was AI‑produced), the SynthID‑equipped app identified it correctly, but the browser version, which lacks the watermark check, hedged, saying the design could have come from either AI or a human.
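Where such metadata survives, it can be inspected with ordinary tools. Below is a minimal sketch in Python, assuming Pillow is installed and using a hypothetical filename; the IPTC value trainedAlgorithmicMedia is the standard DigitalSourceType marker for fully AI‑generated media, though metadata is easily stripped, so its absence proves nothing.

```python
# Sketch: checking an image's metadata for AI-provenance hints.
# Metadata is trivially removable, so this is a positive signal only.
from PIL import Image

path = "infographic.png"  # hypothetical filename

# EXIF "Software" (tag 305) sometimes names the generating tool.
software = Image.open(path).getexif().get(305)
print("EXIF Software:", software)

# IPTC/XMP uses DigitalSourceType = "trainedAlgorithmicMedia" to mark
# fully AI-generated content; a raw byte scan works across formats.
with open(path, "rb") as f:
    if b"trainedAlgorithmicMedia" in f.read():
        print("Metadata declares the image AI-generated")
```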
Inconsistent Results Across Models
Various versions of Gemini and other chatbots produced mixed answers. Gemini 3, Google's more capable reasoning model, offered a detailed explanation for a cat image generated with Nano Banana Pro and correctly labeled it AI‑generated. By contrast, Gemini 2.5 Flash guessed the same image was a real photograph. ChatGPT gave contradictory answers on different days, and Claude's Haiku 4.5 and Sonnet 4.5 models both said the image looked real.
Broader Challenges in AI‑Generated Image Detection
The testing highlights a broader issue: many AI detection tools rely on visible artifacts or model‑specific watermarks, which can be bypassed or are absent in content from other generators. As AI image models improve, traditional visual cues become less reliable. The article argues that a universal, hard‑to‑remove watermark detectable by everyday tools would be more effective.
Future Directions and Industry Efforts
Google’s SynthID check represents a step toward reliable verification, but the article notes the need for broader adoption across platforms, including browser extensions and search engines. Industry groups such as the Coalition for Content Provenance and Authentication (C2PA) are working toward standards that would allow users to verify image provenance without specialized apps or expertise.
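C2PA manifests are embedded in standard container structures (JUMBF boxes carried in JPEG APP11 segments, for example), so even detecting their presence requires no specialized app. Below is a minimal sketch, assuming a JPEG input and a placeholder filename; it only detects an embedded manifest and does not cryptographically verify it the way a full C2PA validator such as the open‑source c2patool would.

```python
# Sketch: detecting (not verifying) an embedded C2PA manifest in a JPEG.
# C2PA stores its manifest in JUMBF boxes inside APP11 (0xFFEB) segments.
import struct

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":           # not a JPEG (SOI marker missing)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                          # lost sync with marker stream
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):
            break                          # end of image / start of scan
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        # APP11 segment whose JUMBF payload carries the "c2pa" label
        if marker == 0xEB and b"jumb" in segment and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

print(has_c2pa_manifest("photo.jpg"))      # placeholder filename
```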
Source: cnet.com