Key Points
- Prime Minister Keir Starmer says the UK will take action against X over Grok‑generated deepfakes.
- Starmer calls the sexualized AI content “disgusting” and vows to explore all options.
- X launched a Grok feature that lets users edit any image on the platform without the original creator's permission, sparking a flood of deepfakes.
- The deepfakes include undressed women and, in some cases, children.
- Ofcom is assessing whether X has breached the Online Safety Act and may open a formal investigation.
- X states that users who create illegal content with Grok will face the same penalties as uploaders.
- X has not responded to a request for comment from The Verge.
Government Response
Prime Minister Keir Starmer told listeners on Greatest Hits Radio that the United Kingdom will take action against X following reports that the platform's Grok AI chatbot is being used to produce sexualized deepfakes of both adults and minors. He described the material as "disgusting" and said the government will consider all options to address the problem, emphasizing that such content will not be tolerated.
Platform Features and Fallout
Last month, X introduced a feature that lets users employ Grok to edit any image on the platform without the original creator's permission. The rollout quickly led to a surge of AI‑generated deepfakes depicting undressed women and, in some instances, children. The proliferation of this content prompted widespread criticism and calls for accountability.
Regulatory Investigation
The UK communications regulator, Ofcom, has begun probing whether X may be violating the Online Safety Act, legislation that requires online platforms to protect users from illegal and harmful content. An Ofcom spokesperson said the regulator would conduct a swift assessment to determine whether there are compliance issues that merit further investigation.
Company Stance
X has said that anyone who uses or prompts Grok to create illegal content will face the same consequences as if they had uploaded illegal material directly. The company did not immediately respond to The Verge's request for comment.
Implications
The episode underscores growing concerns about AI‑driven manipulation on social media and the challenges regulators face in enforcing existing safety laws. It also highlights the tension between platform innovation and the responsibility to prevent harmful misuse of powerful AI tools.
Source: theverge.com