Key Points
- xAI released Grok Imagine with a “spicy” mode that can generate suggestive or nude images of real people.
- Within a day, the service generated over 34 million images, indicating strong user interest.
- Advocacy groups warn the tool enables non‑consensual deepfake pornography.
- The Take It Down Act criminalizes publishing such content, but its definition of “publication” may exclude single‑user outputs.
- Legal experts argue Grok Imagine may not be a “covered platform” under the Act, limiting liability.
- UK regulators have implemented age‑gating rules for sexual content, and activist pressure has tightened policies on other platforms.
- Apple has banned smaller apps that facilitate AI‑generated nudes, but has not confirmed any action on Grok Imagine.
- The controversy highlights challenges in balancing AI innovation with protections against image‑based sexual abuse.

Launch of Grok Imagine and Its Capabilities
Early in the week, xAI, the artificial‑intelligence arm of Elon Musk’s business empire, introduced Grok Imagine, an image‑ and video‑generation service that features a “spicy” mode. This mode can produce content ranging from suggestive gestures to full nudity, and it does not appear to include guardrails preventing the creation of images that resemble real individuals. Within a day of its debut, the service reportedly generated more than 34 million images, indicating rapid user adoption.
Legal and Ethical Concerns
Advocacy groups such as the Rape, Abuse & Incest National Network (RAINN) have described the new feature as part of a growing problem of image‑based sexual abuse. Legal scholars note that the federal Take It Down Act criminalizes the intentional publication of non‑consensual intimate imagery, but the statute’s language focuses on “publication” to a broader audience. Because Grok Imagine’s output is typically viewed only by the user who created it, experts argue the service may fall outside the Act’s scope.
Furthermore, the Act defines a “covered platform” as one that primarily provides a forum for user‑generated content. Since Grok Imagine generates content based on AI rather than hosting user‑uploaded material, it may not meet that definition, potentially limiting liability under the law.
Regulatory Landscape and Industry Responses
Regulators in the United Kingdom have recently begun enforcing age‑gating rules that require platforms to block sexual or otherwise harmful content for users under 18. Meanwhile, pressure from activist groups has led platforms such as Steam and Itch.io to tighten policies on adult‑oriented games and media. In the United States, the Take It Down Act has yet to be robustly enforced against large technology firms, and some observers suggest that high‑profile companies like xAI may face limited scrutiny.
Apple’s recent actions—banning smaller apps that facilitate AI‑generated nudes of real people—demonstrate another layer of gatekeeping. Although Grok Imagine is available on iOS, Apple has not commented on whether it will apply similar restrictions to the service.
Potential Implications for Users and Platforms
The ease with which Grok Imagine can generate realistic, potentially non‑consensual pornographic images raises concerns about the rapid spread of deepfake content. Even if the tool itself does not publish the images, users could download and share them on social media, circumventing the Act’s apparent focus on publication by the platform itself.
Industry analysts warn that the situation underscores a broader challenge: the promise of a “safer” internet is undermined when powerful companies can monetize content that may be illegal in certain contexts, while smaller platforms face pressure to remove comparable material.
Outlook
As AI‑generated media continues to evolve, lawmakers, regulators, and technology firms will need to clarify definitions of publication, platform responsibility, and user consent. The controversy surrounding Grok Imagine illustrates the tension between innovative AI services and the need to protect individuals from non‑consensual exploitation.
Source: theverge.com