US Attorneys General Target xAI Over Grok’s Nonconsensual Sexual Image Generation

Key Points

  • At least 37 state and territorial attorneys general have taken action against xAI over Grok’s generation of nonconsensual sexual images.
  • A report estimates Grok produced about 3 million photorealistic sexual images in an 11‑day period, including roughly 23,000 involving minors.
  • The attorneys general issued an open letter demanding immediate safeguards, content removal, user controls, and reporting of violations.
  • Arizona, California, Florida, Missouri, North Carolina, Georgia, and other states have opened investigations or issued cease‑and‑desist letters.
  • Half of the states have age‑verification laws, but their applicability to platforms like X and Grok remains uncertain.
  • Lawmakers are debating whether verification should apply to any pornographic content, rather than only to sites where at least one‑third of content is pornographic.
  • xAI has limited its response, claiming to have stopped Grok’s ability to undress people on X but not removing existing nonconsensual content.
  • The controversy highlights the clash between rapid AI development and the need for protective regulations.

State Attorneys General Mobilize Against xAI

A bipartisan group of at least 37 attorneys general from U.S. states and territories has launched coordinated action against xAI, the company behind the chatbot Grok. The attorneys general allege that Grok was used to generate a flood of nonconsensual sexual images, including roughly 23,000 depictions of children, during an 11‑day period that began at the end of December. A recent report from the Center for Countering Digital Hate estimated that Grok’s account on X produced about 3 million photorealistic sexual images in that timeframe.

The attorneys general sent an open letter to xAI demanding that the company “immediately take all available additional steps to protect the public and users of your platforms, especially the women and girls who are the overwhelming target of non‑consensual intimate images.” The letter calls for the removal of Grok’s ability to depict people in revealing clothing or suggestive poses, suspension of offending users, reporting of violations to authorities, and the implementation of user controls over whether their likeness can be edited by the AI.

State Investigations and Legal Pressure

Several states have opened formal investigations. Arizona’s attorney general announced an investigation on January 15, describing the reported imagery as “deeply disturbing.” California’s attorney general sent a cease‑and‑desist letter to Elon Musk on January 16, demanding an end to the creation and distribution of child sexual abuse material (CSAM) and nonconsensual intimate images. Florida’s attorney general reported ongoing discussions with X to ensure child protections are in place. Officials in other states, including Missouri, North Carolina, Georgia, and Nebraska, have also said they have a duty to enforce existing laws and are weighing new legislation.

These actions come amid a broader wave of state‑level interest in AI‑generated sexual content. Earlier in the year, 42 attorneys general co‑signed a letter urging AI companies to adopt additional safeguards for children. A working group of state officials met in mid‑January to discuss emerging AI‑related risks, with particular emphasis on CSAM as an early priority.

Age‑Verification Laws and Their Limits

Half of the states have enacted age‑verification statutes that require users to prove they are not minors before accessing pornographic material. However, the applicability of these laws to platforms like X and the standalone Grok website remains uncertain. Many states have adopted the “one‑third” threshold model, originally set by Louisiana, which triggers verification only when at least one‑third of a site’s content is deemed pornographic or harmful to minors. Lawmakers and privacy advocates debate whether a content‑agnostic approach—requiring verification for any pornographic material—would be more effective.

Industry representatives, including officials from Pornhub’s parent company, have argued for device‑based age verification that would keep personal data on users’ devices rather than on third‑party servers. They suggest such a system could also filter explicit content on social media and AI chatbots.

xAI’s Response and Ongoing Controversy

xAI’s public response to inquiries has been limited, with the company dismissing media coverage as “Legacy Media Lies.” The firm claims to have halted Grok’s ability to undress people on X, but the attorneys general note that nonconsensual content remains accessible and that federal law will soon obligate its removal.

The controversy underscores a tension between rapid AI innovation and the need for regulatory frameworks that protect vulnerable populations. As state officials continue to pressure xAI and explore legislative solutions, the case highlights the growing responsibility of AI developers to anticipate and mitigate misuse of their technologies.

Source: wired.com