Key Points
- X adopted the IBSA Principles in 2024 to combat all forms of intimate‑image abuse.
- Advocates say X is not fulfilling its voluntary commitments to prevent non‑consensual image distribution.
- The Grok AI tool is under scrutiny for allowing generation of images involving underage girls.
- Investigations are underway in Europe, India, and Malaysia, potentially forcing safety updates.
- U.S. regulators could act under the Take It Down Act if harmful outputs persist into May.
- Child‑protection groups stress that safeguarding children must remain a non‑negotiable priority.
Background on X’s commitments
X has been vocal about policing its platform for child sexual abuse material (CSAM) since Elon Musk took over the service. Under former CEO Linda Yaccarino, the company adopted a broad protective stance against all image‑based sexual abuse (IBSA). In 2024, X became one of the earliest corporations to voluntarily adopt the IBSA Principles, which seek to combat all kinds of IBSA, recognizing that even fake images can “cause devastating psychological, financial, and reputational harm.” When it adopted the principles, X vowed to prevent the non‑consensual distribution of intimate images by providing easy‑to‑use reporting tools and quickly supporting the needs of victims desperate to block “the nonconsensual creation or distribution of intimate images” on its platform.
Criticism from advocacy groups
Kate Ruane, the director of the Center for Democracy and Technology’s Free Expression Project, which helped form the working group behind the IBSA Principles, told Ars that although the commitments X made were “voluntary,” they signaled that X agreed the problem was a “pressing issue the company should take seriously.” Ruane said, “They are on record saying that they will do these things, and they are not.” Child safety advocates are alarmed by the sluggish response. A spokesperson for the National Center for Missing & Exploited Children (NCMEC) told Ars, “Technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children. As AI continues to advance, protecting children must remain a clear and nonnegotiable priority.”
Grok controversy and international probes
The controversy surrounding Grok, X’s AI chatbot, has sparked investigations in Europe, India, and Malaysia. These probes may force xAI, the company behind Grok, to update the tool’s safety guidelines or make other tweaks to block the worst outputs. The focus is on whether Grok allows users to generate or request images of underage girls, even when users claim “good intent.”
Potential U.S. legal actions
In the United States, xAI may face civil suits under federal or state laws that restrict intimate image abuse. If Grok’s harmful outputs continue into May, X could face penalties under the Take It Down Act, which authorizes the Federal Trade Commission to intervene if platforms don’t quickly remove both real and AI‑generated non‑consensual intimate imagery. However, whether U.S. authorities will intervene soon remains unknown, as Musk is a close ally of the Trump administration. A spokesperson for the Justice Department told CNN that the department “takes AI‑generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.”
Calls for enforcement
Ruane emphasized that “laws are only as good as their enforcement,” adding that law enforcement at the Federal Trade Commission or the Department of Justice must be willing to go after companies if they are in violation of the laws. The ongoing debate highlights the tension between rapid AI development and the responsibility to protect vulnerable individuals from exploitation.
Source: arstechnica.com