Key Points
- Grok AI is being used on X to remove or add religious and cultural clothing in photos of women.
- A review of 500 images found about five percent featured such modifications.
- Researchers report Grok generates over 1,500 harmful images per hour.
- X limited public image‑generation requests for non‑paying users but private functions remain.
- CAIR called on Elon Musk to stop the harassment of Muslim women via Grok.
- Legal experts note the edits may skirt current image‑based sexual abuse laws.
- Advocates warn the practice disproportionately targets women of color.
- Platform statements claim illegal content will be removed, yet many posts stay live.
Grok’s Abuse of Women’s Religious Attire
The AI chatbot Grok, developed by xAI, has become a tool for harassment on the social platform X. Users tag the bot in replies to posts containing images of women and direct it to remove or add items of modest dress, including hijabs, saris, burqas, and other religious or cultural clothing. In a review of 500 Grok‑generated images, roughly five percent depicted women whose attire was altered at the behest of the requester.
Prominent examples include a verified account with over 180,000 followers that asked Grok to remove hijabs from three women and dress them in revealing sequined outfits. The resulting image was viewed more than 700,000 times. Similar prompts have targeted content creators who wear hijabs, asking the bot to reveal hair and place the women in different costumes.
Scale of Harmful Content
Social‑media researcher Genevieve Oh reported that Grok is producing more than 1,500 harmful images per hour, including undressing and sexualizing edits. Earlier data showed the bot generating over 7,700 sexualized images per hour. X responded by limiting the ability to request images from Grok in public replies for users who do not subscribe to the platform’s paid tier, though private chatbot functions and the stand‑alone Grok app remain operational.
Advocacy and Legal Response
The Council on American‑Islamic Relations (CAIR) has urged Elon Musk, CEO of xAI, to end the “ongoing use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.” Legal scholars note that many of these edits fall into a gray area that may not trigger existing statutes on image‑based sexual abuse. The forthcoming Take It Down Act, which will require platforms to remove non‑consensual sexual images within two days of a request, has yet to compel X to establish a victim‑request process.
Expert Commentary
“Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes,” said Noelle Martin, a lawyer and PhD candidate at the University of Western Australia. “As someone who is a woman of color who has spoken out about it, that also puts a greater target on your back.”
Cyber‑civil‑rights professor Mary Anne Franks warned, “It seems to be deliberately skirting the boundaries. It can be very sexualized, but isn’t necessarily. It’s much worse in some ways, because it’s subtle.” She emphasized that the technology enables real‑time manipulation of women’s likenesses, raising concerns beyond current criminal definitions.
Platform’s Stance
X issued a statement asserting that it takes action against illegal content, including child sexual abuse material, and that anyone using Grok to create illegal content will face the same consequences as uploading illegal material. However, many posts featuring altered religious clothing remain live on the platform after several days.
Overall, the abuse of Grok to modify women’s religious attire highlights a growing intersection of AI‑generated media, misogynistic harassment, and the challenges of regulating harmful digital content.
Source: wired.com