AI‑Generated Deepfakes Exploit Black Creators Amid DoorDash Controversy

Key Points

  • DoorDash driver’s assault claim went viral and was later dismissed by police.
  • AI‑generated deepfake videos used Black creators’ faces without consent.
  • The practice, termed digital blackface, spreads racially stereotyped content.
  • Bot accounts posted multiple deepfakes, echoing identical talking points.
  • TikTok and OpenAI faced criticism for inadequate removal of harmful AI content.
  • Advocacy groups call for stronger platform safeguards and legal protections.
  • The Take It Down Act, signed in 2025, criminalizes non‑consensual deepfakes.

The Viral ‘DoorDash Girl’ Saga Unearthed a Nightmare for Black Creators

DoorDash Allegations Trigger Viral Reaction

A DoorDash driver posted a TikTok claiming she was sexually assaulted while making a delivery. The video quickly amassed tens of millions of views, drawing both support and skepticism. Police later dismissed the assault allegation, and the driver herself faced felony charges for unlawful surveillance.

AI Deepfakes and Digital Blackface Emerge

Shortly after, TikTok users discovered videos that used the face and voice of Black creator Mirlie Larose without her permission. These AI‑generated deepfakes repeated the same talking points, adopting a DARVO (deny, attack, reverse victim and offender) stance that defended the alleged perpetrator and justified the driver's termination. The phenomenon was identified as digital blackface: racially stereotyped portrayals created by non‑Black accounts using generative AI.

Bot Accounts Amplify the Problem

Bot accounts, such as one identified as uimuthavohaj0g, posted multiple deepfake videos featuring Larose and other Black creators. The videos combined out‑of‑context clips from the original DoorDash incident with AI‑generated narration, further spreading misinformation. After a well‑known Black creator warned viewers about the account, the bot page was eventually removed, but similar content persisted for days.

Platform Responses and Policy Gaps

TikTok removed the driver's original video for violating its policy against sexual content, yet creators' initial requests to take down the deepfakes were denied. OpenAI's Sora app, which powers many of the AI videos, faced backlash for producing content featuring stereotyped African American Vernacular English, even though it had previously blocked user‑generated videos of Martin Luther King Jr. following protests from his estate.

Advocacy and Legislative Action

Data for Black Lives founder Yeshimabeit Milner described digital blackface as a form of cultural engineering that fuels engagement through harmful stereotypes. Creators like Zaria Imani are pursuing copyright infringement claims against bot pages. In May 2025, the Take It Down Act was signed into law, criminalizing the distribution of non‑consensual intimate imagery, including AI‑generated deepfakes.

Calls for Greater Accountability

Experts argue that platforms must extend the same protections afforded to celebrities and copyrighted characters to creators whose likenesses are exploited. The ongoing controversy highlights the need for stronger technical safeguards, policy enforcement, and legislative measures to curb the misuse of generative AI.

Source: wired.com