Key Points
- Testing by Ahrefs found Grokipedia cited in over 263,000 ChatGPT responses.
- Google’s AI products, including Gemini, also show increased Grokipedia citations.
- Grokipedia is AI‑generated and lacks human editorial oversight.
- Experts warn that even limited use can amplify misinformation and bias.
- OpenAI states users can view source citations to assess reliability.
Growth of Grokipedia Citations
Analysts tracking AI‑generated answers have found that Grokipedia, the encyclopedia produced by Elon Musk’s xAI chatbot Grok, is appearing more often as a source in responses from several major AI platforms. Testing by Ahrefs found that Grokipedia was referenced in over 263,000 ChatGPT answers, drawn from roughly 13.6 million prompts. By comparison, Wikipedia appeared in 2.9 million ChatGPT responses.
Semrush’s AI Visibility Toolkit observed a similar uptick in citations across Google’s AI products, including Gemini, AI Overviews, and AI Mode, during December. While Grokipedia remains a secondary source, its visibility in these tools has risen steadily since its launch in late October.
Platform‑Specific Findings
Data from Ahrefs indicated that Grokipedia appeared in around 8,600 Gemini answers, 567 AI Overviews answers, 7,700 Copilot answers, and 2 Perplexity answers, based on millions of prompts for each service. The share of citations remains small—about 0.01 to 0.02 percent of daily ChatGPT citations—but the trend is upward.
Nature of the Source
Unlike Wikipedia, which relies on human editors and transparent revision histories, Grokipedia is generated entirely by the Grok chatbot. Early versions cloned Wikipedia articles, but many entries reflected controversial or biased viewpoints, including content that downplayed certain historical facts or presented extremist perspectives. The lack of human oversight makes the encyclopedia vulnerable to “LLM grooming” or data‑poisoning attacks.
Expert Concerns
Industry experts warn that even limited use of Grokipedia as a reference can amplify misinformation. Jim Yu of BrightEdge noted that AI Overviews typically list Grokipedia alongside other sources, treating it as supplementary. In contrast, ChatGPT often places Grokipedia among the first cited sources, giving it greater apparent authority.
Critics argue that the AI‑generated nature of Grokipedia, combined with its opaque sourcing, raises the risk of spreading biased or inaccurate information. The concern is that fluency can be mistaken for reliability, especially when AI tools present citations without clear context.
Responses from AI Providers
An OpenAI spokesperson emphasized that ChatGPT draws from a broad range of publicly available sources and that users can view the cited references to assess reliability. Google and other providers declined to comment on the specific rise in Grokipedia citations.
Implications
The growing presence of Grokipedia in AI‑generated answers highlights the broader challenge of ensuring trustworthy sources in the expanding landscape of large language models. As AI tools continue to integrate new, automatically generated references, the industry faces pressure to develop safeguards that prevent the propagation of low‑quality or biased content.
Source: theverge.com