Elon Musk’s AI-written “Grokipedia” is starting to show up inside chatbot answers as a cited source, and it’s raising fresh concerns about a future “dead internet” filled with automated, low-trust content.
Reports say OpenAI’s newest flagship model, GPT-5.2, cited Grokipedia multiple times across different questions, and similar citations have appeared in other chatbots’ responses as well. Musk launched Grokipedia last October as an alternative to Wikipedia, but unlike Wikipedia it has no human editors, relying instead on AI to generate articles at scale.
Critics point out that Grokipedia often reframes sensitive topics in ways that match Musk’s political leanings. Examples include softer wording around the January 6, 2021 attack on the U.S. Capitol and more favorable descriptions of extremist groups and conspiracy narratives, where Wikipedia uses clearer labels and stricter sourcing standards. That is the core risk: Grokipedia can publish huge volumes of unverified content quickly, without the human editorial checks that usually improve accuracy.
Researchers also warn that bad actors can flood the web with AI content to influence future models, a tactic some call “LLM grooming.” Even without deliberate manipulation, heavy recycling of AI-written text can trigger “model collapse,” where model quality drops over time as systems train on their own outputs.
For SEO practitioners, and for a healthier information ecosystem generally, the safest approach is simple: prioritize verified sources, cite responsibly, and avoid amplifying low-quality AI-generated pages that can pollute both search results and chatbot training data.

