ChatGPT’s latest version is citing Elon Musk’s AI-made encyclopedia, Grokipedia, instead of Wikipedia for some queries, raising concerns about bias and credibility.
Quick Summary – TLDR:
- ChatGPT’s GPT-5.2 model was found citing Grokipedia in response to obscure and technical queries.
- Grokipedia is an AI-generated encyclopedia launched by Elon Musk’s xAI as a rival to Wikipedia.
- Critics warn that Grokipedia carries ideological bias and misinformation on sensitive issues.
- The shift signals a larger trend in how AI systems prioritize sources for real-time data.
What Happened?
Independent testing of OpenAI’s GPT-5.2 model has revealed a subtle but significant change in how the chatbot sources its information. Instead of drawing on Wikipedia, its usual reference, the model cited Grokipedia, Elon Musk’s AI-generated encyclopedia, in nine separate instances during a dozen-question trial. This selective usage has sparked intense scrutiny across the tech and AI communities.
BREAKING: ChatGPT is now citing Elon Musk’s Grokipedia as a source in some replies. pic.twitter.com/Blnw2CfoIU
— DogeDesigner (@cb_doge) January 24, 2026
AI Swaps Wikipedia for Grokipedia in Select Queries
A recent Guardian investigation uncovered that GPT-5.2 leaned on Grokipedia for information involving politically sensitive or underreported topics. The model referenced Grokipedia when answering questions about:
- Iran’s political structure, including salary details of the Basij paramilitary force.
- Ownership of the Mostazafan Foundation, a powerful Iranian economic entity.
- Sir Richard Evans, particularly his expert witness role in a libel trial involving Holocaust denial.
- Alleged ties between MTN-Irancell and Iran’s Supreme Leader, going beyond what Wikipedia typically reports.
While Wikipedia has long been the gold standard for crowd-sourced encyclopedic knowledge, GPT-5.2’s selective preference for Grokipedia reflects a changing landscape in how AI models retrieve and weigh real-time or niche data.
What Is Grokipedia?
Launched by xAI in October 2025, Grokipedia was created as a direct competitor to Wikipedia, aiming to offer faster updates and algorithm-driven curation. But from the start, it has courted controversy. Critics have pointed out that many of its articles:
- Mirror Wikipedia’s content without clear attribution.
- Promote conservative viewpoints, particularly on hot-button issues like gay marriage, climate change, and the January 6 Capitol riot.
- Include inflammatory claims, such as suggesting pornography contributed to the AIDS crisis and providing ideological justifications for slavery.
- Use derogatory language for transgender individuals, raising red flags about inclusivity and factual integrity.
Despite these criticisms, Grokipedia’s AI-managed infrastructure allows it to update quickly and maintain consistency, appealing to language models needing structured, machine-readable data.
OpenAI and Anthropic Respond
OpenAI, responding to the controversy, explained that GPT-5.2 is designed to pull from a broad mix of publicly available sources. A spokesperson emphasized that safety filters are in place to reduce the risk of misinformation and that all sources are transparently cited in responses.
Interestingly, OpenAI is not alone. Anthropic’s Claude AI has also reportedly used Grokipedia in answering questions related to petroleum production and Scottish ales, signaling that more AI systems might be experimenting with this controversial source.
Selective Use Highlights AI’s Risk Management
One noteworthy finding was GPT-5.2’s apparent caution. The model did not cite Grokipedia when asked about topics where the platform’s inaccuracies have been widely documented, such as the January 6 insurrection, media bias around Donald Trump, or the HIV/AIDS epidemic. This suggests built-in safeguards may be steering the model away from unreliable data on contentious topics, though concerns about less-visible inaccuracies persist.
SQ Magazine Takeaway
This story hits close to home for anyone who relies on AI tools for information. I use ChatGPT every day, and knowing that it’s pulling answers from a highly controversial and biased source like Grokipedia, even if only occasionally, makes me pause. Sure, it’s limited to niche topics for now, but this could be a slippery slope. I believe AI needs to be held to higher standards of transparency and accuracy, especially when it’s replacing sources like Wikipedia that are community-driven and openly reviewed. We should all keep an eye on where our AI gets its facts.