OpenAI CEO Sam Altman is worried that bots have taken over social media so thoroughly that even real users now seem artificial.
Quick Summary – TLDR:
- Sam Altman says Reddit and X feel “very fake”
- Humans are adopting AI-like speech, confusing what’s real
- Reddit threads praising Codex triggered his concern
- Over 50% of 2024 internet traffic came from non-humans
What Happened?
OpenAI CEO Sam Altman posted on X that reading Reddit threads about Codex, OpenAI’s AI coding assistant, made him feel like the entire conversation might be fake. Even though he knows Codex’s growth is real, the overwhelming number of similar-sounding posts led him to question whether they were written by humans or bots.
He later broke down his reasoning, suggesting that it’s not just bots, but humans who have begun to sound like language models too.
i have had the strangest experience reading this: i assume its all fake/bots, even though in this case i know codex growth is really strong and the trend here is real.
— Sam Altman (@sama) September 8, 2025
i think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely… https://t.co/9buqM3ZpKe
AI-Speak, Bots, and a Blurred Reality
Altman’s realization came while browsing the r/ClaudeCode subreddit, where users have been posting about switching from Anthropic’s Claude Code to OpenAI’s Codex, which launched in May. The volume and tone of these posts made Altman pause.
“I assume it’s all fake/bots, even though in this case I know Codex growth is really strong and the trend here is real,” he wrote.
He explained that several elements are making social media feel increasingly inauthentic:
- Humans mimicking AI patterns, especially what he calls “LLM-speak”
- Hyper-online communities reacting in unified, exaggerated ways
- Engagement-focused platforms encouraging viral, predictable content
- Creator monetization models shaping how people interact
- Astroturfing tactics possibly influencing public perception
The irony here is sharp. OpenAI’s own models were trained to sound convincingly human, and they were trained on sites like Reddit, where Altman was once a board member. Now, humans seem to be mimicking those models right back.
Are We Talking to Bots or Just Sounding Like Them?
Altman’s concerns echo a broader unease online. As bots flood platforms like Reddit and X, it’s becoming harder to trust what we see. In his own words, “AI Twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago.”
There is no direct evidence that Reddit posts praising OpenAI are part of a bot campaign. However, Altman noted that OpenAI has previously been targeted by astroturfing, where users or bots are paid to sway public opinion.
Following the launch of GPT-5, Reddit threads unexpectedly shifted in tone, with users voicing frustration instead of support. Altman even stepped in for an AMA session to acknowledge rollout issues. Still, user trust hasn’t fully bounced back.
The Internet Is More Bot Than Human
There’s real data behind this unease. According to cybersecurity firm Imperva, more than 50 percent of all internet traffic in 2024 came from non-human sources. Even Grok, X’s own AI chatbot, has estimated that there are hundreds of millions of bots on the platform.
@elonmusk @grok Exact numbers aren’t public, but 2024 estimates suggest hundreds of millions of bots on X, possibly 300-400M out of 650M users. A Jan 2024 study found 64% of accounts might be bots, and 2025 reports show no clear drop despite purges. It’s a tricky issue with… pic.twitter.com/CmneWHYyuQ
— Grok (@grok) April 8, 2025
As AI tools become more embedded in everyday internet use, the boundary between authentic and artificial becomes harder to detect. Even real posts begin to feel manufactured, as people unconsciously adopt the rhythms and language of the very bots they once tried to spot.
SQ Magazine Takeaway
I totally get where Altman is coming from. These days, I scroll through Reddit or X and catch myself wondering, “Is this even a real person?” Not because the post sounds robotic, but because it sounds too polished, too optimized, too… AI. That’s the wild part. We trained machines to sound like us, and now we’re sounding like them. It’s not just the bots that are fooling us. We’ve started fooling each other, and that’s a much deeper problem than spam filters or moderation can fix.