OpenAI has banned multiple ChatGPT accounts allegedly linked to China, Russia, and North Korea after uncovering efforts to misuse the AI for surveillance and cybercrime.
Quick Summary – TLDR:
- OpenAI banned ChatGPT accounts involved in mass surveillance and phishing schemes
- Threat actors from China, Russia, and North Korea used ChatGPT for profiling, malware development, and phishing
- Some Chinese-linked users sought to develop social media monitoring and Uyghur tracking tools
- OpenAI’s findings highlight how AI is being weaponized despite built-in safeguards
What Happened?
OpenAI released a detailed report revealing it had disrupted and banned multiple ChatGPT accounts linked to global threat actors, including state-affiliated groups from China, North Korea, and Russia. These actors attempted to misuse ChatGPT for building spyware, phishing tools, and other malicious software components under the guise of research or technical queries.
OpenAI admits suspected Chinese operatives tried using ChatGPT to design tools for mass surveillance and online censorship. But this also exposes a bigger truth: Silicon Valley built systems so powerful that authoritarian states now see them as weapons.
— The Alliance for Secure AI (@secureainow) October 8, 2025
OpenAI Flags Global Abuse of ChatGPT
In its October 2025 threat report titled Disrupting Malicious Uses of AI, OpenAI described several instances of ChatGPT misuse:
- Chinese-linked accounts attempted to use the tool to develop surveillance systems. These included proposals for a “social media listening tool” and even a model to track “high-risk” Uyghur populations using transport bookings and police data.
- One user requested help creating documents for a tool that could monitor platforms like X, Facebook, TikTok, and YouTube for politically sensitive or extremist content.
- Another prompt involved developing a “High-Risk Uyghur-Related Inflow Warning Model,” although OpenAI emphasized that there’s no evidence the model was ever implemented.
These prompts were often framed as academic or technical inquiries, skirting OpenAI’s safety systems while still pushing toward authoritarian surveillance goals.
Phishing and Malware Development by Russian and North Korean Actors
The abuse wasn’t limited to China. Russian threat actors used ChatGPT to refine malware components such as remote access trojans (RATs) and credential stealers. They relied on multiple accounts to troubleshoot code and generate modular components for post-exploitation tasks.
- These actors often requested assistance for building tools like clipboard monitors, Telegram-based exfiltration bots, and password generators.
- Some accounts were linked to criminal Telegram channels, where hackers shared ChatGPT-aided projects.
North Korean hackers also joined the fray, attempting to use ChatGPT to write phishing emails, adapt Chrome extensions for macOS, and explore malware development. These activities mirrored tactics seen in a campaign exposed by Trellix targeting South Korean diplomats.
Influence Operations and Social Media Manipulation
OpenAI found that its models were also being used in covert influence campaigns:
- A Chinese campaign called Nine Line used ChatGPT to generate social media content criticizing political figures in the Philippines and activists in Hong Kong.
- Accounts sought guidance on how to grow social media influence, including advice on launching TikTok challenges using popular hashtags like #MyImmigrantStory.
- Some actors experimented with removing stylistic elements like em dashes, possibly to disguise AI-generated text and avoid detection.
Beyond state-affiliated misuse, accounts from Cambodia, Myanmar, and Nigeria used ChatGPT for scams, translation, and content creation to lure victims into investment fraud.
OpenAI’s Safeguards: Still a Work in Progress
Despite existing safety filters, bad actors have adapted their prompts to fly under the radar. OpenAI stated that while the chatbot often refused to output malicious code directly, users could still coax it into producing useful building blocks by breaking requests into innocuous components.
While OpenAI took action by banning the accounts and increasing monitoring, the findings point to a larger concern: AI misuse is evolving, often in ways that are difficult to detect or prevent in real time.
SQ Magazine Takeaway
I’ve got to say, this report from OpenAI is a real eye-opener. It shows just how far state-backed hackers and shady online groups are willing to go to weaponize AI. Tools like ChatGPT are meant to empower people, not help governments surveil citizens or hackers steal data. What strikes me most is how cleverly these actors are sidestepping safety features, which means AI developers have to stay several steps ahead at all times. We should all care about this, because the line between helpful and harmful use of AI is getting thinner every day.