An accidental data leak has revealed Claude Mythos, Anthropic’s most powerful AI model yet, raising serious cybersecurity concerns.
Quick Summary – TLDR:
- Anthropic leak exposed Claude Mythos, its most advanced AI model in development.
- It introduces a new AI tier, Capybara, positioned above Opus.
- Model shows major gains in coding, reasoning, and cybersecurity.
- Company is restricting release due to potential misuse risks.
What Happened?
Anthropic confirmed that a data leak exposed internal drafts and assets, including details about its upcoming AI model Claude Mythos. The leak, caused by a CMS configuration error, made nearly 3,000 files publicly accessible before the company secured them.
The exposed documents reveal that Mythos is already trained and is currently being tested with a small group of early access users.
Cybersecurity names are trading lower after reports of a leak tied to Anthropic’s new Claude Mythos model. The model is already in testing and is believed by Anthropic to pose unprecedented cybersecurity risks.
— Polymarket Money (@PolymarketMoney) March 27, 2026
A Leak That Revealed More Than Intended
The leak originated from Anthropic’s internal content system, where files were mistakenly left unmarked as private and stored in a public data repository. These included draft blog posts, PDFs, images, and internal company information.
Among the most significant discoveries was an unpublished blog post detailing Claude Mythos, along with plans for an invite-only CEO event in Europe aimed at promoting the company’s AI capabilities to enterprise clients.
Anthropic later described the incident as human error and clarified that the exposed files were early drafts not meant for public release.
Claude Mythos Introduces a New AI Tier
One of the biggest revelations is that Mythos is not just another upgrade. It represents a new category of AI models called Capybara, positioned above Anthropic’s current tiers:
- Opus: Most powerful existing model.
- Sonnet: Balanced performance and cost.
- Haiku: Lightweight and fast.
The new Capybara tier is described as larger, more intelligent, and significantly more capable than Opus. According to Anthropic, Mythos delivers a step change in performance, especially in:
- Software coding tasks.
- Advanced reasoning and problem solving.
- Cybersecurity operations.
Internal drafts suggest that Mythos outperforms Claude Opus 4.6 across multiple benchmarks, making it the company’s most advanced system so far.
Why Is Anthropic Holding Back the Release?
Despite its capabilities, Anthropic is not rushing to release Mythos widely. The company has flagged serious concerns about how the model could be misused, particularly in cybersecurity.
According to leaked content, Mythos is “currently far ahead of any other AI model in cyber capabilities” and can identify and exploit software vulnerabilities faster than human defenders.
This raises the risk of:
- Automated cyberattacks at scale.
- Faster discovery of system weaknesses.
- Greater pressure on security teams worldwide.
To manage these risks, Anthropic is taking a cautious approach by offering early access only to trusted organisations, especially those working in cybersecurity. The goal is to help defenders strengthen systems before wider exposure.
Real-World Misuse Already Detected
Anthropic has already seen attempts to misuse its AI tools. The company reported that state-linked hacking groups, including some associated with China, tried to exploit its systems in coordinated operations.
In one instance, attackers targeted around 30 organisations, including:
- Technology companies
- Financial institutions
- Government entities
Anthropic said it quickly identified the activity, blocked the accounts, and notified affected organisations, highlighting the growing challenge of controlling powerful AI tools.
Cost and Scale Challenges
Another factor limiting Mythos’ release is its high computational cost. The model is described as large and expensive to run, making it difficult to deploy widely at this stage.
Anthropic is reportedly working on improving efficiency before expanding access through its API. Even without the security concerns, a full public rollout would likely have taken time due to these infrastructure and cost barriers.
SQ Magazine Takeaway
I think this is one of those moments where the AI race starts to feel a bit real and a bit risky. Claude Mythos sounds incredibly powerful, but the fact that even its creators are slowing down its release says a lot. When a company openly admits its own model could be dangerous in cybersecurity, it is not something to ignore.
At the same time, this shows how fast AI is evolving. We are moving beyond chatbots into systems that can actively challenge global digital security. That is exciting, but also something that needs serious oversight.