Anthropic is testing a new AI model called Claude Mythos that the company describes as a major leap beyond anything it has shipped before, and the only reason we know about it is that someone at Anthropic forgot to set a CMS toggle to private. A Fortune exclusive published Thursday evening revealed that draft blog posts, internal PDFs, and nearly 3,000 unpublished assets were sitting in a publicly searchable data store, accessible to anyone who knew where to look.
An Anthropic spokesperson confirmed the model's existence after Fortune's inquiry, calling it "a step change" and "the most capable we've built to date." The company attributed the leak to "human error in the CMS configuration," which is a polite way of saying the default-public setting on its content management system caught the company off guard.
The cyber risk problem
The leaked draft blog post makes for uncomfortable reading. In it, Anthropic warns that the model, also referred to internally under a new tier called "Capybara," is "currently far ahead of any other AI model in cyber capabilities." The document goes further, stating that the model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." That is Anthropic's own assessment of its own product, written for a blog post it hadn't planned to publish yet.
The company's proposed rollout strategy, per the draft, centers on giving cyber defenders a head start. Early access customers would get the model first so they could harden their codebases before wider availability. The draft also notes that Mythos is expensive to run and not ready for general release.
This follows a pattern. When OpenAI released GPT-5.3-Codex in early February, it became the first model OpenAI classified as "high capability" for cybersecurity under its Preparedness Framework. Anthropic's Opus 4.6, released the same week, surfaced previously unknown vulnerabilities in production codebases. Both companies have been racing to build models that find software flaws, then scrambling to explain why that's fine, actually. Anthropic's own language suggests Mythos goes considerably further than either.
What "Capybara" means for the lineup
The leaked blog post introduces Capybara as a new model tier sitting above Opus in Anthropic's hierarchy. Right now, the company sells three sizes: Haiku (small, fast, cheap), Sonnet (mid-range), and Opus (largest, most capable). Capybara would be a fourth tier, bigger and more expensive than Opus. The draft claims it scores dramatically higher than Claude Opus 4.6 on coding, academic reasoning, and cybersecurity benchmarks.
Mythos and Capybara appear to refer to the same underlying model. How Anthropic plans to brand and price it remains unclear, since, again, none of this was supposed to be public yet.
An AI safety company with a CMS problem
The irony is hard to ignore. Anthropic positions itself as the safety-focused AI lab, the company that thinks carefully about risks before releasing powerful systems. Its data leak wasn't a sophisticated breach. It was a content management system where uploaded assets default to public unless someone remembers to flip a switch. Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge whom Fortune asked to review the material, found roughly 3,000 assets that hadn't been published to Anthropic's public-facing sites but were sitting in the open anyway.
The cache also contained details of an invite-only CEO retreat in the English countryside, described as an "intimate gathering" at an 18th-century manor. Attendees, unnamed in the documents, would hear from policymakers and get previews of unreleased Claude features. Anthropic called it part of "an ongoing series of events" and moved on.
"These materials were early drafts of content considered for publication and did not involve our core infrastructure, AI systems, customer data, or security architecture," the company said. That's technically reassuring, though a company warning the world about AI-enabled cyberattacks probably doesn't love the optics of its own CMS configuration being the vulnerability.
Anthropic stressed that AI tools, including Claude Code and Cowork, were not responsible for the misconfiguration. But as Fortune noted, AI coding tools now make it trivially easy to crawl and correlate exactly this kind of accidentally public data. The company has since restricted access to the data store.
No timeline has been announced for Mythos or Capybara's public release. The draft blog post had a publication date embedded in its structured data, but Anthropic has given no indication it plans to stick to whatever schedule existed before Thursday evening.