AI Security

Curl Lead Says Anthropic's Mythos Found One Real Vulnerability

Daniel Stenberg calls the hype around Anthropic's restricted security AI mostly marketing.

Liza Chan, AI & Emerging Tech Correspondent
May 12, 2026 · 3 min read
[Image: Developer's monitor in a dim office showing source code mid-scroll with abstract neural network reflections]

Daniel Stenberg, curl's lead developer, says Anthropic's restricted Mythos model found exactly one real security flaw in curl: a low-severity issue that will receive a CVE and ship with the 8.21.0 release in late June. He posted the verdict on his personal blog Monday, and it is not flattering to the model Anthropic spent April calling too dangerous to release publicly.

What the scan actually returned

Mythos analyzed 178,000 lines of code across curl's src/ and lib/ directories. The report came back with five "Confirmed security vulnerabilities," in the model's own confident phrasing. After curl's security team spent a few hours on the list, three were ruled false positives (behavior already documented in API docs), and a fourth got downgraded to "just a bug." That leaves one. Severity: low.

About twenty other items were flagged as bugs rather than vulnerabilities, and curl is working through those. Stenberg notes the false positive rate was lower than typical, suggesting the model had "a rather high threshold for certainty." Charitable read: it was careful. Less charitable: it was deliberately conservative.

About those earlier scanners

The comparison matters more than the absolute number. Before Mythos, curl had already been pushed through AISLE, Zeropath, and OpenAI's Codex Security. Those tools together drove between two hundred and three hundred merged bugfixes over the past eight to ten months, and probably a dozen or more CVEs. Some of that is selection bias. Easier bugs get caught first, and what is left is harder. Stenberg concedes the point. He still lands in the same place.

"I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos."

That sits awkwardly next to Project Glasswing, the April 7 initiative under which Anthropic walled off Mythos from public release on the grounds that its hacking capabilities posed national security risk. The company's red team claimed the model had autonomously discovered thousands of high-severity zero-days across every major operating system and browser, including a 27-year-old bug in OpenBSD and a 16-year-old one in FFmpeg. None of those claims have been independently audited. Access is consortium-only.

The representativeness problem

One curl scan does not prove much about Mythos, and Stenberg says so himself. The Mythos report opens with a kind of pre-emptive shrug:

"curl is one of the most fuzzed and audited C codebases in existence (OSS-Fuzz, Coverity, CodeQL, multiple paid audits). Finding anything in the hot paths (HTTP/1, TLS, URL parsing core) is unlikely."

Twenty billion installs. 188 prior CVEs. Source that has been rewritten roughly four times per line on average. Picking curl to test a security model is like picking a chess grandmaster to embarrass a chess engine. Mythos might be substantially better on the next codebase. There is no public next codebase yet, and almost everything the world knows about Mythos's capabilities still comes from Anthropic.

Stenberg is not anti-AI

Worth keeping straight: he thinks modern AI analyzers are meaningfully better than the static tools that came before, and that projects not running them are leaving openings for attackers. curl now uses Copilot and Augment Code on pull requests, and external researchers wielding AI have been flooding the project with what Stenberg has called high-quality chaos. He accepted the Mythos scan through Linux Foundation's Alpha Omega program, which is hardly the move of someone trying to dismiss the technology. His complaint is specifically with the framing.

curl 8.21.0 ships in late June with the one CVE attached. Stenberg says he wants more Mythos passes against curl over time, and other models too. The "dangerously good at finding security flaws" reveal Anthropic teased in April has not, at least for this codebase, arrived.

Tags: Anthropic, Mythos, curl, Daniel Stenberg, AI security, Project Glasswing, vulnerability research, open source, cybersecurity, code analysis
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.


