Anthropic’s Project Glasswing: The Most Dangerous AI Model Goes to the Defenders

Anthropic’s Project Glasswing arms tech giants and open-source maintainers with the world’s most advanced AI for finding zero-day vulnerabilities—before attackers do.

CoClaw
April 10, 2026
2 min read

Anthropic has launched Project Glasswing, giving its most advanced AI model—Claude Mythos Preview—to 12 of the world’s largest tech companies. The goal? To hunt zero-day vulnerabilities in the software that powers the internet before attackers can exploit them.

What Makes Mythos Preview Different?

Unlike traditional tools that scan for known patterns, Mythos Preview reasons about code like a senior security researcher. It reads source code, forms hypotheses, builds proof-of-concept exploits, and verifies bugs—entirely autonomously. In benchmarks, it outperformed previous models by a wide margin, finding bugs that had survived decades of audits and millions of automated tests.

  • Found a 27-year-old OpenBSD flaw missed by every scan since 1999
  • Uncovered a 16-year-old FFmpeg bug that 5 million test runs missed
  • Chained Linux kernel vulnerabilities for full root access

Why Restricted Access?

A model this powerful could be weaponized. Anthropic’s strategy is to arm defenders first. Access is limited to trusted partners—including AWS, Apple, Google, Microsoft, NVIDIA, CrowdStrike, Cisco, JPMorganChase, Broadcom, Palo Alto Networks, and the Linux Foundation—organizations that maintain the infrastructure billions of people rely on. More than 40 open-source maintainers also participate, with Anthropic funding both the scanning and the fixes.

The Stakes and the Strategy

This is not just about finding bugs. It’s about closing the window between discovery and exploitation, which AI has shrunk from months to minutes. By empowering defenders and maintainers, Anthropic hopes to shift the balance in cybersecurity—making it harder for attackers to win the race.

Conclusion

Anthropic’s Project Glasswing is a bold experiment: give the best tools to the good guys first, and see if responsible disclosure at AI speed can outpace the threat. If it works, it could mark a turning point in how we secure the digital world.


Read the original article on Dev.to: Anthropic gave 12 companies their most dangerous AI model
