Claude Code Uncovers 23-Year-Old Linux Vulnerability: A New Era for AI Security Research

Anthropic's Claude Code AI found a 23-year-old Linux kernel vulnerability, signaling a new era in AI-driven security research. Here's how it happened and why it matters.

CoClaw
April 4, 2026

Nicholas Carlini, a research scientist at Anthropic, recently demonstrated how Claude Code found multiple remotely exploitable vulnerabilities in the Linux kernel, including one that had gone undiscovered for 23 years. The result highlights both the power and the risks of advanced AI models in cybersecurity.

How Claude Code Found the Bug

Carlini used a simple script to point Claude Code at the Linux kernel source, asking it to find vulnerabilities file by file. The AI was prompted as if it were playing a capture-the-flag (CTF) competition, seeking the most serious bug in each file. Remarkably, Claude Code required minimal oversight and was able to surface critical vulnerabilities that had eluded human experts for decades.
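The exact script and prompt Carlini used are not public, so the following is only a minimal sketch of the approach described above: walk the kernel tree and build one CTF-style prompt per C source file, ready to hand to the model. The `CTF_PROMPT` wording and the `build_prompts` helper are illustrative assumptions, not Carlini's actual code.

```python
# Hypothetical sketch of a file-by-file, CTF-style prompting loop.
# The prompt text and function names are assumptions for illustration.
from pathlib import Path

CTF_PROMPT = (
    "You are playing a capture-the-flag competition. "
    "Find the single most serious security vulnerability in this file:\n\n{code}"
)

def build_prompts(kernel_root):
    """Yield (path, prompt) pairs, one per C source file under kernel_root."""
    for path in sorted(Path(kernel_root).rglob("*.c")):
        yield path, CTF_PROMPT.format(code=path.read_text(errors="replace"))
```

Each generated prompt would then be sent to the model (for example via Claude Code's non-interactive mode) and the responses triaged by hand, which is exactly where the validation bottleneck described later in this post appears.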

The NFS Vulnerability

One of the most significant bugs found was in the Linux NFS (Network File System) driver. The vulnerability allowed an attacker to read sensitive kernel memory over the network by exploiting a buffer overflow in the lock denial response. The bug, introduced in 2003, involved writing 1056 bytes into a buffer only 112 bytes long—an error that could be triggered by carefully crafted NFS client interactions.
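To see why a size mismatch like this leaks memory, here is a minimal Python simulation of the pattern, not the actual kernel C code: kernel memory is modeled as one flat byte array, with a 112-byte reply buffer sitting directly before unrelated sensitive data. The function name and memory layout are illustrative assumptions; only the 112/1056 sizes come from the article.

```python
# Illustrative simulation of the overflow pattern (not the real NFS code).
BUF_LEN = 112    # declared size of the reply buffer (per the article)
COPY_LEN = 1056  # bytes the flawed encoder actually handled

def build_lock_denial_reply(memory, buf_start):
    """Flawed encoder: serializes COPY_LEN bytes starting at the reply
    buffer, so everything past byte 112 is adjacent kernel memory that
    ends up in the reply sent back to the client."""
    return bytes(memory[buf_start : buf_start + COPY_LEN])

# The reply buffer sits next to sensitive data, as on a real stack or heap.
memory = bytearray(BUF_LEN) + b"SECRET-KEY-MATERIAL" * 50
reply = build_lock_denial_reply(memory, 0)
leaked = reply[BUF_LEN:]  # 944 bytes of memory beyond the buffer's end
```

In the simulation, `leaked` contains the neighboring "secret" bytes, which is the essence of the disclosure: a client that can trigger the oversized copy receives whatever kernel data happens to live next to the buffer.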

Why This Matters

  • AI Outpaces Humans: Claude Code found bugs that even seasoned security researchers had missed, showing that LLMs can now outperform humans in some aspects of vulnerability discovery.
  • Volume of Findings: Carlini reported that he found hundreds of potential bugs, more than he could manually validate or responsibly report to maintainers.
  • Model Progress: The effectiveness of bug-finding has increased dramatically with each new AI model release. Claude Opus 4.6 found far more vulnerabilities than earlier versions.

The Coming Wave

As AI models continue to improve, we can expect a surge in newly discovered vulnerabilities—both by researchers and potentially by attackers. This raises urgent questions for open source maintainers, security teams, and the broader tech community:

  • How will we triage and fix the flood of AI-discovered bugs?
  • What new tools and processes are needed to keep up?
  • How do we balance responsible disclosure with the sheer volume of findings?

AI is changing the landscape of security research. The next wave is coming—are we ready?
