(And We Feel Fine)
Anthropic's Claude Mythos made headlines for good reason. In just a few weeks of testing, the model identified thousands of zero-day vulnerabilities across every major operating system and web browser, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in FFmpeg that automated tools had exercised five million times without catching. Anthropic considers the model too dangerous to release publicly.
To be clear, this is not the first time AI has demonstrated real capability in vulnerability discovery. In 2024, Google's OSS-Fuzz found 26 vulnerabilities in open-source projects using AI-generated fuzz targets, including a 20-year-old flaw in OpenSSL that human-written tests had missed entirely. Google's Project Zero separately used an LLM-based tool called Big Sleep to find the first real-world vulnerability discovered autonomously by an AI agent. In 2025, a security researcher used OpenAI's o3 to find a zero-day in the Linux kernel. The direction of travel has been clear for a while. What Mythos represents is a step change in scale and sophistication: thousands of findings across every major platform, with the model chaining vulnerabilities autonomously and with minimal human steering.
The reaction was predictable: is open-source software now too risky to use? Should organizations shift to closed-source software instead?
Here is our answer: that is the wrong question. The real problem is that most organizations do not have a reliable, current inventory of the OSS they already use. Without that foundation, no strategic portfolio decision matters, because you cannot protect what you cannot see.
Attackers Were Already Ahead
Claude Mythos is not the first warning sign. Between late February and late March 2026, threat group TeamPCP conducted a cascading supply chain attack that compromised Trivy, Checkmarx KICS, and LiteLLM, a library with 95 million monthly downloads, by exploiting incomplete credential rotation at a single vendor. The campaign spread across five package ecosystems in under 30 days, harvesting cloud credentials, API keys, and Kubernetes tokens from CI/CD pipelines across hundreds of organizations.
No AI-assisted vulnerability discovery was required. TeamPCP deployed custom malware payloads tailored to each OSS ecosystem, and they went undetected by maintainers until the damage was done. The campaign was executed with enough precision that downstream users were compromised even when they had no idea the affected libraries existed somewhere in their software supply chain. That last part is the critical detail: you can be breached by software you did not know you were running.
The defenders' own tools became the attack vector. That is what poor inventory hygiene costs you in practice.
Finding Vulnerabilities Just Got a Lot Cheaper
Finding novel vulnerabilities has never been easy. There is a reason zero-day markets command millions of dollars per exploit and nation-state teams dedicate years to hunting them. That 27-year-old OpenBSD bug was not hiding in plain sight. It took an AI model running thousands of iterations to surface it.
What Claude Mythos changes is the economics. The skill, time, and resources required to find and chain novel vulnerabilities just dropped dramatically. More supply in the zero-day market means lower prices. Capabilities that once belonged to well-funded nation-states and elite offensive teams are on a path to becoming broadly accessible. Vulnerabilities that once cost millions to acquire could soon be cheap enough for mid-tier criminal groups to buy or produce.
That is the real shift. And it lands hardest on organizations that cannot quickly answer a basic question: what software are we actually running? Without that inventory, a cheaper and faster exploit market leaves you with no way to know whether you are exposed until it is too late.
AI Can Defend OSS Too. But Who Pays for That?
Anthropic's response to Mythos is instructive. Rather than sit on the capability, they launched Project Glasswing, a coalition including AWS, Apple, Google, and Microsoft, to put the same vulnerability-finding power in the hands of defenders first.
That works for large, well-resourced organizations. It does not solve the structural problem underneath OSS security: most open-source libraries are maintained by individuals or small volunteer teams who are not paid for that work. The OSS ecosystem powers a significant share of global software infrastructure. The people keeping it secure often do so in their spare time.
If AI dramatically increases the volume and speed of vulnerability discovery in OSS, the disclosure pipeline leads back to those same under-resourced maintainers. Even if defenders find the vulnerabilities first, patches still need to be written, reviewed, and released. Organizations that do not maintain a current inventory of their OSS dependencies will not know they need to act, let alone how fast.
Anthropic has committed $4 million to open-source security organizations as part of Project Glasswing. That is a meaningful start, but it is a fraction of what would be required to close the gap. This is a conversation the industry needs to have, and quickly.
Switching to Closed-Source Software Won't Solve It
Closed-source software vendors also use OSS extensively. Without SBOMs from those vendors, you inherit the same risk with less visibility into it. You are trading one form of opacity for another, and taking on the considerable operational lift of transitioning a software portfolio in the process.
More importantly, the origin of your software is not the vulnerability. The gap in your inventory is. An organization running well-inventoried OSS with continuous monitoring is in a stronger position than one running closed-source software it cannot inspect. Visibility is the variable that matters, not the license model.
The Real Race Is Patch Speed
When exploitation timelines collapse from months to minutes, patching speed becomes a direct line of defense. CrowdStrike's CTO put it plainly: "What once took months now happens in minutes with AI."
But you cannot patch what you do not know you are running. Organizations with a current, accurate inventory of their software components can identify exposure and begin remediation immediately when a vulnerability surfaces. Organizations without that inventory spend the first days of a crisis just trying to answer "are we affected?" That is time they cannot afford to lose, and it will only become more costly as AI-assisted attacks accelerate.
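That first-hour query can be concrete and fast if SBOMs already exist. Below is a minimal sketch, assuming each application publishes a CycloneDX JSON SBOM into a shared directory; the `find_affected` function, directory layout, and package names are illustrative, not a real product feature.

```python
import json
from pathlib import Path

def find_affected(sbom_dir: str, package: str, bad_versions: set[str]) -> list[tuple[str, str]]:
    """Scan a directory of CycloneDX JSON SBOMs and report which applications
    include `package` at a known-vulnerable version."""
    hits = []
    for sbom_path in Path(sbom_dir).glob("*.json"):
        doc = json.loads(sbom_path.read_text())
        # CycloneDX stores dependencies in a top-level "components" array.
        for comp in doc.get("components", []):
            if comp.get("name") == package and comp.get("version") in bad_versions:
                hits.append((sbom_path.name, comp["version"]))
    return hits
```

With current SBOMs on hand, "are we affected?" becomes a lookup rather than a multi-day archaeology project; without them, this query is impossible to run at all.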
This is where the Mythos story and the TeamPCP story converge. Both demonstrate that the organizations most at risk are not necessarily those running the most OSS. They are the ones with the least visibility into what they run, and the slowest path from discovery to patch.
Common Mistakes to Avoid
- Treating this as a future problem. TeamPCP proved last month that OSS supply chains are under active, sophisticated attack right now.
- Assuming closed-source software is inherently safer. It often just means less visibility into the same underlying OSS components.
- Confusing scanning with management. Running a vulnerability scanner is not the same as having a remediation workflow that closes issues before they are exploited.
- Relying on vendor attestations alone. Attestations are a starting point, not a substitute for continuous monitoring of what vendors actually deliver.
- Treating inventory as a one-time project. Your software stack changes constantly. An inventory that is six months old is not an inventory. It is a historical artifact.
Quick Win: Build a Complete Picture of Your Highest-Risk Application Today
Pick the application with the most sensitive data or the largest blast radius. Map every component it depends on: the open-source libraries, third-party packages, and vendor-supplied software. Then ask: do you know which of those components have unpatched critical vulnerabilities? Which are end of life? Which come from vendors who cannot tell you what is inside what they shipped you?
You do not need to solve the entire inventory problem today. But getting a clear, current view of all the libraries and dependencies in your most critical application gives you a real foundation when the next zero-day drops, and the confidence to act in hours instead of weeks.
The Manifest Platform does not just give you a list of software components. It gives security teams a continuous operating environment for managing, monitoring, mitigating, and protecting the software that runs the organization.
That means full visibility across the software supply chain, the code you build, the software you buy, and the AI models you adopt. It means reachability analysis that separates genuine exploitable risk from the noise that buries teams in false positives. It means Manifest Supplier Risk continuously monitoring your vendor portfolio so you are not dependent on self-reported attestations when something goes wrong. And it means doing all of this in a reliable, easy-to-use platform that works at the speed your security team actually needs.
Mythos tells a powerful story about what AI can find. Manifest makes sure you can do something about it.
The Bottom Line
Mythos is a compelling name for a model that discovers what has been hidden. But a mythos, by definition, is a story. A narrative people tell to make sense of something overwhelming.
The real question is not whether the story is impressive. It is whether your organization can move from story to action.
That requires something less dramatic and far more useful: a manifest. A clear, current, and continuously maintained record of every piece of software your organization actually runs. Not a myth about what might be vulnerable. A manifest of what is.