- Anthropic’s Claude Mythos Preview can find high-severity vulnerabilities in every major operating system and web browser — autonomously, without human guidance.
- This is the first AI model to surpass all but the most skilled human experts at both finding and exploiting software vulnerabilities at scale.
- Project Glasswing, backed by $100M in usage credits, is Anthropic’s controlled rollout to use Mythos Preview defensively before bad actors can weaponize similar capabilities.
- Mythos Preview is not being released publicly — and the reason why reveals something critical about where AI and national security are headed.
- The window between vulnerability discovery and active exploitation has collapsed from months to minutes — keep reading to understand what that means for your business.
Cybersecurity just hit a turning point that most businesses aren’t ready for.
Anthropic, the AI safety company behind the Claude family of models, has developed a new frontier model called Claude Mythos Preview — and what it can do to software systems is unlike anything the security industry has seen before. This isn’t another AI-assisted scanner or threat detection dashboard. Mythos Preview finds critical vulnerabilities, builds working exploits, and does it entirely on its own, across every major operating system and web browser.
For businesses trying to stay ahead of attackers, understanding this model matters — not just because of what Anthropic built, but because of what it signals about where AI-powered attacks and defenses are heading next. This is the kind of shift that redefines what “cybersecurity” means for organizations of every size.
Anthropic’s Mythos Preview Just Changed Cybersecurity Forever
For years, the cybersecurity arms race has been fought between human experts — elite red teams on one side, sophisticated threat actors on the other. Claude Mythos Preview changes that equation entirely. According to Anthropic, this model has reached a level of coding and vulnerability analysis capability that surpasses all but the most skilled human professionals.
What Mythos Preview Found in Every Major OS and Browser
The results from Anthropic’s internal evaluations are striking. Mythos Preview identified high-severity vulnerabilities across every major operating system and web browser tested. More importantly, it didn’t just flag potential issues — it developed functional exploits for many of them, almost entirely without human input or steering.
These weren’t edge cases or obscure legacy bugs. The vulnerabilities Mythos Preview uncovered represent the kind of critical exposures that, if found by a malicious actor, could compromise enterprise infrastructure, expose sensitive data, or give attackers persistent access to systems at scale. The fact that an AI model located and exploited them autonomously changes the risk calculus for every organization running modern software.
Why This Is Different From Every AI Security Tool Before It
Most AI security tools work reactively. They compare traffic patterns against known signatures, flag anomalies, or surface CVEs from public databases. Mythos Preview operates at a fundamentally different level — it reasons through codebases the way a senior penetration tester would, identifying novel vulnerabilities that don’t yet exist in any database.
Previous models could assist security researchers. Mythos Preview can replace entire stages of the manual penetration testing process, operating autonomously from initial discovery through exploit development. That capability gap is what makes this announcement a genuine inflection point, not just another incremental update in AI tooling.
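To make the contrast concrete, here is what the reactive approach amounts to: matching input against a list of known attack patterns. The signatures and function below are invented for illustration; real rule sets contain thousands of entries, but the structural limitation is the same: anything not already in the database, including every novel vulnerability, passes straight through.

```python
import re

# Toy signature database, invented for illustration.
# Real intrusion-detection rule sets contain thousands of entries.
KNOWN_SIGNATURES = {
    "sql_injection": re.compile(r"('|\")\s*(OR|AND)\s+\d+\s*=\s*\d+", re.IGNORECASE),
    "path_traversal": re.compile(r"\.\./(\.\./)+"),
}

def match_signatures(payload: str) -> list[str]:
    """Return the names of known signatures that match the payload.

    This is the reactive model: a vulnerability class that is not
    already in the database produces no match at all.
    """
    return [name for name, pattern in KNOWN_SIGNATURES.items()
            if pattern.search(payload)]
```

A reasoning-based system has no such database to consult; it has to understand what the code actually does, which is exactly the capability jump described above.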
What Claude Mythos Preview Actually Does
At its core, Mythos Preview is a general-purpose frontier model with exceptional coding capability — but its security applications are what set it apart. Anthropic’s evaluations show it can perform complex vulnerability research workflows end-to-end, from reading and understanding large codebases to crafting targeted exploits that demonstrate real-world impact.
How It Finds Vulnerabilities Other Models Missed
Where earlier AI models relied on pattern matching or required significant human guidance to navigate complex code, Mythos Preview reasons through software logic at a deeper level. It can analyze how different components of a system interact, identify assumptions developers made that attackers could break, and surface vulnerabilities that static analysis tools consistently miss.
This is particularly significant for open-source software and critical infrastructure, where codebases are large, maintained by distributed teams, and often under-resourced for security review. Mythos Preview can cover that ground at a speed and scale no human team can match.
Its Ability to Develop Working Exploits, Not Just Find Bugs
Finding a vulnerability is only half the problem. Determining whether it’s actually exploitable — and how — is where expert skill has traditionally been irreplaceable. Mythos Preview closes that gap. In Anthropic’s evaluations, the model developed working exploits for many of the vulnerabilities it found, demonstrating not just theoretical risk but confirmed, actionable attack paths.
Why “High-Severity” Matters More Than It Sounds
In security classifications, “high-severity” isn’t a label applied loosely. These are vulnerabilities that can lead to remote code execution, privilege escalation, or full system compromise — the class of bugs that make headlines and end careers. The fact that Mythos Preview consistently surfaced findings in this tier, across diverse and widely deployed software, signals that AI has crossed a threshold that security professionals have been watching for.
The implications for businesses are immediate: the bar for what attackers can do with AI-powered tooling has just risen sharply, and defensive strategies need to account for that new reality.
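For context on what that tier means in numbers: most of the industry scores vulnerabilities with CVSS, whose v3.1 specification maps base scores to fixed qualitative bands. Anthropic hasn’t published the scoring behind its evaluations, so this is the generic industry mapping rather than theirs, but it shows what “high-severity” concretely denotes:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating.

    Bands per the CVSS v3.1 specification:
    0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.
    """
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS base scores run from 0.0 to 10.0, got {score}")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"
```

Remote code execution and privilege escalation findings, the classes named above, typically land in the High or Critical bands.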
Project Glasswing: Anthropic’s Defensive Strategy
Recognizing the dual-use risk of a model this capable, Anthropic didn’t just build Mythos Preview — they built a framework around it. Project Glasswing is Anthropic’s initiative to deploy Mythos Preview’s capabilities exclusively for defensive purposes, partnering with organizations that build and maintain critical software to scan and secure systems before attackers can reach them.
Anthropic has committed up to $100 million in usage credits for Mythos Preview across Project Glasswing efforts, along with $4 million in direct donations to open-source security organizations. Access has been extended to over 40 organizations that build or maintain critical software infrastructure, with launch partners using the model directly in their security operations — already applying it to production codebases with measurable results.
Why Anthropic Formed Project Glasswing
Anthropic didn’t form Project Glasswing out of caution alone — they formed it because the capability they observed in Mythos Preview made the decision urgent. When a model can autonomously find and exploit high-severity vulnerabilities across major operating systems and browsers, releasing it without a structured defensive framework isn’t a risk calculus any responsible organization should accept. Project Glasswing is the answer to a straightforward question: if this technology exists, who should control it, and toward what end?
Who Has Already Joined the Project
Anthropic extended Mythos Preview access to over 40 organizations that build or maintain critical software infrastructure. Launch partners are actively using the model in their security operations today — scanning first-party and open-source codebases for vulnerabilities before adversaries find them. Several of these organizations have already reported that Mythos Preview is helping them identify and address weaknesses in production systems.
- Launch partners are using Mythos Preview directly in live security operations
- Over 40 organizations have received access to scan critical software infrastructure
- Anthropic is committing up to $100M in usage credits across all Project Glasswing efforts
- $4M in direct donations has been allocated to open-source security organizations
- Findings from all partner engagements will be shared across the industry to raise the collective security baseline
What makes this structure significant is the knowledge-sharing commitment. Anthropic isn’t just giving select organizations a capability advantage — they’re building a feedback loop where what partners learn gets distributed back to the broader security community. That’s a meaningful difference from how proprietary security tools typically operate.
For businesses not yet part of Project Glasswing, the practical implication is clear: the organizations securing the software you depend on are already using AI at this level. Your own security posture needs to account for that shift, both in terms of the tools you deploy and the expectations you set for vendors and software suppliers.
The Double-Edged Sword: Defense vs. Attack
Every significant advancement in offensive security capability creates a parallel risk: that the same capability falls into the wrong hands. Mythos Preview is not publicly available, but the model’s existence confirms something the security industry has been bracing for — AI has crossed the threshold where it can perform expert-level offensive security work autonomously. That reality doesn’t disappear because access is restricted.
The uncomfortable truth is that capabilities like Mythos Preview will eventually exist in forms that aren’t controlled by safety-focused labs. Nation-state threat actors and well-resourced criminal groups are actively investing in AI-powered offensive tooling right now. The question for every business is not whether AI-driven attacks are coming — it’s whether your defenses will be ready when they arrive.
How Nation States and Hackers Could Weaponize These Capabilities
In the wrong hands, a model that can find and exploit vulnerabilities autonomously is, functionally, an infinitely scalable penetration tester with no ethical constraints. Nation-state actors, particularly those the U.S. identifies as primary cyber adversaries, have the resources and intent to develop or acquire equivalent capabilities. The attack scenarios this enables aren’t theoretical: automated vulnerability discovery across critical infrastructure, rapid exploit development targeting financial systems, and large-scale compromise of government networks, all executed faster than any human defensive team can respond.
Alex Stamos, a prominent cybersecurity figure involved in Project Glasswing discussions, noted that withholding Mythos Preview from public release buys software developers and U.S. institutions critical time to shore up defenses. That framing alone tells you how seriously Anthropic is treating the offensive risk profile of what they’ve built.
Why Open-Source Infrastructure Is Most at Risk
Open-source software sits at the foundation of virtually every enterprise technology stack — web servers, cryptographic libraries, container orchestration, programming language runtimes. It’s also the most structurally vulnerable to AI-powered attack, because its code is publicly readable, its maintainers are often volunteers with limited security review bandwidth, and its vulnerabilities, once found, affect thousands of downstream organizations simultaneously.
Mythos Preview’s ability to autonomously analyze large, complex codebases is precisely what makes it so valuable for defending open-source infrastructure — and precisely what makes an equivalent offensive tool so dangerous. A single AI-powered scan of a widely used open-source library could surface exploitable vulnerabilities that have existed undetected for years, giving an attacker an immediate, high-value entry point into countless enterprise environments.
What Daniel Stenberg’s cURL Experience Tells Us About AI’s Rapid Progress
Daniel Stenberg, the creator and maintainer of cURL — one of the most widely used open-source data transfer libraries in existence — has publicly documented his experience receiving AI-generated vulnerability reports. His observations illustrate a trend that security teams need to internalize: AI models are already being used to scan open-source projects for bugs, and the quality and volume of those reports are increasing rapidly. What Mythos Preview represents is that progression reaching expert-human parity, and in some cases surpassing it.
Why Anthropic Is Not Releasing Mythos Preview Publicly
The decision to withhold Mythos Preview from public release is deliberate and strategically reasoned. Releasing a model capable of autonomous expert-level vulnerability discovery and exploit development into the open market would hand adversaries — including state-sponsored groups based in countries the U.S. considers primary rivals — immediate access to an offensive capability that critical infrastructure is not yet prepared to defend against. The controlled rollout through Project Glasswing gives defenders a head start, however narrow, to find and patch the vulnerabilities Mythos Preview would otherwise expose to anyone with API access.
What Governments Need to Do Right Now
The arrival of AI models with Mythos Preview’s capabilities reframes cybersecurity as a national security emergency, not just a technology management problem. Governments that treat AI-powered vulnerability exploitation as a future concern are already behind. The policy frameworks, procurement standards, and inter-agency coordination mechanisms needed to respond to this threat level take time to build — time that the current pace of AI development may not allow.
For businesses operating in regulated industries or supplying government contracts, the downstream effect of government action in this space will be significant. Expect stricter software supply chain requirements, mandatory vulnerability disclosure timelines, and expanded definitions of what constitutes critical infrastructure — all driven by the recognition that AI has permanently changed the offensive threat landscape.
Anthropic’s Ongoing Discussions With U.S. Officials
Anthropic has confirmed it is in active discussions with U.S. government officials regarding Claude Mythos Preview’s offensive and defensive cyber capabilities. These conversations are happening at the intersection of AI development and national security policy — an area where regulatory frameworks are still catching up to technical reality. Securing critical infrastructure is explicitly identified as a top national security priority in these discussions, and Mythos Preview’s capabilities are central to understanding both the threat vector and the defensive opportunity.
For enterprise security leaders, this signals that government guidance on AI use in cybersecurity is coming — and that organizations that proactively build AI-aware security programs now will be better positioned to meet compliance requirements when formal standards arrive. Getting ahead of that curve isn’t just good security practice; it’s increasingly a business continuity imperative.
The Pentagon’s “Supply Chain Risk” Label and Why It Matters
The U.S. Defense Department’s classification of AI-powered vulnerabilities as a “supply chain risk” isn’t bureaucratic language — it’s a threat designation that carries real procurement and compliance weight. When the Pentagon labels something a supply chain risk, it triggers a cascade of requirements across every contractor, vendor, and technology supplier connected to defense infrastructure. For businesses in that ecosystem, the emergence of models like Mythos Preview means the software you ship, source, or integrate is now subject to a higher standard of scrutiny than it was twelve months ago.
The practical implication for enterprise security teams is straightforward: if AI can now autonomously find high-severity vulnerabilities in major operating systems and browsers, the assumption that your software stack has been adequately reviewed by human teams is no longer defensible. Supply chain security now requires AI-level review to match AI-level threats. Organizations that haven’t begun integrating automated, AI-assisted vulnerability scanning into their development pipelines are operating with a widening blind spot.
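Integrating AI-assisted scanning into a development pipeline can start very small: a gate step that reads scanner findings and fails the build when anything at or above a chosen severity appears. The JSON findings format and severity taxonomy below are assumptions for this sketch, not the output of any particular scanner:

```python
import json

# Severity order used by the gate; an assumption for this sketch.
# Align it with whatever taxonomy your scanner actually emits.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings_json: str, threshold: str = "high") -> list[dict]:
    """Return findings at or above the blocking threshold.

    `findings_json` is a JSON array of {"id": ..., "severity": ...}
    objects -- a format invented for this sketch, piped in from
    whatever scanner step runs earlier in the pipeline.
    """
    floor = SEVERITY_RANK[threshold]
    return [f for f in json.loads(findings_json)
            if SEVERITY_RANK.get(str(f.get("severity", "")).lower(), 0) >= floor]
```

The surrounding CI job would exit non-zero whenever `gate()` returns a non-empty list, stopping the release before the finding ships.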
AI in Cybersecurity Is No Longer a Future Problem
The announcement of Claude Mythos Preview and the formation of Project Glasswing mark the moment AI-powered cybersecurity shifted from a topic in conference presentations to an operational reality with immediate consequences. The window between vulnerability discovery and active exploitation has collapsed — what once took months now happens in minutes. That compression affects every layer of your security program, from how quickly you need to patch known CVEs to how you structure your incident response playbooks.
For businesses, the most actionable takeaway from everything Mythos Preview represents is this: your security posture needs to be evaluated against AI-capable adversaries, not just the threat actors you faced two years ago. That means AI-assisted red teaming, continuous automated scanning of your code and dependencies, and security vendor relationships with partners who are actively integrating these capabilities into their defensive tooling. Waiting for the threat to materialize in your environment before making those investments is not a strategy — it’s exposure.
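Continuous dependency scanning, one of the investments above, can likewise start simply: check every pinned dependency against an advisory feed on each commit. The advisory map and package names below are invented placeholders; a real pipeline would populate them from a live source such as the OSV vulnerability database:

```python
# Hypothetical advisory map: package name -> versions with known
# vulnerabilities. Invented for this sketch; a real pipeline would
# populate it from an advisory feed such as the OSV database.
ADVISORIES = {
    "examplelib": {"1.0.0", "1.0.1"},
    "demoparser": {"2.3.0"},
}

def audit_dependencies(pinned: dict[str, str]) -> list[str]:
    """Return 'name==version' entries for pinned deps with known advisories."""
    return [f"{name}=={version}"
            for name, version in sorted(pinned.items())
            if version in ADVISORIES.get(name, set())]
```

Run on every commit, a check like this turns the collapsed discovery-to-exploitation window into a prompt to patch rather than a silent liability.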
Frequently Asked Questions
Quick Reference: Claude Mythos Preview & Project Glasswing
Model Name: Claude Mythos Preview
Developer: Anthropic
Capability Level: Surpasses all but the most skilled human security professionals at vulnerability discovery and exploit development
Key Finding: High-severity vulnerabilities found in every major OS and web browser
Public Release: No — intentionally withheld
Defensive Initiative: Project Glasswing
Funding Committed: Up to $100M in usage credits + $4M in donations to open-source security orgs
Organizations With Access: 40+ critical infrastructure and software organizations
What is Claude Mythos Preview?
Claude Mythos Preview is an unreleased general-purpose frontier AI model developed by Anthropic. It has demonstrated the ability to autonomously find and exploit high-severity vulnerabilities across major operating systems and web browsers — surpassing all but the most skilled human professionals in vulnerability discovery and exploit development. It is not publicly available and is currently deployed only through the controlled Project Glasswing initiative.
What is Project Glasswing?
Project Glasswing is Anthropic’s structured initiative to deploy Claude Mythos Preview exclusively for defensive cybersecurity purposes. Over 40 organizations that build or maintain critical software infrastructure have been granted access to use the model to scan and secure their codebases. Anthropic has committed up to $100 million in usage credits and $4 million in direct donations to open-source security organizations as part of this effort. Findings from partner engagements are shared across the industry to raise the collective security baseline.
Can Hackers Use Claude Mythos Preview to Launch Attacks?
Not through Anthropic’s platform — access to Mythos Preview is tightly controlled and limited to vetted Project Glasswing partners. However, the broader concern the model raises is that equivalent offensive capabilities are likely being developed by nation-state actors and well-resourced threat groups independently. Mythos Preview’s existence confirms that AI has crossed the threshold of expert-level offensive security capability, which means the threat landscape has changed regardless of who has access to this specific model.
Why Did Anthropic Choose Not to Release Mythos Preview Publicly?
Anthropic withheld Mythos Preview from public release specifically because of its offensive capability profile. A model that can autonomously discover and exploit high-severity vulnerabilities across widely deployed software would give malicious actors — including state-sponsored groups — an immediate, scalable attack capability that critical infrastructure is not yet equipped to defend against. The controlled rollout through Project Glasswing gives defenders a structured opportunity to find and remediate vulnerabilities before they can be weaponized. Anthropic is also in ongoing discussions with U.S. government officials about the model’s national security implications.
How Is Mythos Preview Different From Other AI Cybersecurity Tools?
Most AI security tools available today are reactive — they match traffic against known threat signatures, surface CVEs from public databases, or flag anomalies based on historical patterns. Mythos Preview operates proactively and autonomously, reasoning through codebases to identify novel vulnerabilities that don’t yet exist in any public database, then developing working exploits to confirm real-world impact. It doesn’t assist a human researcher — it replaces entire stages of the manual penetration testing process.
The distinction matters enormously for how businesses should think about their defensive tooling. An AI that can perform expert-level offensive security work autonomously sets a new minimum bar for what “adequate” vulnerability management looks like. Security programs built around human-review cadences and periodic penetration tests are now structurally mismatched against the speed and scale at which AI-powered threats can operate.