Security
April 10, 2026

The Myth(os) of Solved Security

Author
Sabina Smith

There's a growing sentiment that application security is a shrinking category. The recent release of Anthropic's Mythos Preview has only added fuel to that fire. The argument goes: AI models can now find vulnerabilities and AI models can write code, so eventually AI finds and fixes everything, and the market for dedicated security tooling collapses. If models can find decades-old vulnerabilities on their own, why would anyone need a standalone security product?

I think the opposite is true. The discovery and triage of vulnerabilities are fundamentally changing, but Mythos has made it clear that the opportunity to proactively secure your environment is larger than ever. The hard part of security was never finding vulnerabilities; it's deciding what to do about them. With models able to surface more issues than ever, and bad actors able to exploit vulnerabilities faster than ever, the need for more intelligent, purpose-built systems to support security teams is only growing.

Did Anthropic Solve Cybersecurity?

If the good guys have access to Mythos before the bad guys, have we solved cybersecurity? Can you point agents at all the software in the world and have it fix every vulnerability? (And do the model companies win again?)

I’d argue no. Even if we assume Mythos lives up to its hype, the same problems that have plagued vulnerability management for years don't disappear. You still get false positives, findings that are technically valid but practically unexploitable, and, crucially, vulnerabilities you already know about and have consciously accepted. According to industry research, 91% of companies have knowingly released applications with unresolved vulnerabilities, and nearly half knowingly push vulnerable code on a regular basis.

The logical question that might follow is: if an LLM like Mythos can also write patches, why not just apply them all? But cybersecurity is not a black-and-white world, and there is far more nuance in how these problems get addressed. The model might surface every issue, but deciding what to fix, not just how, is its own beast.

Security’s Gray Area

Addressing vulnerabilities is a conversation of tradeoffs, with a close eye to your business priorities and the resources available to you. Security practitioners deal with these tradeoffs constantly, and that paradigm doesn’t go away because a model can generate a patch.

The nature of those tradeoffs has shifted, though. It used to be a question of capacity: which of these vulnerabilities do we have time to fix? Models could soon make this a largely solved problem, if they haven't already. And once the capacity question is answered, the harder question becomes whether you should fix something, and what happens when you do.

Yes, you should fix the low-hanging fruit, and models make that faster and cheaper than ever. But there is a significant class of security issues where you actually cannot apply a fix, or where applying the fix creates a different problem. A vulnerability might exist because of an architectural decision you made prioritizing product usability, or your authentication flow could have a tradeoff baked into it that you understand and accept. Patching something might also cause downtime during a window where downtime costs more than the risk. 
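To make those tradeoffs concrete, here is a minimal sketch of what a triage policy weighing them might look like. Everything here is hypothetical: the field names, thresholds, and dispositions are invented for illustration, not drawn from any real tool or standard.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A hypothetical vulnerability finding with triage-relevant context."""
    severity: float              # e.g. a CVSS-style base score, 0.0-10.0
    exploitable: bool            # practically reachable in this deployment?
    accepted_risk: bool          # already reviewed and consciously accepted?
    patch_downtime_cost: float   # estimated cost of applying the fix now
    breach_cost_estimate: float  # estimated cost if exploited before the next window

def triage(f: Finding) -> str:
    """Return a disposition. Not a real policy, just the shape of one."""
    if f.accepted_risk:
        return "accepted"   # a human already made this call
    if not f.exploitable:
        return "backlog"    # technically valid, practically unexploitable
    if f.patch_downtime_cost > f.breach_cost_estimate:
        return "defer"      # fixing now costs more than the risk it removes
    return "fix"            # the low-hanging fruit models can clear quickly

# Every branch above encodes a business judgment, not a model output.
print(triage(Finding(9.8, True, False, 1_000, 500_000)))  # prints "fix"
```

The point of the sketch is that a model can populate every field except the judgments themselves: which risks have been accepted, and what downtime is worth, are inputs only the organization can supply.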

These aren't edge cases. 95% of open-source vulnerabilities come from transitive dependencies, which means patching one thing can cascade in ways that are hard to predict. Accepting risk behind compensating controls is also a formally recognized practice in NIST 800-53, PCI-DSS, and ISO 27001: a vulnerability can sit behind enough mitigations that the residual risk falls within your tolerance. As of right now, no model has earned the right to decide when a vulnerability is or isn't OK, nor do I think the companies building them want that responsibility.
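The compensating-controls idea can be sketched numerically. A common informal model treats residual risk as inherent risk discounted by each control in front of the vulnerability; the effectiveness figures below are invented for illustration, not taken from any standard.

```python
def residual_risk(inherent: float, control_effectiveness: list[float]) -> float:
    """Discount an inherent risk score by each compensating control.

    Informal model: residual = inherent x product of (1 - effectiveness).
    All inputs are judgment calls, which is exactly the point.
    """
    r = inherent
    for e in control_effectiveness:
        r *= (1.0 - e)
    return r

# A high-severity flaw (9.0) behind a WAF (~70% effective, assumed) and
# network segmentation (~60% effective, assumed) in this made-up model:
risk = residual_risk(9.0, [0.7, 0.6])
print(round(risk, 2))  # prints 1.08, which may fall within a team's tolerance
```

Nothing about the arithmetic is sophisticated; the hard part is that the effectiveness estimates and the tolerance threshold are organizational judgments, which is why the standards treat risk acceptance as a documented human decision rather than an automated one.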

Anthropic does not want to be the reason you patched a vulnerability and introduced a different zero-day, or caused an outage, or broke a product your largest customer depends on. It's not in their interest to try to cannibalize all of security decision-making, and the liability math alone makes that clear.

The decisioning piece also isn't something you can hand to an agent. Agents can add context to the decision: they can tell you what the vulnerability touches, how it was found, and what the blast radius looks like. But the determination of which risks an organization is willing to accept will live with humans for a long time to come.

What Comes Next

The volume of these kinds of decisions is about to grow by orders of magnitude. Mythos reportedly surfaced thousands of high-severity zero-days in weeks. As models like it become more widely available, the number of findings that require a judgment call will explode, and teams won't be able to run that process through a general-purpose model or a chat interface. If even a fraction of those findings are ones an organization can't or won't fix, the logical conclusion is that some portion of any organization's attack surface should be assumed exploitable. The question then becomes how quickly a team can detect and respond to bad actors exploiting those vulnerabilities. The threat landscape is demanding an evolution of exposure management, and it is crucial that the next era of proactive security companies rises to the occasion.

We think there is a significant opportunity for an AI-native player to redefine what proactive security looks like. In our mind, this is a net-new category, with some mixture of vulnerability management, threat hunting, pen-testing, and offensive security. The next great security company will be one that can fix what should be fixed, surface exposure proactively, understand where there are gaps in security posture, and build the detections to know when those gaps are being taken advantage of. This loop will need to be airtight, as the size and speed of attacks increase with every new model release.

Mythos is locked behind Project Glasswing today, but these capabilities won't stay gated forever. We're willing to bet there are plenty of bad actors looking to get their hands on the next frontier of intelligence.

If you’re keen on stopping them, get in touch – we are actively investing in this space.
