Anthropic’s Mythos test sent cybersecurity stocks sliding, but not every company should fear it

Disruption snapshot
Investors now face a new calculus. AI isn't just replacing security labor; it may also increase attack speed, making production security systems both more critical and more complex to operate.
Winners: vendors securing live environments and response layers. Losers: tools focused on code review, patch suggestions, and low-level analyst support tasks.
Watch renewal trends and budget shifts. The key signal is whether AI-native tools actually displace existing security spend or push companies to consolidate into larger platforms.
Cybersecurity stocks dropped on March 27 after reports that Anthropic was testing a new, unreleased AI model called Mythos with unusually strong cyber capabilities. That headline alone got investors’ attention, but this wasn’t just another wave of broad AI fear hitting the market.
Back in February, the worry was fairly straightforward. Investors thought Anthropic's new coding-focused security tool could automate parts of code security work and put pressure on workflow software companies. Now the concern has shifted, and it is more serious.
By late March, the focus turned to what a much stronger AI model could actually do. If Mythos can boost defensive analysis, it might also speed up offensive cyber activity. That changes the equation.
But not every cybersecurity stock is exposed in the same way. So this shift could create clear winners and losers across the sector.
The market shot first, then asked questions later
Palo Alto Networks fell 6.1%, CrowdStrike 5.2%, Zscaler 5.8%, and Fortinet 3.4% on March 27, while the iShares Cybersecurity & Tech ETF dropped 4.2%.
That was a different setup from February. February's selloff was tied to Anthropic’s launch of a coding-focused security tool, which was built to detect high-severity vulnerabilities in open-source repositories and propose patches. The pressure then was easy to understand: if AI can do more code review, vulnerability triage, and patch suggestion, some security workflow spend becomes easier to question.
That substitution case had clear limits. Claude Code Security did not detect intrusions, contain attacks, or protect live production systems. The February fear centered on pre-production and assistive work.
The March 27 trigger was broader. Materials describing an in-development model called Claude Mythos surfaced from Anthropic, and the company then confirmed it was building a more capable general-purpose model with advances in reasoning, coding, and cybersecurity. Anthropic called it “a step change,” said it was “the most capable we’ve built to date,” and said it was already being tested with a small group of early-access customers.
The question is no longer whether AI can absorb some security labor. It is whether frontier models can compress some workflow categories while also making real-world attack activity faster, cheaper, and easier to scale.
The exposure is concentrated, not universal
That distinction is what the market has to sort correctly. Cybersecurity vendors do not sell one product against one threat. Some categories are closest to labor substitution. Others become more valuable as attack pressure rises.
The clearest exposure sits in code scanning, vulnerability discovery, patch recommendation, and the parts of SOC-assist tooling that mainly accelerate junior analysis. Those categories are closest to the work Anthropic has already shown models can do well. In February, the company said Claude Opus 4.6 found more than 500 high-severity vulnerabilities in open-source software, and its unveiling of Claude Sonnet 4.6 reinforced how quickly model capabilities are advancing. In March, Anthropic said Claude discovered 22 Firefox vulnerabilities in two weeks, 14 of them high severity. That is evidence of pressure on labor-heavy assistive tools and point products tied to pre-production security workflows. The dynamic closely mirrors one of the core signals from 5 signs an industry is ripe for disruption: that high labor costs and inefficiencies make an industry vulnerable to automation, which helps explain why workflow-heavy cybersecurity segments are now under the most pressure.
That does not map cleanly onto the whole public cyber group. Palo Alto Networks, CrowdStrike, Zscaler, and Fortinet are not primarily paid for patch suggestion or vulnerability writeups. They are paid to secure production environments: endpoints, users, networks, cloud access, and incident response. If offensive capability improves faster than defensive staffing does, those control layers do not become less necessary. They become harder to run well, and more important to automate.
Anthropic’s January cyber-range update points in that direction. The company said current Claude models could carry out multistage attacks on networks with dozens of hosts using standard open-source tools rather than custom cyber toolkits. In one example, Sonnet 4.5 exfiltrated all simulated personal information in an Equifax-style environment using only a Bash shell and common Kali Linux tools, doing so autonomously in two of five trials. That is not proof that current platforms are losing relevance. It suggests the opposite: if capable models lower the barrier to real attack workflows, enterprises should need faster remediation, tighter identity controls, better detection, and more automated response across live environments.
That is why this should be read as a sorting event, not a blanket verdict on cybersecurity economics. Workflow-heavy security software faces the most direct substitution risk. Core defensive platforms face a different test: whether rising attack velocity expands demand for integrated controls in production.
What would prove either case
The important next signal is customer behavior.
The bearish version gets proved if AI-native tools start taking real budget from products tied to vulnerability discovery, code review, patch recommendation, and junior SOC assistance. That would show automation is displacing paid workflow, not just improving it.
The bullish version gets proved if stronger offensive AI pushes customers toward platform consolidation and more spending on endpoint, identity, cloud, runtime protection, and response. That would show frontier models are increasing the value of the control plane rather than replacing it.
So far, the market has evidence for the first mechanism in narrow workflow categories and a plausible case for the second in production defense. It does not yet have public proof that Mythos has displaced spend from major vendors, changed renewal behavior, or outperformed commercial security platforms in live customer environments.
Mythos has not shown that cybersecurity vendors broadly break under frontier AI. It has shown that investors need to separate assistive security workflows from production security controls, because the first group is easier to compress and the second may get more important as capable offensive models spread.