People march across San Francisco from Anthropic's headquarters to OpenAI to xAI, calling for a pause in AI development, in San Francisco on March 21, 2026.


The next wave of AI-powered cybersecurity attacks will be like nothing we've seen before.

That's the message AI company Anthropic sent in a leaked blog post last week, in which it warned that its upcoming AI model, known as Mythos, and others like it can exploit vulnerabilities at an unprecedented pace.

And it's not the only one: OpenAI warned in December that its upcoming models posed a "high" cybersecurity risk. Experts have already said AI can amplify existing risks and rapidly generate new software hacks.

But the rise of AI agents, or AI assistants that can carry out tasks autonomously, takes that risk to another level, some experts warn. A single AI agent could scan for vulnerabilities and potentially exploit them faster and more persistently than hundreds of human hackers.

“The agentic attackers are coming,” said Shlomo Kramer, founder and CEO of cybersecurity and networking company Cato Networks. “This is a watershed event in the history of cybersecurity.”

Details about Mythos leaked in an unpublished blog post first reported on by Fortune. Anthropic did not respond to NCS's request for comment. But the company told Fortune the leak was the result of human error within its content management system.

“Although Mythos is currently far ahead of any other AI model in cyber capabilities, it presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders,” Anthropic said in the draft.

The company is letting certain organizations test the model ahead of time to improve their defenses “against the impending wave of AI-driven exploits,” it said.

Anthropic is also privately warning government officials about the potential for large-scale cyberattacks enabled by Mythos, according to Axios.

But each lab's next model will pose increasingly severe cybersecurity threats, Kramer told NCS.

“Behind Mythos is the next OpenAI model, and the next Google Gemini, and a few months behind them are the open-source Chinese models,” he said.


AI is making it possible to exploit vulnerabilities almost immediately after discovering them, said Evan Peña, chief offensive security officer at cybersecurity firm Armadin.

But there are still limits to what the models can do, according to Peña.

Advanced AI models are good at researching software vulnerabilities and developing code to exploit them. But they lack the context a human hacker would have about which of an organization's data is most valuable to steal, Peña said.

There will always be room for humans in a cyberattack using AI, said Joe Lin, co-founder and CEO of Twenty, a firm that sells offensive cyber capabilities to the US government.

“We must ensure we are building weapons systems where humans remain firmly in control of decisions and outcomes, because while the machine handles the execution, the human must always own the consequences,” he said.

An example of how AI has made relatively unskilled hackers more dangerous came in January, when a Russian-speaking cybercriminal used multiple AI tools to hack over 600 devices running a common firewall software in more than 55 countries, according to Amazon Web Services' security research team. The hacker used generative AI services to “implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” AWS said.

The hacker used Anthropic's Claude model as well as Chinese-made DeepSeek in the attack, according to Eyal Sela, director of threat intelligence at Gambit Security. At one point, the hacker asked Claude in Russian to create a web panel for managing hundreds of the hackers' targets, according to chat logs the hacker had with AI models that Sela shared with NCS.

AI gives hackers of varying skill “superpowers” by lowering the technical knowledge required to exploit systems, according to Sela.

In February a hacker used Claude in a series of attacks against Mexican government agencies, stealing sensitive tax and voter data, Bloomberg reported.

China and other US adversaries are “hunting for any edge to improve the performance of their homegrown AI,” Lin said.

That means potentially mining any leaks of US AI models to try to “supercharge their own cyber weapons systems,” he said.

AI advancements in cybersecurity are a double-edged sword: Attackers can use AI models and agents to boost their abilities, while those same capabilities enable continuous monitoring, faster threat identification, and automated patching at a scale no human team could match.

But attackers only need to find one way in, while defenders have to cover every surface. Kramer described it as building an “army of good guys” to “fight the army of bad guys” just to hold the line.

“You need to run as fast as you can in order to stay in the same place,” he said.
