When Attackers Use AI, Defence Must Scale Too
State-sponsored groups, criminal enterprises, and lone actors are now using AI to conduct operations that previously required entire specialist teams. The turning point is not approaching. It has arrived.
In September 2025, Anthropic’s threat intelligence team detected something that had been predicted but never documented at scale: a Chinese state-sponsored group had weaponised an AI coding assistant to conduct an espionage campaign across roughly thirty global targets. The AI performed 80–90% of the tactical work, spanning reconnaissance, vulnerability discovery, exploit development, credential harvesting, lateral movement, and data exfiltration, making thousands of requests, often several per second. Human operators intervened at perhaps four to six decision points per campaign. The rest was machine-driven.
This was not an isolated event. Within months, a lone attacker jailbroke the same AI platform to breach ten Mexican government agencies, exfiltrating 150GB of data — taxpayer records, voter rolls, civil registry files, government employee credentials — across a month-long campaign. The attacker sent over a thousand prompts, posing as a security researcher in a fictitious bug bounty programme. When the AI’s guardrails pushed back, the attacker persisted until they gave way. As the cybersecurity firm that discovered the breach put it: AI didn’t just assist — it functioned as the operational team.
These are not edge cases. CrowdStrike’s 2026 Global Threat Report documents an 89% year-over-year increase in AI-enabled adversary operations. The FBI’s 2025 IC3 report recorded a 37% rise in AI-assisted business email compromise. Anthropic’s own August 2025 misuse report revealed that a single cybercriminal used AI to conduct large-scale data theft and extortion across seventeen organisations, with the AI autonomously deciding which data to exfiltrate and crafting psychologically targeted ransom demands. In the same report, North Korean operatives who could not previously write basic code were passing technical interviews at Western technology companies, their skill gaps eliminated by AI.
The pattern is consistent and accelerating. Research now shows that AI systems can generate working exploits for known vulnerabilities in ten to fifteen minutes, at roughly a dollar per exploit, making it feasible to weaponise more than 130 new CVEs per day at industrial scale. Polymorphic malware tools already use large language models to regenerate their code on every execution, so each run produces a different binary and therefore a different hash, and hash-based signature detection never gets a match. Amazon’s threat intelligence team identified a Russian-speaking actor who used commercially available AI to compromise more than 600 network appliances across 55 countries.
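To see why regeneration defeats hash matching, consider a minimal sketch. The payloads and the signature set below are invented for illustration; the point is only that two functionally equivalent samples hash to unrelated values, so a detector keyed on known-bad digests catches the catalogued variant and nothing else.

```python
import hashlib

# Two functionally equivalent payloads. A model that rewrites its own source
# on every run produces this kind of variation automatically: renamed
# variables, reordered statements, junk padding, different string encodings.
variant_a = b"cmd = 'whoami'; run(cmd)"
variant_b = b"c = 'who' + 'ami'\nrun(c)  # nop padding"

# A signature database keyed on file hashes matches exact bytes and nothing else.
known_bad = {hashlib.sha256(variant_a).hexdigest()}

def hash_detect(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in known_bad

print(hash_detect(variant_a))  # True:  the catalogued sample
print(hash_detect(variant_b))  # False: same behaviour, different bytes, new hash
```

Every regenerated variant starts at zero detections under this scheme, which is why defence has to move from matching artefacts to analysing behaviour, at the same speed the variants are produced.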
The end of the horse
There is a useful parallel in military history. At the turn of the twentieth century, cavalry was the elite arm of every major army: fast, skilled, decisive in the right conditions, and deeply prestigious. But cavalry was also capacity-constrained. You could not mass-produce experienced horsemen, and horses could not sustain the tempo that industrialised warfare demanded. When repeating rifles, machine guns, and eventually mechanisation changed the economics of the battlefield, the cavalry’s supremacy ended — not because the riders lacked courage or skill, but because the system of warfare had moved beyond what they could deliver.
Cybersecurity has been built on a similar model. High-end offensive and defensive work has depended on small numbers of highly trained people — penetration testers, reverse engineers, SOC analysts, incident responders. They are expensive, difficult to scale, inconsistent in output across teams, and unable to provide continuous coverage over attack surfaces that grow faster than headcount. That model assumed cognition was scarce, speed was bounded by human labour, and coverage was necessarily partial.
Those assumptions are now obsolete. What the incidents above demonstrate is not that AI can help a good analyst write better notes. It is that AI can compress discovery timelines from months to hours, expand coverage across entire estates, lower the skill threshold for sophisticated attacks, and make parallelised operations possible at a scale that no human team can match. The breach of Mexico’s government agencies was not a nation-state operation. It was, by all indications, the work of a single individual with a chatbot.
The cavalry was not disbanded
Critically, cavalry regiments were not simply dissolved. Many of the most storied formations — the Queen’s Royal Hussars, the King’s Royal Hussars, the Light Dragoons — exist today as armoured and reconnaissance regiments. The expertise, the doctrine, the institutional knowledge survived. What changed was the platform. The horse gave way to the tank, the armoured car, the helicopter. The riders became commanders, reconnaissance specialists, and tactical decision-makers operating at mechanised speed.
The same transition is now required in cybersecurity. The best human operators are not becoming irrelevant — they are becoming more important strategically. But asking them to operate without AI-driven tooling is increasingly equivalent to asking them to ride into a mechanised battlefield on horseback. The courage and skill remain; the platform is wrong.
This is not a tooling upgrade. It is a doctrinal break. At [un]prompted, the AI security practitioner conference held in San Francisco in March 2026, the dominant theme was the shift from theoretical risk to operational reality. The “Zero Day Clock” initiative, a coalition arguing for radical reform of vulnerability management, highlighted that the mean time from vulnerability disclosure to a working exploit has collapsed from months to hours. Researchers from Trend Micro demonstrated that ordinary documents fed into AI-driven KYC pipelines could be weaponised into attack vectors against the systems that process them. Their FENRIR system, which uses multi-stage AI pipelines to hunt for zero-day vulnerabilities at scale, embodies the defensive counterpart: AI must now be used to find weaknesses faster than attackers can exploit them.
Anthropic itself has made this case explicitly. Its Mythos model, announced as this post was being written, achieved full control-flow hijacks on ten separate, fully patched targets in testing — discovering vulnerabilities that had survived decades of automated and manual review, including a 27-year-old bug in OpenBSD and a 16-year-old flaw in widely used video software. The company’s framing is stark: if defensive AI can do this, offensive AI will soon match it. The question is whether defenders will have adopted these capabilities before attackers proliferate them.
What this means in practice
The organisations that will navigate this transition successfully are those that redesign their operating model around AI-first security production — not those that bolt a co-pilot onto a legacy service model. Giving the cavalry a few trucks did not make it mechanised infantry. In the same way, adding an AI assistant to a quarterly penetration test does not constitute continuous, adaptive defence.
Security validation must become continuous, automated, and conducted at machine speed. Human expertise must be redirected from manual execution toward commanding AI-driven operations: setting objectives, validating findings, handling exceptions, designing deception, and governing the systems themselves. The human expert becomes more important, not less, but their role changes fundamentally.
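In caricature, that operating model looks something like the loop below. This is a deliberately simplified sketch, not a description of any particular product; every function name is hypothetical. The point is the division of labour: the machine cycles over the whole estate continuously, and the human appears only at the decision points.

```python
import time

# Hypothetical interfaces, named purely for illustration.

def discover_assets() -> list[str]:
    """Enumerate the estate as it exists right now, not as of last quarter."""
    return ["api.internal.example", "vpn.internal.example"]

def run_ai_validation(asset: str) -> list[dict]:
    """An AI agent probes one asset against current objectives, returns findings."""
    return [{"asset": asset, "issue": "exposed admin interface", "severity": "high"}]

def needs_operator(finding: dict) -> bool:
    """Exceptions go to a human: novel, ambiguous, or high-impact findings."""
    return finding["severity"] == "high"

def escalate(finding: dict) -> None:
    print(f"operator decision required: {finding}")

def auto_triage(finding: dict) -> None:
    print(f"auto-triaged: {finding}")

# The machine runs the loop at machine speed; the human sets objectives,
# rules on escalations, and governs the system itself.
while True:
    for asset in discover_assets():
        for finding in run_ai_validation(asset):
            if needs_operator(finding):
                escalate(finding)
            else:
                auto_triage(finding)
    time.sleep(3600)  # re-validate the whole estate hourly, not quarterly
```

The design choice that matters is the escalation predicate: it encodes which judgements the organisation refuses to delegate, which is a doctrinal decision, not a technical one.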
This is the premise on which we built Arcseer. When cavalry regiments were reconstituted as armoured formations, the most valuable thing they carried forward was not their equipment — it was their experience. Decades of operational knowledge about terrain, timing, reconnaissance, and the behaviour of adversaries under pressure. That experience became the doctrine around which mechanised warfare was designed.
We believe the same is true in cybersecurity. The deep operational experience of specialist security professionals, their understanding of how attackers think, how defences fail, and where the real exposure lies, is one of the most valuable and least replicable capabilities in the industry. But that experience needs to be channelled through the right platform. We have designed our technology and our solutions around this conviction: that AI does not replace the expertise of seasoned practitioners; it amplifies it. It allows experienced operators to project their knowledge across an entire attack surface, continuously, at machine speed, rather than being constrained to the handful of engagements that human labour alone permits.
The turning point is not approaching. It has arrived. The evidence is no longer speculative. State-sponsored groups, criminal enterprises, and lone actors are all using AI to conduct operations that would previously have required entire teams of experienced specialists. The skill barrier has dropped. The speed has increased by orders of magnitude. The attack surface continues to expand.
The question for every organisation responsible for defending critical systems is straightforward: are your people equipped for a mechanised battlefield, or are they still on horseback?