In January 2025, Google's Threat Intelligence Group published a finding that received less attention than it deserved: more than 57 distinct nation-state-aligned Advanced Persistent Threat groups from China, Iran, North Korea, and Russia had been observed using AI to support their cyber operations. Not experimenting with AI. Not building proofs of concept. Operationally deploying AI as part of live attack campaigns.
By mid-2025, one campaign attributed to actors with Chinese state alignment had achieved 80 to 90 percent autonomous operation — using large language models to automate reconnaissance, target selection, and exploit refinement. The human operators were still in the loop, but they were directing, not executing. The AI was executing.
The asymmetry this creates — adversaries operating at machine speed while defenders respond at human speed — is significant and underappreciated.
What Nation-State AI Actually Looks Like
When most security professionals hear "AI-powered attacks," they imagine automated phishing emails or slightly faster port scans. The reality in 2025 and 2026 is considerably more sophisticated. These groups are using AI across the full kill chain — not just at the edges.
Reconnaissance
AI models are used to process vast quantities of open-source intelligence: employee LinkedIn profiles, GitHub repositories, job postings, conference presentations, patent filings. The goal is to build a target model — understanding the organization's technology stack, its key personnel, its third-party dependencies — faster and more comprehensively than human analysts could manage. What used to take a team of analysts weeks now takes hours.
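At its core, this is an aggregation problem: fold many noisy sources into one deduplicated model of the target, then let a language model infer structure from the unstructured remainder. A minimal, hypothetical sketch of the aggregation step — the sources, categories, and values below are illustrative placeholders, not drawn from any real tooling:

```python
from collections import defaultdict

# Hypothetical OSINT records: (source, category, value) tuples.
# In a real pipeline these would come from scraped profiles, repos,
# job postings, and talks; here they are illustrative stand-ins.
RAW_RECORDS = [
    ("job_posting", "tech_stack", "Kubernetes"),
    ("github", "tech_stack", "Terraform"),
    ("linkedin", "personnel", "cloud platform engineer"),
    ("job_posting", "tech_stack", "Kubernetes"),
    ("conference_talk", "third_party", "managed identity provider"),
]

def build_target_model(records):
    """Fold raw OSINT records into a deduplicated target model,
    keyed by category, recording which sources support each item."""
    model = defaultdict(lambda: defaultdict(set))
    for source, category, value in records:
        model[category][value].add(source)
    # Convert to plain dicts for inspection or serialization.
    return {cat: {v: sorted(srcs) for v, srcs in items.items()}
            for cat, items in model.items()}

model = build_target_model(RAW_RECORDS)
```

The trivial fold above is the part that was always automatable; the change in 2025 is that the inference step — reading a job posting and concluding "this organization runs Kubernetes on a specific cloud, managed by these three people" — is now automated too.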
Spearphishing at Scale
Iranian-aligned groups have been documented using AI to generate highly personalized spearphishing content — messages that reference specific projects, use appropriate internal jargon, and mimic the writing styles of known colleagues. The barrier between targeted and mass phishing has collapsed. What was previously a resource-intensive, high-precision technique is now scalable.
Exploit Development and Refinement
North Korean groups affiliated with the Lazarus cluster have been observed using AI to assist in vulnerability research and exploit refinement — iterating on attack payloads faster than human developers could, and adapting to defensive responses in near real time. The development cycle for a working exploit has compressed significantly.
Lateral Movement
Chinese-aligned groups have deployed AI to reason about network topology and identify optimal lateral movement paths — the sequences of pivot points between an initial foothold and a high-value target. This kind of reasoning was previously the domain of experienced human operators. It is now increasingly automated.
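The underlying computation is classic graph search — the same analysis defenders run with attack-path tooling. What AI changes is the scale, and the quality of the edge inference (deciding which pivots are actually feasible). A minimal sketch, with a hypothetical topology and hostnames:

```python
from collections import deque

# Illustrative attack graph: nodes are hosts, edges are feasible
# pivots (credential reuse, exposed service, trust relationship).
# Topology and names are hypothetical.
ATTACK_GRAPH = {
    "workstation": ["file-server", "jump-host"],
    "file-server": ["backup-server"],
    "jump-host": ["domain-controller"],
    "backup-server": ["domain-controller"],
    "domain-controller": [],
}

def shortest_pivot_path(graph, foothold, target):
    """Breadth-first search for the fewest-hop pivot chain
    from an initial foothold to a high-value target."""
    queue = deque([[foothold]])
    seen = {foothold}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no reachable path

print(shortest_pivot_path(ATTACK_GRAPH, "workstation", "domain-controller"))
# -> ['workstation', 'jump-host', 'domain-controller']
```

The hard part was never the search; it was building an accurate graph and judging edge feasibility — the experienced-operator judgment that is now increasingly delegated to models.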
The Big Four: What Each Adversary Prioritizes
China (APT groups including Salt Typhoon, Volt Typhoon): Long-dwell, intelligence-focused operations targeting critical infrastructure, telecommunications, and technology IP. The objective is typically persistence over years, not immediate disruption. AI is used primarily for reconnaissance depth and lateral movement efficiency.
Russia (Fancy Bear / APT28, Sandworm): Operational disruption of Ukraine and NATO partners, with expanding focus on European critical infrastructure. 75% of Russian nation-state attacks in a recent 12-month period targeted Ukraine or a NATO member state. AI augments both the speed of operations and the sophistication of disinformation campaigns running in parallel.
Iran (APT33, APT34, Charming Kitten): Persistent credential access campaigns targeting critical infrastructure, with documented brute force and MFA manipulation techniques. Iranian-backed groups have grown significantly more sophisticated in social engineering, using AI to generate persona-consistent content at scale.
North Korea (Lazarus Group): Financially motivated operations — cryptocurrency theft, ransomware — funding state programs. Technically sophisticated, highly adaptive, and increasingly reliant on AI for exploit development and social engineering targeting crypto and financial sector employees.
The Defense Asymmetry
The challenge for defenders is not simply that attackers are now faster. It is that the speed differential between offense and defense has become structural. Nation-state groups with significant resources, AI tooling, and years of operational experience can now execute campaigns at machine speed. Most enterprise security teams are still operating at human speed — reviewing alerts, scheduling tabletops, commissioning annual pen tests.
The only rational response to AI-enabled offense is AI-enabled defense. Not AI-augmented defense — tools that make human analysts marginally faster. Autonomous defense: continuous validation that operates at the same cadence as the threat, identifies exploitable paths before adversaries reach them, and adapts to the same changing environment that attackers are mapping in real time.
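The architectural difference between a point-in-time assessment and continuous validation can be sketched as a loop: snapshot the environment, and whenever it drifts, re-run the attack-path analysis. Everything below is an illustrative stand-in, not any vendor's implementation:

```python
import time

def snapshot_environment():
    """Stand-in for real discovery (asset inventory, exposed
    services, trust relationships). Returns a hashable summary."""
    return frozenset({("web-01", "ssh", "exposed")})  # illustrative

def find_exploitable_paths(env):
    """Stand-in for attack-path analysis over the snapshot."""
    return [entry for entry in env if entry[2] == "exposed"]

def validation_loop(iterations=3, interval=0.0):
    """Revalidate only when the environment changes, so the
    analysis runs at the cadence of drift, not of the calendar."""
    last_env = None
    findings = []
    for _ in range(iterations):
        env = snapshot_environment()
        if env != last_env:  # environment drifted: revalidate
            findings = find_exploitable_paths(env)
            last_env = env
        time.sleep(interval)
    return findings
```

An annual pen test is this loop with one iteration per year; the adversary's reconnaissance runs it continuously.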
The 57 groups documented by Google are not slowing down. They are building capability, sharing tradecraft, and operationalizing techniques faster than any point-in-time security assessment can track. The question for every organization is not whether to take this seriously. It is whether to respond at the speed the threat demands.
See what continuous testing finds in your environment.
Tadpole deploys autonomous agents that simulate real adversaries — 24/7, across your entire attack surface.
Request early access →