Google blocks hackers from using AI for massive cyberattack

Google’s security team says it stopped hackers who were planning to use artificial intelligence to launch a massive cyberattack. The company’s Threat Intelligence Group reported that criminals tried to use AI models to find and exploit software vulnerabilities on a large scale.

The hackers used AI to discover a zero-day vulnerability – a software flaw that its developers don’t yet know exists. They planned to use the flaw to bypass two-factor authentication systems. Google says it has “high confidence” that it caught the hackers in the act and likely prevented the attack.

This incident shows how cybercriminals are now using AI tools to find and exploit software weaknesses in ways that could seriously damage companies and government agencies. The development comes as cybersecurity firms are spending billions to strengthen their defenses against these new AI-powered threats.

Google didn’t name the hacker group involved, but said its own Gemini AI model wasn’t used in the attack. Instead, the criminals relied on other AI tools, such as OpenClaw, to plan their operation. The company describes the campaign as a “mass vulnerability exploitation operation” that could have affected many organizations.

The incident highlights growing concerns about AI being weaponized for cyberattacks. In April, AI company Anthropic delayed releasing its Mythos model because of fears that criminals could use it to find and exploit old software vulnerabilities. These concerns prompted White House meetings with tech and business leaders to discuss the risks.

Other major tech companies are also grappling with this challenge:

  • Anthropic has since released Mythos to a select group of testers, including Apple, CrowdStrike, Microsoft, and Palo Alto Networks
  • OpenAI announced last week that GPT-5.5-Cyber, designed for cybersecurity teams, is rolling out in limited preview
  • Multiple companies are racing to develop AI tools that can defend against AI-powered attacks

Google’s report reveals that hacker groups linked to China and North Korea have shown “significant interest in capitalizing on AI for vulnerability discovery.” The company provided several examples of how criminals are already using AI tools to find security flaws, launch cyberattacks, and develop malware.

This represents a major shift in how cyberattacks are carried out. Previously, finding zero-day vulnerabilities required deep technical expertise and a lot of time. Now, AI can automate much of that work, allowing less skilled criminals to discover and exploit software flaws far faster.

The cat-and-mouse game between attackers and defenders is intensifying as both sides adopt AI. While criminals use these tools to find vulnerabilities, cybersecurity companies are also using AI to detect and block attacks. This arms race is likely to define the future of cybersecurity as AI becomes more powerful and accessible.
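None of the companies involved have published how their defensive systems work, but the general pattern of “AI-assisted triage” is simple to sketch. The Python snippet below is a minimal, hypothetical illustration, not any vendor’s actual tooling: the query_model function is a stand-in stub for whichever AI model a security team happens to use, and the prompt, file paths, and JSON format are all assumptions made for the example. The idea is only that a model flags suspicious code for a human analyst to verify.

```python
import json
from pathlib import Path


def query_model(prompt: str) -> str:
    # Placeholder for a real LLM API call (Gemini, GPT, etc.).
    # This stub returns an empty finding list so the script runs
    # end to end without any network access or credentials.
    return json.dumps({"findings": []})


REVIEW_PROMPT = """You are assisting a security-focused code review.
List any potentially exploitable flaws (memory errors, injection,
authentication bypass) in the code below. Respond as JSON:
{{"findings": [{{"line": <int>, "issue": "<short description>"}}]}}

Code:
{code}
"""


def triage_file(path: Path) -> list[dict]:
    """Ask the model to flag suspicious patterns in one source file."""
    code = path.read_text(encoding="utf-8", errors="replace")
    response = query_model(REVIEW_PROMPT.format(code=code))
    try:
        return json.loads(response).get("findings", [])
    except json.JSONDecodeError:
        # Model output wasn't valid JSON; treat it as no usable findings.
        return []


def triage_repository(root: str) -> dict[str, list[dict]]:
    """Run the AI triage pass over every C source file under a directory."""
    results: dict[str, list[dict]] = {}
    for path in Path(root).rglob("*.c"):
        findings = triage_file(path)
        if findings:
            results[str(path)] = findings
    return results


if __name__ == "__main__":
    # Print anything the model flagged so an analyst can review it.
    # The model only prioritizes human review; it does not replace it.
    for filename, findings in triage_repository("./src").items():
        print(filename)
        for finding in findings:
            print(f"  line {finding.get('line')}: {finding.get('issue')}")
```

The same loop could just as easily be pointed at attack surfaces by an adversary, which is exactly the dual-use problem the article describes: the defensive and offensive workflows differ mainly in intent, not in code.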