
One employee at Vercel adopted an AI tool. One employee at that AI vendor got hit with malware. That simple chain created a direct path into Vercel’s production systems through OAuth permissions that nobody had reviewed.
Vercel, the cloud platform behind Next.js and its millions of weekly downloads, confirmed attackers gained unauthorized access to internal systems. The company brought in Mandiant investigators and notified law enforcement. An update confirmed that Vercel worked with GitHub, Microsoft, npm, and Socket to verify that no Vercel packages were compromised.
The attack chain: from game cheats to production access
Context.ai was the entry point. According to analysis from OX Security, a Vercel employee installed the Context.ai browser extension and signed in using their corporate Google account, granting broad OAuth permissions. When Context.ai was breached, attackers inherited that employee’s workspace access and escalated privileges by accessing environment variables not marked as “sensitive.”
CEO Guillermo Rauch described the attacker as “highly sophisticated and significantly accelerated by AI.” Security researchers independently confirmed a second OAuth grant tied to Context.ai’s Chrome extension. Google removed the extension from the Chrome Web Store on March 27.
Hudson Rock published forensic evidence tracing the breach to a February 2026 malware infection on a Context.ai employee’s machine. Browser history showed the employee downloading Roblox cheat scripts and game exploit tools. The malware harvested Google Workspace logins, database keys, and other credentials.
Detection gaps across the kill chain
The attack unfolded across four distinct phases that security teams struggle to monitor:
- Malware infection: Context.ai employee downloaded game cheats; malware harvested credentials
- Cloud compromise: Attacker used stolen credentials to access Context.ai’s AWS environment
- OAuth token theft: Compromised tokens accessed Vercel employee’s Google Workspace
- Lateral movement: Attacker found production credentials in accessible environment variables
Most organizations cannot detect OAuth token usage from compromised third parties. Context.ai detected the AWS compromise in March but missed the OAuth token theft until Vercel disclosed the breach weeks later.
What went wrong with governance
The breach exposed six critical failures that apply beyond Vercel:
- AI tool OAuth permissions went unaudited despite “Allow All” access
- Environment variables defaulted to readable instead of secure
- No detection coverage for attacks crossing multiple organizational boundaries
- A delay of nearly a month between vendor detection and customer notification
- Third-party AI tools became new shadow IT without security oversight
- AI-accelerated attackers compressed response timelines
“Don’t give an agent access to everything just because you’re lazy,” CrowdStrike CTO Elia Zaitsev said at RSAC 2026. “Give it access to only what it needs.”
Immediate actions for security teams
Organizations should check their Google Workspace admin console for two specific OAuth App IDs linked to Context.ai. Navigate to Security > API Controls > Manage Third-Party App Access and search for these identifiers:
- 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
- 110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com
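If you export your organization's third-party OAuth grants (from the admin console or via the Admin SDK), a small script can flag any user who authorized the compromised app. This is a minimal sketch; the `grants` record shape (`user`, `client_id` keys) is an assumption, and your export format will likely differ:

```python
# Known-compromised Context.ai OAuth client IDs from the Vercel disclosure.
CONTEXT_AI_CLIENT_IDS = {
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com",
    "110671459871-f3cq3okebd3jcg1lllmroqejdbka8cqq.apps.googleusercontent.com",
}

def flag_compromised_grants(grants):
    """Return grants whose OAuth client ID matches a known-compromised app.

    `grants` is assumed to be a list of dicts with 'user' and 'client_id'
    keys -- adapt the field names to whatever your export produces.
    """
    return [g for g in grants if g.get("client_id") in CONTEXT_AI_CLIENT_IDS]

# Illustrative data only; 'alice' and 'bob' are hypothetical users.
example = [
    {"user": "alice@example.com",
     "client_id": "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"},
    {"user": "bob@example.com",
     "client_id": "some-other-app.apps.googleusercontent.com"},
]
flagged = flag_compromised_grants(example)
print([g["user"] for g in flagged])  # -> ['alice@example.com']
```

Any flagged user's tokens should be revoked and their sessions reset, since the attacker may hold a valid copy of the grant.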
Security directors should also audit all AI tool OAuth grants, default environment variables to secure storage, and require 72-hour notification clauses in vendor contracts.
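For the environment-variable audit, a name-based heuristic can surface likely secrets that are stored as plain readable values rather than in protected storage. This is a sketch under assumptions: the name patterns are illustrative rather than exhaustive, and the `{'value', 'sensitive'}` record shape is hypothetical, standing in for whatever metadata your platform exposes:

```python
import re

# Heuristic: variable names that usually indicate credentials.
SECRET_NAME_PATTERN = re.compile(
    r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.IGNORECASE
)

def find_unprotected_secrets(env_vars):
    """Flag credential-like variables not marked as sensitive.

    `env_vars` maps a variable name to {'value': str, 'sensitive': bool}.
    Returns the sorted names that look like secrets but are stored readable.
    """
    return sorted(
        name for name, meta in env_vars.items()
        if SECRET_NAME_PATTERN.search(name) and not meta.get("sensitive", False)
    )

# Illustrative inventory only.
sample = {
    "DATABASE_URL": {"value": "postgres://...", "sensitive": False},
    "STRIPE_SECRET_KEY": {"value": "sk_live_...", "sensitive": False},
    "API_TOKEN": {"value": "abc123", "sensitive": True},
}
print(find_unprotected_secrets(sample))  # -> ['STRIPE_SECRET_KEY']
```

A heuristic like this catches the obvious cases; the safer default, as the Vercel breach showed, is to treat every environment variable as sensitive unless proven otherwise.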
The bigger picture for enterprise security
This isn’t just about Vercel. It’s the first major case proving that AI agent OAuth integrations create attack paths most security programs cannot handle. A game cheat download in February led to production access in April across four organizational boundaries.
Most employees have connected AI tools to corporate Google, Microsoft, or Slack accounts with broad permissions – without security teams knowing. The Vercel breach shows what that hidden exposure looks like when attackers find it first.