Email-Triggered Attack Can Nuke Google Drive Via New Agentic Browser Flaw
Researchers at Straiker STAR Labs have uncovered a serious vulnerability in Perplexity's Comet browser that allows attackers to weaponize ordinary-looking emails. This exploit, dubbed an "agentic browser attack," can automatically delete a victim's entire Google Drive without any user interaction.
Ever imagine a simple email could wipe out your entire Google Drive? Well, security researchers at Straiker STAR Labs have uncovered a scary new "agentic browser attack" targeting Perplexity's Comet browser. It's a zero-click technique that can turn a seemingly harmless email into a destructive force.
Here's the deal: these AI browsers connect to services like Gmail and Google Drive to automate tasks. They get permission to read emails, browse your files, and even perform actions like deleting things. Seems convenient, right? But it's also a potential security nightmare.
Think about it. You might ask your AI browser, "Hey, can you check my email and organize my recent tasks?" The browser then dives into your inbox and gets to work.
"This is where things get tricky," says security researcher Amanda Rousseau in a report. "These LLM-powered assistants can sometimes go way beyond what you explicitly asked for."
Attackers are now exploiting this "excessive agency." They can send a carefully crafted email with natural language instructions that sound like a routine cleanup task. This email might tell the browser to organize your Drive, delete files with certain extensions, or remove files not in folders. And here's the kicker: the AI browser will just do it, no questions asked!
The result? A browser-agent-driven wiper that moves critical content to the trash at scale, triggered by a single, natural-language request. As Rousseau explains, "Once an agent has OAuth access to Gmail and Google Drive, abused instructions can propagate quickly across shared folders and team drives."

What makes this attack so dangerous is that it doesn't require any fancy hacking techniques. It simply relies on being polite and providing sequential instructions. Phrases like "take care of," "handle this," and "do this on my behalf" can trick the AI into complying with malicious instructions without checking if they're safe.
In essence, the attack shows how sequencing and tone can manipulate the large language model (LLM) into doing bad things. So, what can you do to protect yourself?
Experts recommend securing not just the AI model itself, but also the agent, its connectors, and the natural language instructions it follows.
Rousseau sums it up this way: "Agentic browser assistants turn everyday prompts into sequences of powerful actions across Gmail and Google Drive. When those actions are driven by untrusted content (especially polite, well-structured emails) organizations inherit a new class of zero-click data-wiper risk."
Another AI Browser Threat: HashJack
As if that wasn't enough, Cato Networks recently revealed another attack called "HashJack." This one targets AI-powered browsers by hiding malicious prompts within legitimate URLs. The trick? The rogue prompts are placed after the "#" symbol (e.g., "www.example[.]com/home#<prompt>").
An attacker can share this specially crafted URL via email, social media, or even embed it on a website. When a victim clicks the link and asks the AI browser a question, the hidden prompt gets executed.
"HashJack is the first known indirect prompt injection that can weaponize any legitimate website to manipulate AI browser assistants," says security researcher Vitaly Simonovich in a blog post. "Because the malicious fragment is embedded in a real website's URL, users assume the content is safe while hidden instructions secretly manipulate the AI browser assistant."

After being notified, Google classified HashJack as "won't fix (intended behavior)" and low severity. However, Perplexity and Microsoft have released patches for their AI browsers (Comet v142.0.7444.60 and Edge 142.0.3595.94). Interestingly, Claude for Chrome and OpenAI Atlas seem to be immune.
It's also important to remember that Google doesn't consider policy-violating content generation and guardrail bypasses as security vulnerabilities under its AI Vulnerability Reward Program (AI VRP).