Almost everyone is now using AI to do a variety of tasks from search to even writing emails and doing research. Just like how every technology has it's flaws, AI is no exception with threat actors taking advantage of vulnerabilities to get private information of users. A newly uncovered AI injection prompt vulnerability in the Google Gemini enterprise AI ecosystem, allows attackers to steal sensitive Gmail, Docs, and Calendar data but the good news is this vulnerabilites has been patched. So how was this attack carried, how to see if you were affected and how to fix it from your end. Join us in this article for step by step guide, experts also warn this is just the eginning of targeted AI attacks.
The “GeminiJack” vulnerability, discovered within Google Gemini Enterprise and previously in Vertex AI Search, was identified by researchers at Noma Labs in May. After collaborating on a fix with Google, it was publicly disclosed on Tuesday. Noma Labs found that Gemini Enterprise was tripped up by how it trusted whatever Workspace content it pulled into its own context. Whenever an employee ran a search, Gemini automatically gathered relevant items and treated everything inside them as safe material to interpret.
By exploiting an organization’s reliance on Google’s Workspace tools and sharing, attackers were able to manipulate everyday workflows to access and exfiltrate that company's sensitive information. “A shared Google Doc, a Google Calendar invite, or even a Gmail instantly becomes a persistent open channel into your corporate data,” Noma said.
What’s more, unlike traditional software bugs, GeminiJack is not a conventional flaw, but an “architectural weakness” in how Google’s enterprise AI systems interpret user-provided content. It is also considered one of the most significant AI-driven security risks to hit the corporate cloud so far, due to the fact that the bug required no user interaction to do its damage.
No prompts, no warnings
GeminiJack didn’t wait for a careless click or a convincing phish. It is activated during routine Gemini Enterprise queries, the kind employees run dozens of times a day. No prompts, no warnings, no visible interaction. "No clicks were required from the targeted employee. No warning signs appeared. And no traditional security tools were triggered,” Noma explained in its security blog. “Incidents like
GeminiJack show that prompt injection and data leakage are no longer edge-case research topics,” said James Wickett, CEO of DryRun Security, adding that “these bugs are a symptom of a deeper architectural problem in how enterprises are wiring LLMs into their systems, even at the biggest companies."
To monitoring systems, everything still looked routine. Data loss prevention (DLP) tools saw a standard AI query. Email scanners saw clean content. Endpoint defenses spotted no malware or credential theft. Even the exfiltration hid inside what looked like a harmless image request, indistinguishable from normal browser traffic.
With nothing suspicious to flag, the attack moved straight past traditional controls. The AI itself executed the steps, turning everyday activity into an invisible handoff of sensitive Workspace data.

How it worked
According to Noma researchers, attackers could embed hidden instructions inside a shared document or message. When an employee later performed a routine search using Google’s Gemini Enterprise AI, the AI assistant automatically retrieved the manipulated content, executed the malicious instructions, and simply exfiltrated the sensitive data via a disguised external image request.
Once a poisoned file was in play, a single run of Gemini could assemble far more information than the person searching ever had in mind. The model followed the attacker’s buried cues alongside the user’s request, broadening what it pulled together.
That sweep could touch long-running correspondence, project and deal timelines, contract language, financial notes, technical documentation, HR material, and other records that normally sit deep in a company’s systems. The attacker didn’t need insider knowledge to reach any of it; general terms like “confidential,” “acquisition,” or “salary” were enough to steer Gemini toward the most sensitive corners.
The Fix
Google, which “promptly responded to the disclosure,” says its teams collaborated with Noma to “understand the attack vector and implement comprehensive mitigations.”
After reviewing Noma Labs’ findings, Google reworked how Gemini Enterprise handles retrieved content, tightening the pipeline to block hidden instructions. It also separated Vertex AI Search from Gemini’s instruction-driven processes to avoid future crossover issues.
The fix was said to address “the core issue of instruction/content confusion in the RAG processing pipeline.” Noma said. RAG or Retrieval-Augmented Generation is a recurring pipeline of document pre-processing, ingestion, and embedding generation in real time, returning a user's query into digestible information with limited hallucinations, according to Nvidia. All of it funneled directly to an attacker via what appeared to be a standard image request, indistinguishable from legitimate traffic.
In light of this attack, it's all clear to see where the future seems to be heading to, with these AI attacks going to get worse.
If you have a tip, a story, or something you want us to cover get in touch with us by clicking here. Sign up to our newsletter so you won’t miss a post and stay in the loop and updated also we will be launching a free basic cybersecurity short course for beginners to teach you how to protect yourself online. Just subscribe for free to our newsletter and create an account on perusee to be eligible.
Note: You can also advertise on Perusee, just contact us, call or app +263 78 613 9635
Click here to Follow our WhatsApp channel
Keep comments respectful and inline with the article, also create an account and login to chat with members in our forum, get help on issues you need help with from community members.