Researchers have demonstrated a serious vulnerability in Google Gemini where a hidden calendar invite can trick the AI into controlling smart home devices.

Quick Summary – TLDR:

  • Security researchers used hidden prompts in calendar invites to hijack Google Gemini
  • The AI assistant triggered real-world actions like opening windows and turning on boilers
  • The exploit demonstrates a new class of risk called “indirect prompt injection”
  • Google says attacks are rare but is rolling out stronger protections and human checks

What Happened?

At the Black Hat cybersecurity conference in Las Vegas, three security researchers unveiled a striking demonstration: they hijacked Google Gemini AI using a simple calendar invite and used it to take over a smart home. Their technique manipulated Gemini into opening shutters, turning on appliances, and initiating video calls using subtle hidden commands.

The Calendar Trap: A New AI Threat

In what is now being called one of the first physical-world exploits of a generative AI assistant, researchers embedded malicious prompts into Google Calendar invitations. When a user later asked Gemini to summarize their schedule, the AI unknowingly parsed these hidden instructions and activated Google Home devices.

The commands were triggered not immediately, but when the user responded to Gemini with casual phrases like “thank you” or “sure”, which were wired to initiate actions like opening windows or starting a Zoom call.

Key Findings from the Demonstration:

  • 14 different attacks were developed, involving everything from sending spam links to stealing meeting data
  • The hacks were crafted in plain English and did not require any technical knowledge
  • Actions were initiated through Google’s Home AI agent, effectively turning Gemini into a physical controller
  • One prompt even made Gemini voice a disturbing scripted insult after a simple calendar inquiry

Ben Nassi (Tel Aviv University), Stav Cohen (Technion), and Or Yair (SafeBreach) led the research project, called “Invitation Is All You Need,” a nod to the original AI paper “Attention Is All You Need.”

Google’s Response and Risk Assessment

Google was alerted to the vulnerability in February 2025. Andy Wen, senior director of security product management at Google Workspace, acknowledged the seriousness of the flaw. He emphasized that while real-world attacks are currently “exceedingly rare,” the growing complexity of large language models makes this threat class hard to eliminate entirely.

In response, Google has:

  • Accelerated defenses against prompt injections
  • Introduced machine learning tools to detect malicious prompts
  • Added human confirmation steps for certain AI-triggered actions

Wen stated, “Sometimes there’s just certain things that should not be fully automated, that users should be in the loop.”

What Makes This Vulnerability Dangerous

This form of indirect prompt injection doesn’t rely on the user typing malicious commands. Instead, it sneaks commands into data Gemini interacts with, such as:

  • Calendar event titles
  • Email subject lines
  • Hidden text in web pages or documents

This makes it especially dangerous in a world where AI assistants are expected to access user data to offer seamless productivity. When combined with smart home integrations, the risk spills from digital inconvenience to real-world consequences.

Independent researcher Johann Rehberger, who first demonstrated tool invocation attacks against Gemini earlier this year, called this latest research a major escalation. He said, “They showed at large scale how things can go bad, including real implications in the physical world.”

The Bigger Picture: AI Speed vs Security

While Google is investing heavily in educational and enterprise AI features like Guided Learning and offering free AI Pro subscriptions, experts warn that security isn’t catching up fast enough. The race to ship AI features has led to applications being deployed before they’re thoroughly protected.

In their research paper, the trio of hackers wrote: “LLM-powered applications are more susceptible to promptware than many traditional security issues.”

SQ Magazine Takeaway

This story blew my mind. We often think of AI vulnerabilities as something abstract or nerdy, but this was physical. Imagine your smart home turning on the heater or opening the windows because of a calendar invite. It’s not science fiction anymore. While Google has put in new guardrails, this research is a wake-up call. AI can make life easier, but if not secured properly, it can also make us vulnerable in ways we’re just beginning to understand. Always keep human oversight in the loop.

Avatar of Rajesh Namase

Rajesh Namase

Tech Editor


Rajesh Namase is a seasoned tech blogger, digital entrepreneur, and founder of SQ Magazine. Known for creating the popular tech blog TechLila, he now covers cybersecurity and technology news with a focus on how digital trends shape modern life. Rajesh enjoys playing badminton, practicing yoga, and exploring new ideas beyond the screen.
Disclaimer: Content on SQ Magazine is for informational and educational purposes only. Please verify details independently before making any important decisions based on our content.

Reader Interactions

Leave a Comment