Google is ramping up its AI security efforts by launching a dedicated bug bounty program that pays researchers up to $30,000 for uncovering serious vulnerabilities in its artificial intelligence systems.
Quick Summary – TLDR:
- Google’s new AI Vulnerability Reward Program (VRP) offers up to $30,000 for critical bug discoveries in AI systems like Gemini, Search, and Gmail.
- The program targets severe threats such as rogue actions, data exfiltration, and prompt injection.
- Google has paid out over $430,000 for AI-related vulnerabilities since opening the field to researchers.
- An automated AI agent named CodeMender is also in place to help patch discovered flaws.
What Happened?
Google has launched a standalone AI Vulnerability Reward Program, an evolution of its broader bug bounty efforts, aimed specifically at encouraging researchers to find AI-specific security flaws in flagship products. With top rewards of up to $30,000 for high-severity reports, the program marks a significant step in securing the growing number of AI tools Google offers.
📣 We’re delighted to announce our new, dedicated AI Vulnerability Reward Program 🥳 🎉!
— Google VRP (Google Bug Hunters) (@GoogleVRP) October 6, 2025
Join us in taking a look back at two years of AI bug bounties at Google and exploring the new AI VRP 👇https://t.co/x4Z5nwq07w
Google Targets Real-World AI Exploits
The new program moves beyond basic bugs or content issues. Instead, it zeroes in on exploits with real-world harm potential, such as:
- Rogue actions: Manipulating AI systems to perform unauthorized tasks, like unlocking smart home devices or altering user data.
- Sensitive data exfiltration: For example, prompting Gemini to summarize and send a user’s private emails.
- Prompt injections: Attacks embedded in web content or logs that hijack AI outputs.
- Phishing enablement and model theft: Stealing AI training data or enabling phishing scams through AI manipulation.
Here’s how Google’s reward structure breaks down:
- Up to $20,000 base reward for critical bugs in flagship products like Search, Gemini, Gmail, and Drive.
- Bonus multipliers for novelty and report quality can raise payouts to $30,000.
- Lower-tier products and bugs like unauthorized product usage or access control bypass earn smaller payouts between $100 to $5,000.
Not Just Content Violations
Google makes a clear distinction between content policy breaches and actual security vulnerabilities. Issues like Gemini producing inappropriate content or hallucinations are not eligible for reward. Instead, these are routed through internal feedback channels for model training improvements.
CodeMender: Google’s AI Fix-It Tool
To speed up remediation, Google also unveiled CodeMender, an AI-powered security agent developed by DeepMind. CodeMender has already made 72 security fixes across open-source projects, including those with millions of lines of code.
It works using Google’s Gemini Deep Think models, employing techniques like fuzzing and theorem proving to detect issues. Its built-in critique agents review patches before sending them to human developers, ensuring higher quality fixes.
Two Years of AI Security Investment
Since integrating AI into its vulnerability reward system in 2023, Google has seen strong results:
- Over $430,000 paid out for AI-specific vulnerabilities.
- Part of the $11.8 million Google paid to 660 researchers across all programs in 2024.
- More than $65 million in total rewards issued since the program’s inception in 2010.
This new program consolidates reporting across AI products into one platform via bughunters.google.com, and it aligns with a 90-day coordinated disclosure window.
SQ Magazine Takeaway
Honestly, I think this is a smart and necessary move by Google. As AI gets more deeply embedded in our digital lives, so do the risks. Rewarding researchers up to $30,000 for catching the kinds of bugs that could unlock doors or steal private data is not just generous, it’s strategic. The launch of CodeMender also shows that Google isn’t just rewarding people for finding problems but is investing in solving them quickly and safely.