'What if the AI agent you just deployed was secretly working against you?': Vertex AI 'double agent' flaw exposes customer data and Google's internal code


  • Unit 42 reveals misconfigured Vertex AI agents in Google Cloud can be hijacked into “double agents”
  • Excessive default permissions let attackers pivot, access Cloud Storage, and expose proprietary Google code
  • Google updated documentation, urging customers to use Bring Your Own Service Account (BYOSA) instead of defaults

Cloud misconfigurations are one of the biggest causes of data leaks, and now there is a new form of misconfiguration to worry about: AI agents.

Unit 42, Palo Alto Networks’ cybersecurity arm, has revealed new analysis showing how an AI agent deployed in the Google Cloud Platform (GCP) Vertex AI Agent Engine can be turned into a “double agent”: one that does nefarious work while appearing to serve its intended purpose.

Vertex AI is the main AI/ML platform from Google Cloud, where developers can build and deploy machine learning models and generative AI apps. The Agent Engine is what turns models into autonomous agents.


A blueprint for finding flaws

Unit 42 warns, however, that users who are not careful with permissions can leave their agents vulnerable to takeover.

“By exploiting a significant risk in default permission scoping and compromising a single service agent, we reveal how the Vertex AI permission model can be misused, leading to unintended consequences,” the report states.

The researchers first deployed a custom AI agent using Vertex AI’s Agent Development Kit (ADK) in a controlled environment, and then discovered that the agent’s default service agent account (known as a P4SA) had excessive permissions.

Then, using a custom-built malicious tool, they were able to extract service agent credentials from the metadata service, and then use those to pivot into the consumer project. This gave them unrestricted read access to all Cloud Storage data, as well as the producer (Google-managed) environment.
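The metadata step the researchers describe relies on a standard GCP mechanism: any process running inside a workload can ask the instance metadata server for a short-lived OAuth token belonging to the attached service account, with only a special header as a guard. The sketch below is not Unit 42’s actual tool, just a minimal illustration of that standard endpoint:

```python
import json
import urllib.request

# Standard GCE/Vertex metadata endpoint that issues short-lived OAuth tokens
# for the workload's attached service account. Any code running inside the
# workload can call it; the only check is the "Metadata-Flavor" header.
METADATA_TOKEN_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_token_request() -> urllib.request.Request:
    """Construct the request any in-workload tool would send for a token."""
    return urllib.request.Request(
        METADATA_TOKEN_URL, headers={"Metadata-Flavor": "Google"}
    )

def fetch_default_token() -> dict:
    """Return the token payload (access_token, expires_in, token_type).

    Only succeeds when actually running on GCP, where the metadata
    server is reachable.
    """
    with urllib.request.urlopen(build_token_request()) as resp:
        return json.load(resp)
```

If the service account behind that token is over-privileged, as the default one was here, every tool the agent runs inherits that reach, which is exactly the pivot the researchers exploited.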

This exposed restricted Artifact Registry repositories, allowing the researchers to download private container images, enumerate internal resources, inspect artifacts, and reveal proprietary source code and internal infrastructure details.

"Gaining access to this proprietary code not only exposes Google's intellectual property but also provides an attacker with a blueprint to find further vulnerabilities," the researchers explained in the paper.

In response, Google updated its documentation to better explain how Vertex AI uses resources, accounts, and agents. The company is now recommending customers use Bring Your Own Service Account (BYOSA) to replace the default ones.
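In practice, BYOSA means creating a dedicated service account with only the roles the agent genuinely needs, instead of inheriting the broad defaults. A hedged sketch using standard gcloud commands (project, account, and bucket names here are hypothetical placeholders):

```shell
# 1. Create a dedicated service account for the agent instead of
#    relying on the default, broadly scoped one.
gcloud iam service-accounts create my-agent-sa \
    --project=my-project \
    --display-name="Vertex AI agent (least privilege)"

# 2. Grant only the access the agent actually needs -- e.g. read-only
#    access to one bucket, rather than project-wide Cloud Storage reach.
gcloud storage buckets add-iam-policy-binding gs://my-agent-bucket \
    --member="serviceAccount:my-agent-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```

With a scoped account like this, a hijacked agent tool that steals a metadata token gets only the narrow permissions above, not unrestricted read access across the project.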





Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.
