Google's AI-powered Antigravity IDE already has some worrying security issues - here's what was found
Agents can apparently read sensitive files and generate content without strict enforcement
- Antigravity IDE allows agents to execute commands automatically under default settings
- Prompt injection attacks can trigger unwanted code execution within the IDE
- Data exfiltration occurs through Markdown, tool invocations, or hidden instructions
Google’s new Antigravity IDE launched with an AI-first design, yet it already shows problems that raise concerns about basic security expectations, experts have warned.
Researchers at PromptArmor found the system allows its coding agent to execute commands automatically when certain default settings are enabled, and this creates openings for unintended behaviour.
When untrusted input appears inside source files or other processed content, the agent can be manipulated to run commands that the user never intended.
Risks linked to data access and exfiltration
The product permits the agent to execute tasks through the terminal, and although there are safeguards, some gaps remain in how those checks work.
These gaps create space for prompt injection attacks that can lead to unwanted code execution when the agent follows hidden or hostile input.
The same weakness applies to the way Antigravity handles file access.
The agent has the ability to read and generate content, and this includes files that may hold credentials or sensitive project material.
Sign up to the TechRadar Pro newsletter to get all the top news, opinion, features and guidance your business needs to succeed!
Data exfiltration becomes possible when malicious instructions are hidden inside Markdown, tool invocations, or other text formats.
Attackers can exploit these channels to steer the agent toward leaking internal files into attacker‑controlled locations.
Reports reference logs containing cloud credentials and private code already being gathered in successful demonstrations, showing the severity of these gaps.
Google has acknowledged these issues, and warns users during onboarding, yet such warnings do not compensate for the possibility that agents may run without supervision.
Antigravity encourages users to accept recommended settings that allow the agent to operate with minimal oversight.
The configuration places decisions about human review in the hands of the system, including when terminal commands require approval.
Users working with multiple agents through the Agent Manager interface may not catch malicious behaviour before actions are completed.
This design assumes continuous user attention even though the interface explicitly promotes background operation.
As a result, sensitive tasks may run unchecked, and simple visual warnings do little to change the underlying exposure.
These choices undermine expectations usually associated with a modern firewall or similar safeguard.
Despite restrictions, credential leakages can occur. The IDE is designed to prevent direct access to files listed in .gitignore, including .env files that store sensitive variables.
However, the agent can bypass this layer by using terminal commands to print file contents, which effectively sidesteps the policy.
After collecting the data, the agent encodes the credentials, appends them to a monitored domain, and activates a browser subagent to complete the exfiltration.
The process happens quickly and is rarely visible unless the user is actively watching the agent’s actions, which is unlikely when multiple tasks run in parallel.
These issues illustrate the risks created when AI tools are granted broad autonomy without corresponding structural safeguards.
The design aims for convenience, but the current configuration gives attackers substantial leverage long before stronger defences are implemented.
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too.

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking.
You must confirm your public display name before commenting
Please logout and then login again, you will then be prompted to enter your display name.