AI agents can only be trusted as Junior Engineers
AI agents require strict governance, least privilege, and human oversight
The new generation of agentic AI tools is rewriting how software gets built and managed. Autonomous coding assistants, workflow agents, and AI-driven DevOps systems are being embedded across tech stacks at unprecedented speed.
Yet, as the pace of adoption accelerates, so too does the risk when oversight lags behind. AI code governance is no longer a compliance afterthought; it’s the steering wheel that keeps AI-driven innovation on the road.
Senior Director at Software Improvement Group.
This isn’t theoretical. Reuters reported that organization-wide use of AI in professional services nearly doubled to 40% in 2026. IDC similarly predicts that agentic automation will enhance capabilities in over 40% of enterprise applications.
These figures reflect a market transitioning from tentative trials to full operational reliance. The temptation to prioritize speed over safety will only grow, but it is governance that ensures velocity doesn’t become volatility.
The December 2025 AWS incident serves as a stark example. Reports suggest that engineers used an internal AI coding agent, Kiro, but misconfigured access controls granted the agent broader permissions than intended, leading to around 13 hours of downtime.
Amazon later clarified that the primary cause was user error, a human misconfiguration rather than a technical failure within Kiro, and that the tool usually requires dual human approval before acting. But the takeaway is clear:
When you give AI tools the same permissions as senior engineers but none of the judgment, small misconfigurations can become serious incidents very quickly.
This instance isn’t a warning about AI’s dangers so much as a lesson in responsibility. For engineering leaders, AI agents should be seen as extremely fast junior engineers, brilliant at pattern‑matching and execution, but lacking judgment, context, and restraint.
Governance systems are what ensure these digital juniors contribute safely and productively.
AI should be given the least access
The first rule of safe deployment is least privilege. In the realm of AI agents, unlimited potential should never translate to unlimited access. They should have restricted access to data and environments, no more than they need to fulfil a single defined task.
Like a graduate software engineer, they must operate within a sandbox. This isolation ensures that the agent can iterate, hallucinate, or fail without bringing down the system. Production access is earned, not given, and only granted after outputs survive a gauntlet of tests, scans and human reviews.
If a human junior isn’t permitted to push code directly to a live environment without a senior's sign-off, an AI should be held to an even more rigorous standard. Bypassing this review process invites accidental privilege escalation, a quiet killer of code security.
By enforcing these boundaries, you prevent a minor logic error from cascading into a critical misconfiguration. In the age of autonomous agents, rigorous oversight is essential to keeping systems safe.
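In practice, least privilege for an agent means binding each run to an explicit, task-scoped allowlist and rejecting everything else by default. The sketch below is a minimal illustration of that idea; the class and field names (`AgentScope`, `allowed_paths`, `allowed_envs`) are hypothetical, not part of any real agent framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    """Permissions granted to a single agent run, scoped to one task."""
    task_id: str
    allowed_paths: frozenset  # files the agent may modify
    allowed_envs: frozenset   # environments it may touch (never production)

class ScopeViolation(Exception):
    """Raised when an agent attempts an action outside its grant."""

def check_write(scope: AgentScope, path: str, env: str) -> None:
    """Deny by default: reject any write outside the declared scope."""
    if env not in scope.allowed_envs:
        raise ScopeViolation(f"{scope.task_id}: env '{env}' not permitted")
    if path not in scope.allowed_paths:
        raise ScopeViolation(f"{scope.task_id}: path '{path}' not permitted")

# An agent scoped to one ticket, one file, sandbox only
scope = AgentScope(
    task_id="TICKET-123",
    allowed_paths=frozenset({"src/billing/invoice.py"}),
    allowed_envs=frozenset({"sandbox"}),
)
check_write(scope, "src/billing/invoice.py", "sandbox")  # permitted
```

The key design choice is that the scope is immutable (`frozen=True`) and created per task, so an agent cannot widen its own permissions mid-run; promotion beyond the sandbox happens outside the agent's control.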
Oversight is essential for AI-generated code
AI agents, while powerful, have inherent limitations that warrant treating their contributions with the same caution you would apply to a Junior Engineer's work.
Their operational model relies heavily on pattern-based association, which means they lack the true system and architectural understanding of a seasoned human developer.
This can lead to unexpected mistakes, or to code that is technically functional but introduces unforeseen complexity or security vulnerabilities, because the agent lacks the full context of the system's long-term health and design philosophy.
The degree of oversight should scale with autonomy. The more an agent can act without human initiation, the tighter its audit and traceability mechanisms must become.
In mature DevOps settings, this means embedding AI logging, version control, and rollback functionality directly into the deployment pipeline, ensuring every AI action can be explained or reversed.
This disciplined approach ensures that while AI agents enhance speed and efficiency, they do not compromise the integrity, security, or stability of the production environment, effectively constraining them to a Junior Engineer role.
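One way to make "every AI action can be explained or reversed" concrete is an append-only audit log that pairs each agent action with the version-control reference that undoes it. This is a minimal sketch under assumed names (`AgentAuditLog`, `revert_ref`); a real pipeline would persist entries and integrate with its own VCS and deployment tooling.

```python
import time
import uuid

class AgentAuditLog:
    """Append-only record of agent actions, each paired with a revert ref."""

    def __init__(self):
        self.entries = []

    def record(self, agent: str, action: str, target: str, revert_ref: str) -> str:
        """Log one action; revert_ref is the VCS ref that reverses it."""
        entry = {
            "id": str(uuid.uuid4()),
            "ts": time.time(),
            "agent": agent,
            "action": action,
            "target": target,
            "revert_ref": revert_ref,
        }
        self.entries.append(entry)
        return entry["id"]

    def rollback_plan(self, entry_id: str) -> str:
        """Return the command that reverses a previously logged action."""
        entry = next(e for e in self.entries if e["id"] == entry_id)
        return f"git revert {entry['revert_ref']}"

log = AgentAuditLog()
eid = log.record("coding-agent", "merge", "service/api", "abc1234")
print(log.rollback_plan(eid))  # → git revert abc1234
```

Because the log is written at the moment of action rather than reconstructed afterwards, an incident review can trace exactly which agent did what, when, and how to back it out.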
Solving the visibility gap
Once multiple teams start using agents, you quickly lose track of where AI-generated code has landed and what it’s doing. You need portfolio-level tooling to see where AI code is running, how secure and maintainable it is, and where the riskiest changes are concentrated.
Without unified oversight, leaders may not know where AI-generated code is deployed, how it interacts with other systems, or whether similar agents are repeating the same flawed process across teams.
Central visibility is essential. Leaders need a current, portfolio-wide view of where AI-generated code is used, which systems carry the most risk, and what to fix first.
Modern governance frameworks recommend mapping not just what AI writes or executes, but where and why, allowing early identification of unsafe patterns before they manifest in production.
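A portfolio-wide view can start as something very simple: tag each AI-generated change with a few risk signals, then aggregate per repository to see where attention should go first. The sketch below is illustrative only; the fields and weights (`has_security_findings`, `human_reviewed`, the +10/+5 scores) are hypothetical placeholders for whatever signals your own scanners and review pipeline produce.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AIChange:
    """One AI-generated change with a few coarse risk signals."""
    repo: str
    files_touched: int
    has_security_findings: bool
    human_reviewed: bool

def risk_score(c: AIChange) -> int:
    """Crude weighting: unreviewed and flagged changes rank highest."""
    score = c.files_touched
    if c.has_security_findings:
        score += 10  # assumed weight for scanner findings
    if not c.human_reviewed:
        score += 5   # assumed weight for missing human review
    return score

def portfolio_view(changes):
    """Rank repositories by accumulated AI-change risk, highest first."""
    totals = Counter()
    for c in changes:
        totals[c.repo] += risk_score(c)
    return totals.most_common()

changes = [
    AIChange("payments", 3, True, False),
    AIChange("frontend", 5, False, True),
    AIChange("payments", 2, False, True),
]
print(portfolio_view(changes))  # → [('payments', 20), ('frontend', 5)]
```

Even a rough ranking like this answers the leadership questions above: where AI-generated code is concentrated, which systems carry the most risk, and what to fix first.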
Governance is the handlebar, not the brakes
The AWS case showed what happens when automation gains authority without equivalent accountability. The next generation of organizations won’t avoid AI; they’ll pair autonomy with oversight, building clear permission boundaries, enforcing review pipelines, and maintaining cross-organizational visibility.
AI code governance does not slow AI innovation down. It gives organizations the control to adopt AI with confidence, focus on the right risks first, and go faster—responsibly.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro