When I joined Smartsheet, one of my first priorities was understanding where AI was actually operating across the business.
What I found was less a deliberate strategy than an honest reflection of how fast things had moved: AI tools embedded in workflows, some vendor-approved, some not, adopted by smart people solving real problems faster than policy could keep up with.
Chief Information and Security Officer, Smartsheet.
When I went back to some of those vendors to understand what we were actually dealing with — what data the model had accessed, what actions it had taken — the answers were thin. The audit infrastructure simply wasn't there.
That combination of tools already embedded in our environment with no traceable record of what they'd done is what sharpened my thinking. The risk wasn't the tools themselves; it was the invisibility.
The instinct for most security leaders is to ask: "How do we control it?" But control implies restriction, and as many of us have learned, restriction doesn't change behavior. It just drives it underground, where you have even less visibility. The question that actually matters is simpler but harder to achieve: "Can we trace it?"
The most helpful model I've adopted for answering that question is to treat every AI agent as a new kind of "employee". Each should have a defined role, a scope of authority, and a chain of accountability.
You wouldn't let a new hire make consequential decisions without oversight in their first weeks. That same logic applies to an AI system operating inside your organization's workflows—and traceability is what makes that oversight real.
From the rear-view mirror to real-time
There was a time when "audit" meant conducting a periodic look back at what happened. That changed with digital transformation. As technology-driven actions became more common, so did logging and observability platforms.
Audit became continuous, with data flowing in real time, providing a security layer that flags anomalies as they occur. Today, audit isn’t a post-mortem, but a real-time operational discipline.
With the rise of agentic AI, that means logging which data sources an agent queried, which actions it took autonomously versus escalated for approval, and who sat in that approval chain, all captured in real time rather than reconstructed after the fact.
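As a sketch of what one such log entry might contain, here is a minimal record structure. The field names are illustrative, not a standard schema; the point is that data sources, autonomy, and the approval chain are first-class fields, not something to be inferred later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentAuditEvent:
    """One logged action taken by an AI agent (illustrative schema)."""
    agent_id: str                   # which "employee" acted
    action: str                     # what it did
    data_sources: list[str]         # what it read before deciding
    autonomous: bool                # acted alone, or escalated for approval?
    approver: Optional[str] = None  # human in the approval chain, if any
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# An escalated action names its approver; an autonomous one
# records explicitly that no human signed off.
event = AgentAuditEvent(
    agent_id="scheduling-agent-01",
    action="reassign_task",
    data_sources=["projects_db", "staff_calendar"],
    autonomous=False,
    approver="j.doe@example.com",
)
```

With records shaped like this, "Who approved this, how, when, and why?" becomes a query, not an investigation.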
Here's why this matters at the board level: when an AI-assisted process produces a bad outcome—like a risk flagged incorrectly, a resource assignment triggered without manager approval, or a status update pushed out before anyone signed off—the first question you’ll face from leadership, legal, or a regulator is: "Who approved this, how, when, and why?"
If you can't answer those questions, you're facing a governance crisis on top of a process failure.
Audit as a foundation, not a checkbox
To solve this, security leaders need to build audit into their AI strategy from the start. Not as a compliance exercise, but as the foundational layer that makes agentic AI governable.
What I look for when evaluating any AI capability, whether built internally or sourced from a vendor, is a traceable chain: what data informed the recommendation, whether human sign-off was required before an action was taken, and who, if anyone, reviewed it. If a vendor can’t show me that chain, the capability isn’t enterprise-ready, regardless of how impressive the outputs are.
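That bar can be made concrete as an automated check. The sketch below, assuming audit events arrive as simple dictionaries with illustrative field names, rejects any event whose chain is incomplete, including an escalated action that never names its human approver:

```python
# Fields every audit event must carry for the chain to be traceable
# (illustrative names, not a vendor standard).
REQUIRED_FIELDS = {"agent_id", "action", "data_sources", "autonomous"}

def is_traceable(event: dict) -> bool:
    """Return True only if the event shows what data informed the action
    and who, if anyone, signed off on it."""
    if not REQUIRED_FIELDS.issubset(event):
        return False
    # An action escalated for approval must name its human approver.
    if not event["autonomous"] and not event.get("approver"):
        return False
    return True

# Complete chain: passes.
ok = is_traceable({"agent_id": "a1", "action": "update_status",
                   "data_sources": ["crm"], "autonomous": True})

# Escalated action with no named approver: fails, however good the output.
bad = is_traceable({"agent_id": "a1", "action": "update_status",
                    "data_sources": ["crm"], "autonomous": False})
```

A vendor whose events fail a check like this cannot answer the questions that matter, which is the point of treating the chain as an acceptance criterion.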
This isn’t about slowing teams down. It’s about giving people the confidence to act on AI outputs rather than second-guess them. When employees can see how an AI recommendation was generated and know that appropriate oversight is in place, they can begin to own decisions.
That’s not a compliance outcome; that’s a productivity outcome. Audit stops being a checkbox and becomes the mechanism that lets teams scale AI confidently while maintaining human accountability.
Your new AI employees
Returning to that model of AI as an employee: the framing changes what questions you ask. Instead of "How do we prevent AI from doing harm?" the question becomes: "What would we need to know to trust this AI's judgment the way we trust a capable team member?"
The answer almost always comes back to the same things: clear ownership, defined decision rights, a record of actions taken, and a mechanism for human override. Those aren’t novel security concepts. They’re just being applied to a new kind of “employee”.
As security leaders, we cannot solve every AI risk overnight, but we can establish a foundation that moves beyond high-level principles into operational reality:
1. Map where AI is actually operating, including integrations surfaced through OAuth tokens and API keys in your systems, because you cannot govern what you cannot see.
2. Be explicit about which decisions require human sign-off and which don’t, and commit to revisiting those boundaries every six months as the technology and its organizational impact evolve. What feels low-risk today may look very different when an agent is running it at scale.
3. Hold your vendors accountable by investing in like-minded organizations that have committed to full AI auditability and traceability, and integrate those controls with your existing monitoring platforms as they're introduced.
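The first step, mapping where AI is actually operating, can start with data you already have. The sketch below filters a hypothetical export of OAuth grants from an identity provider; the grant records, app names, and keyword list are all invented for illustration, and a name match is only a crude first pass, not a substitute for vendor review.

```python
# Hypothetical OAuth grant inventory exported from an identity provider.
grants = [
    {"app": "meeting-notes-ai", "scopes": ["calendar.read", "drive.read"], "owner": "sales"},
    {"app": "expense-tool", "scopes": ["finance.read"], "owner": "finance"},
    {"app": "code-assistant", "scopes": ["repo.write"], "owner": "eng"},
]

# Crude signal that an integration is an AI tool.
AI_KEYWORDS = ("ai", "assistant", "copilot", "agent")

def ai_integrations(grants: list[dict]) -> list[dict]:
    """Flag grants whose app name suggests an AI tool."""
    return [g for g in grants
            if any(k in g["app"].lower() for k in AI_KEYWORDS)]

flagged = ai_integrations(grants)
for g in flagged:
    print(g["app"], "->", ", ".join(g["scopes"]))
# prints:
# meeting-notes-ai -> calendar.read, drive.read
# code-assistant -> repo.write
```

Even a rough inventory like this surfaces which teams granted which data scopes to which tools, which is the visibility the first step demands.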
When AI is traceable, clearly owned, and auditable, governance stops being a bottleneck and becomes a competitive advantage. The organizations that figure this out will move faster because their people have the confidence to act on AI outputs and the tools to course-correct when needed. As the old adage goes, "trust, but verify."
The standards landscape is beginning to catch up. NIST's AI Risk Management Framework, the EU AI Act's requirements around high-risk AI systems and emerging agentic identity protocols are all pointing in the same direction: auditability is becoming a baseline expectation, not a differentiator. Security leaders who build for it now won't just be compliant—they'll be ahead.
Which brings us back to the question you should be asking, if you're not already: can you trace it?
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit