When it comes to AI tools like ChatGPT, the real challenge is not just what they can do, but what they should say.
Concerns about harmful content are everywhere, but the harder question is who decides what “harmful” actually means, and who gets to see it.
The limits of safety filters
High-profile cases have shown what happens when AI tools get it wrong. Reports of ChatGPT coaxing teenagers to take their own lives have forced major providers to clamp down on topics like suicide and self-harm. The result has been sweeping guardrails and parental controls.
But in practice, these guardrails have often taken a clumsy one-size-fits-all approach. The same rules apply to every user, regardless of age, expertise, or context. That is how we end up in frustrating situations where adults are treated like children and children are treated like adults.
Enterprises need nuance
Inside the workplace, this problem gets even messier. A marketing intern, a compliance officer, and a chief financial officer may all be asking the same AI system for help. Yet the sensitivity of what they should see can vary dramatically.
A compliance officer might need to reference regulatory guidance on insider trading. That same information could be dangerous in the hands of a junior employee.
A developer may need deep access to IT documentation, but a business user should not be allowed to expose system secrets with a casual prompt.
Enterprises cannot rely on consumer-style content filters that treat everyone the same. They need AI systems that recognize roles, responsibilities, and legal boundaries.
Enter: Persona-Based Access Controls
Instead of blanket restrictions that block an entire topic for everyone, enterprises can apply persona-based access controls (PBAC) that reflect what knowledge different users should be allowed to see.
An AI system that adheres to PBAC would tailor responses based on who is asking and what they are authorized to know. The same prompt might yield different answers depending on a user’s department, clearance level, or current projects.
This may sound like role-based access control (RBAC) in cybersecurity, but the two answer different questions. RBAC asks: which systems and files can this role access? PBAC asks: what knowledge should this persona see in this specific context, and what should be filtered out?
For example, with RBAC, both an HR manager and an HR data analyst might have access to the same employee record system based on their roles. If each asks an AI assistant the same prompt, “Summarize absentee trends for the past six months,” an AI system that relies solely on RBAC controls would return the same response to both users.
That might include sensitive details such as specific employee absences, reasons for leave, or even embedded medical notes if the underlying records are not properly sanitized.
While both roles technically have system access, the output could unexpectedly expose Protected Health Information (PHI) and create potential compliance risks.
Different personas
With PBAC, the AI assistant would tailor its response to each user’s persona:
For the HR manager, whose persona includes responsibility not just for employee management but also for well-being, the AI assistant provides an anonymized but detailed summary, such as: “Region A saw a 17% increase in medical leave, primarily among customer-facing roles, with stress-related absences being the most common category.”
For the data analyst, whose persona centers on performance metrics, the AI assistant generates a higher-level trend report without any medical or personal context: “Absenteeism increased by 7% quarter over quarter, with the largest increase occurring in the operations department.”
Both users asked the same question, but PBAC ensures that each receives only the insights appropriate to their role and need-to-know context.
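As a rough illustration, here is a minimal Python sketch of that idea. The personas, policy fields, and sample records are hypothetical, and a real deployment would pull policies from an identity provider or governance layer rather than a hard-coded table.

```python
# Minimal PBAC sketch: the same question, shaped differently per persona.
# Personas, policy fields, and records below are hypothetical examples.

RECORDS = [
    {"dept": "operations", "region": "A", "days": 4, "category": "stress-related"},
    {"dept": "operations", "region": "A", "days": 3, "category": "stress-related"},
    {"dept": "sales",      "region": "B", "days": 2, "category": "injury"},
]

POLICIES = {
    # The HR manager's persona covers well-being, so anonymized leave
    # categories are visible; the analyst persona sees aggregates only.
    "hr_manager":      {"group_by": "region", "show_categories": True},
    "hr_data_analyst": {"group_by": "dept",   "show_categories": False},
}

def absentee_summary(persona_id: str) -> str:
    """Summarize absentee records, scoped to the requester's persona."""
    policy = POLICIES[persona_id]
    totals: dict[str, int] = {}
    for rec in RECORDS:
        key = rec[policy["group_by"]]
        if policy["show_categories"]:
            key = f"{key} / {rec['category']}"
        totals[key] = totals.get(key, 0) + rec["days"]
    return "; ".join(f"{k}: {v} days" for k, v in sorted(totals.items()))

print(absentee_summary("hr_manager"))       # region-level detail with leave categories
print(absentee_summary("hr_data_analyst"))  # department-level counts, no medical context
```

The point of the sketch is the policy lookup, not the summarization logic: the persona, not the prompt, determines how much context comes back.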
Several internet security vendors are already testing this approach. They sit between large language models and the end user, enforcing policies that map to the company’s compliance, privacy, and security requirements.
The system becomes a content firewall, filtering out not only toxicity, but also topics that may pose undue business risk based on each user’s persona.
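In practice, that firewall is a policy check wrapped around the model call. The sketch below is illustrative only; call_model, redact, and the topic labels are placeholders for this example, not any vendor’s actual API.

```python
# Illustrative persona-aware gateway sitting between users and an LLM.
# call_model() and redact() are placeholders, not a real product interface.

BLOCKED_TOPICS = {
    "junior_employee":    {"insider-trading-guidance", "system-credentials"},
    "business_user":      {"system-credentials"},
    "compliance_officer": set(),
}

def call_model(prompt: str) -> str:
    # Placeholder for the actual LLM request.
    return f"[model answer to: {prompt}]"

def redact(answer: str, persona_id: str) -> str:
    # Placeholder post-filter; a real gateway would strip restricted details here.
    return answer

def guarded_prompt(persona_id: str, prompt: str, topic: str) -> str:
    if topic in BLOCKED_TOPICS.get(persona_id, set()):
        # Refused before the model is ever queried; the decision should also be logged.
        return "This topic is outside your need-to-know scope."
    return redact(call_model(prompt), persona_id)

print(guarded_prompt("junior_employee",    "How do insider trading rules apply here?", "insider-trading-guidance"))
print(guarded_prompt("compliance_officer", "How do insider trading rules apply here?", "insider-trading-guidance"))
```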
Beyond content moderation, into governance
PBAC sets the stage for better AI governance. When a model’s output is filtered or blocked, the system can document why and provide an audit trail that shows what content was restricted, under which policy, and for whom.
This auditability is crucial as AI regulations such as the EU AI Act and the NIST AI Risk Management Framework push enterprises toward traceable and transparent governance.
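A minimal version of such an audit record might look like the following sketch; the field names are assumptions chosen for illustration, not requirements drawn from the EU AI Act or the NIST framework.

```python
# Illustrative audit record for a filtered or blocked response.
# Field names are assumptions for the sketch, not regulatory requirements.

import json
from datetime import datetime, timezone

def audit_event(persona_id: str, prompt: str, action: str, policy: str) -> str:
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "persona": persona_id,
        "prompt_excerpt": prompt[:80],
        "action": action,   # e.g. "allowed", "redacted", "blocked"
        "policy": policy,   # which rule triggered, for traceability
    })

print(audit_event("junior_employee", "Summarize insider trading guidance", "blocked", "need-to-know/finance"))
```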
To manage the business risks introduced by AI adoption, enterprises must move away from the old blunt instruments of content moderation toward something more nuanced, more context-aware, and more aligned with how they already think about access control.
In other words, the future of AI governance will look a lot less like parental controls, and a lot more like cybersecurity.
The bottom line
As AI becomes embedded in business workflows, consumer-grade guardrails are no longer sufficient. Overly restrictive, one-size-fits-all controls stifle innovation and frustrate legitimate work, while loose or non-existent safeguards invite regulatory and reputational risk.
Enterprises need an approach that understands who users are, what they are allowed to see, and what the consequences might be if things go wrong.
Persona-based access controls are the next step. They bring the nuance and context that enterprise AI demands, ensuring that these powerful tools remain safe, useful, and aligned with business goals.
Defining and documenting this alignment is the next major frontier for AI governance, where safety is guided by structured and well-defined need-to-know policy, not arbitrary censorship.