Beyond the hype: The critical role of security in responsible AI development


The pressure to ship is the greatest enemy of due diligence. In the AI gold rush, the mandate is clear: Release implementations that are as powerful as possible and as fast as possible.

But this speed-first culture is creating a dangerous vacuum where proper testing, monitoring, and security reviews are being bypassed in both development and production phases. And the risk is compounded by a shift in where release authority sits, with non-technical product managers often leading AI initiatives.

Melissa Ruzzi

Director of AI at AppOmni.

This is the evolution of the DevOps security gap. In a standard DevOps pipeline, we manage predictable code. But in MLOps, we’re managing live, evolving models that require high-privileged access to data and SaaS environments to function. The risks are no longer confined to a simple misconfiguration.

We are now dealing with autonomous activities within the pipeline that are significantly harder to rein in. For developers, the stakes have changed: We’re building far more than just back-end models.

We are deploying internet-facing non-human identities with direct access to our most sensitive data. We must stop treating MLOps security as a secondary concern and start applying the architectural rigor these systems demand.

The death of the private pipeline

Before GenAI became a must-have in every product, most AI implementations were internal. They were tucked safely behind layers of infrastructure, rarely seeing the light of the public internet. This isolation limited their exposure to data poisoning or unauthorized exfiltration.

But GenAI changed the architecture. Today, AI implementations serve the end user directly and are frequently exposed to the internet. This shift has turned the AI pipeline into a primary attack surface.

Now consider that nearly all AI development happens in the cloud, and the majority of SaaS platforms offer AI applications. Without proper security measures in place, the barrier to entry for an attacker drops considerably when an AI system is exposed to the web.

The SaaS and MCP risk multiplier

This complexity grows as we integrate MCP tools to connect AI agents to external SaaS environments. We’re increasingly seeing agentic implementations where GenAI autonomously uses these tools to move data. This can create a massive security gray area if foundational controls are absent.

Sensitive data now flows outside of your controlled environment and into external SaaS platforms via MCP. The risk is twofold. First, many MCP servers lack the native authentication controls that developers have come to expect from standard APIs.

Second, the non-deterministic nature of GenAI means you cannot always predict how the model will interact with these tools. If your AI agent has high-level permissions, such as edit or write, without rigorous monitoring, it might autonomously grant access or move data in ways that violate every security protocol in your stack.
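One way to contain this risk is to gate every tool call an agent makes behind an explicit allow-list, so write-level actions can never fire implicitly. Below is a minimal sketch of that idea; the tool and action names are hypothetical, not any specific MCP server's API.

```python
# Minimal sketch: deny-by-default authorization for agent tool calls.
# Read-only actions pass; write-level actions require an explicit grant.
# Tool and action names are illustrative.

READ_ONLY_ACTIONS = {"search", "read", "list"}

# Write-level actions the agent may perform, granted per tool.
WRITE_ALLOWLIST = {
    "crm_connector": {"update_note"},  # e.g. allowed to append notes only
}

def authorize_tool_call(tool: str, action: str) -> bool:
    """Return True only if the action is read-only or explicitly allowed."""
    if action in READ_ONLY_ACTIONS:
        return True
    return action in WRITE_ALLOWLIST.get(tool, set())
```

A denied call should be logged and surfaced for human review, not silently retried, since the same prompt may produce a different tool call on the next run.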

The myth of inherited security

There is a common misconception that building on top of major cloud providers solves the security problem. While it’s true these providers offer comprehensive MLOps tools, the responsibility for using them correctly lies entirely with the developer and security team, and their collaborators in the data engineering and DevOps teams.

Using a powerful MLOps platform doesn’t mean your pipeline is secure if your data flow is unmonitored or your access controls are overly permissive. You must treat every AI component not just as code, but as a digital identity.

And this identity requires the same zero trust principles you would apply to any human user or external SaaS application.
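In practice, that means issuing each AI component a short-lived, narrowly scoped identity and verifying scope and expiry on every access, just as you would for a human session. This sketch assumes no particular identity platform; all names are illustrative.

```python
# Minimal sketch: an AI component as a first-class identity with
# explicit scopes and an expiry, checked on every access.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AgentIdentity:
    name: str
    scopes: frozenset
    expires_at: datetime

def grant(name: str, scopes: set, ttl_minutes: int = 15) -> AgentIdentity:
    """Issue a short-lived, narrowly scoped identity for one task."""
    return AgentIdentity(
        name=name,
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

def authorize(identity: AgentIdentity, scope: str) -> bool:
    """Deny by default: require an unexpired identity holding the scope."""
    if datetime.now(timezone.utc) >= identity.expires_at:
        return False
    return scope in identity.scopes
```

The design choice that matters is the default: an agent holding `tickets:read` gets nothing else, and an expired grant fails closed rather than open.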

Can AI secure AI?

It’s tempting to use LLMs to automate the complexities of security. Asking an LLM to perform a code review, draft a monitoring plan, or conduct a security assessment can be a productive starting point. However, these outcomes should never be treated as a source of truth.

In MLOps, human expertise remains the only reliable oversight. LLMs are excellent at augmenting the work of an expert, but they cannot replace the nuanced understanding of a human security lead.

Use AI to surface potential issues, but ensure a human expert conducts deeper dives into those issues and guides the final production plan.

Securing your MLOps pipeline

You can still minimize security risks without killing your deployment velocity:

1. Commit to a full MLOps lifecycle
Security must be a baseline requirement, not a final hurdle. Incorporate rigorous testing and monitoring during both the development and production phases. Conduct a comprehensive security review of the entire pipeline before it goes into production to identify vulnerabilities before they’re exposed to the internet.

2. Perform a comprehensive data flow analysis
You must understand where your data originates, how it’s accessed, where it’s altered and where it’s being saved. Map every intermediate step where data might be cached or processed by third-party services to ensure no sensitive information is leaking through the cracks, and understand where real customer data is used versus synthetic data.

3. Apply zero trust to AI identities
Use least privilege access when defining read, edit, and write permissions for your AI agents. If your implementation involves external MCP tools or SaaS integrations, perform the same data access and authentication reviews on those external points as you do on your own internal systems.

4. Audit your tooling supply chain and SBOMs
Your pipeline is only as secure as the libraries it uses. Regularly review your dependencies for known vulnerabilities that could allow for server hijacking or the loading of malicious datasets. Tracking SBOMs becomes even more important as more open source and vendor ML libraries enter the stack.

5. Monitor for non-deterministic risk
Because GenAI can produce different results from the same input, traditional testing is insufficient. You need monitoring in production to catch anomalous behavior or unintended data exposure before it escalates.
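The last step can be as simple as a production-side scanner that checks every model response for sensitive-data patterns before it leaves the pipeline. The sketch below uses a few illustrative regex rules; a real DLP policy would be far broader.

```python
# Minimal sketch: scan GenAI outputs for sensitive patterns and
# withhold any response that trips a rule. Patterns are illustrative.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_output(text: str) -> list:
    """Return the names of any sensitive patterns found in a model output."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def release(text: str) -> str:
    """Block-and-flag policy: hold any response that matches a rule."""
    hits = scan_output(text)
    if hits:
        # In production: quarantine the response, alert, and log for review.
        return f"[withheld: matched {', '.join(hits)}]"
    return text
```

Because the same prompt can yield a different answer on each run, this check belongs in the serving path itself, not only in pre-release testing.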

The ultimate force multiplier: Secure innovation

The future of development is inseparable from AI, but the novelty of these tools is not an excuse for lax security. We’re moving toward an era where AI agents will be major, high-volume users within our SaaS environments.

If we don’t govern these identities with the same rigor we apply to our human employees, we’re not just building innovative products, we’re building liabilities.

Security must move from being a gatekeeper at the end of the pipeline to being the foundation upon which the pipeline is built. Because in the race to build the most powerful AI, the winners will not just be the fastest to market, but those that earn the most trust.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
