The blueprint architecture for securing the AI data center

Building the AI infrastructure is only part of the puzzle. Enterprises need to protect it. (Image credit: Getty Images)

As enterprises turn traditional data centers into AI factories powered by LLMs, they’re focused on unlocking new revenue streams, competitive differentiation, and operational efficiencies. But they’re also exposing themselves to unprecedented risk.

Enterprises are no longer just leasing AI. They are producing it. According to Markets and Markets, the global AI data center market is expected to grow from ~$236B in 2025 to ~$934B by 2030 at a CAGR of 31.6%, with enterprises being the fastest-growing end-user segment.

Aviv Abramovich

VP of Product Management at Check Point.

Why are organizations building their own AI?

The main drivers leading enterprises to build their own on-premises AI data centers are the need to meet compliance and sovereign-AI mandates, avoid prohibitive cloud provider costs, and reduce the risk to their data and intellectual property.


For heavily regulated industries, such as financial services and healthcare, model training and inference demand clear audit trails and explainability, which makes keeping those workloads on-premises a necessity. And as AI workloads scale, ownership becomes more financially viable: the cumulative cost of cloud GPU compute often exceeds the investment in dedicated infrastructure.
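The cost argument can be illustrated with a rough break-even sketch. Every figure below (rental rate, cluster size, utilization, capex, opex) is an invented assumption for illustration, not market data:

```python
# Hedged illustration with invented numbers: the month at which cumulative
# cloud GPU rental first exceeds the cumulative cost of owned infrastructure.
CLOUD_COST_PER_GPU_HOUR = 3.00      # assumed rental rate, USD
GPUS = 64                           # assumed cluster size
UTILIZATION_HOURS_PER_MONTH = 500   # assumed hours per GPU per month
OWNED_CAPEX = 2_000_000             # assumed upfront cost of an equivalent cluster
OWNED_OPEX_PER_MONTH = 40_000       # assumed power, cooling, and operations

def months_to_break_even() -> int:
    """Count months until cumulative cloud spend exceeds cumulative owned cost."""
    month = 0
    cloud_total = 0.0
    owned_total = float(OWNED_CAPEX)
    while cloud_total <= owned_total:
        month += 1
        cloud_total += CLOUD_COST_PER_GPU_HOUR * GPUS * UTILIZATION_HOURS_PER_MONTH
        owned_total += OWNED_OPEX_PER_MONTH
    return month
```

Under these invented numbers the crossover lands around the three-year mark; the point of the sketch is only that sustained, high-utilization GPU workloads eventually favor ownership.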

New AI data centers, new needs

Organizations developing their own AI data centers contend with multiple new challenges. Whether their “AI factories” are designed for internal consumption, public use, or as a service they sell, there are several steps of the blueprint to follow.

A starting point is to transform on-premises data centers into those that can support AI training and inference through purpose-built GPU clusters, distributed inference services, and high-throughput networking.

These AI data centers need to comply with industry-specific regulations and regional mandates depending on where they are based, such as sovereign AI, the EU AI Act, U.S. Executive Order 14110, GDPR, data residency laws and industry frameworks like HIPAA and PCI-DSS.

Organizations need to test and validate their new multi-vendor AI data center architecture, to ensure configuration, networking and automation work properly before deploying new hardware in production.

The tricky part is then securing the AI data center, preventing AI-specific risks to their AI applications and infrastructure, and ensuring safe AI use and governance.

AI-stretched attack surface

The multi-layered risks facing AI data centers go beyond what most security teams and systems are used to dealing with.

At the application level, risks include model theft, prompt injection, data leakage and model abuse. At the infrastructure level, threats take the form of AI system vulnerabilities (CVEs), supply chain attacks and lateral movement inside the AI data center core. Then there are AI governance and misuse risks, such as hallucination and toxicity, which undermine relevance and accuracy.

These risks create an attack surface that is wider and deeper than traditional security threats that enterprises face.

A layered approach for AI data center security

AI data centers require enterprises to take a defense-in-depth approach that spans application security, infrastructure security and safe AI use and governance to secure the full AI stack at scale.

AI-native runtime security defends inference APIs and LLM endpoints against prompt injection, data exfiltration, adversarial queries, and API abuse: protection that traditional web application firewalls are not equipped to provide.
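As a minimal sketch of what such a runtime layer does before a prompt reaches the model, the gate below rejects oversized inputs and known injection phrasings. The pattern list and length limit are invented for this example; a production system would combine model-based classifiers with policy checks, not regexes alone:

```python
import re

# Hypothetical injection signatures for illustration only.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all |any )?(previous|prior) instructions", re.I),
    re.compile(r"reveal (your )?(system|hidden) prompt", re.I),
]

def inspect_prompt(prompt: str, max_len: int = 8192) -> tuple[bool, str]:
    """Return (allowed, reason): block oversized or pattern-matching prompts
    before they are forwarded to the inference endpoint."""
    if len(prompt) > max_len:
        return False, "prompt exceeds length limit"
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            return False, f"matched injection pattern: {pat.pattern}"
    return True, "ok"
```

A gateway would call `inspect_prompt` on every request and return an error to the client instead of invoking the model when the check fails.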

Perimeter layer security covers firewalls, DDoS protection and Zero Trust Network Access to control who has permission to enter the environment.

Workload and container protection supports micro-segmentation and container-level isolation, alongside runtime protection.
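The core of micro-segmentation is a default-deny policy: a flow between workloads is permitted only when an explicit rule allows it. The sketch below shows that evaluation logic with invented workload roles and ports:

```python
# Hypothetical allow-list for illustration: (source role, destination role, port).
# Anything not listed is denied by default.
ALLOW_RULES = {
    ("inference-gateway", "model-server", 8000),
    ("model-server", "vector-db", 6333),
}

def is_allowed(src_role: str, dst_role: str, port: int) -> bool:
    """Default-deny micro-segmentation: permit a flow only on an explicit rule match."""
    return (src_role, dst_role, port) in ALLOW_RULES
```

In practice this logic lives in the network fabric or a container network policy engine rather than application code, but the default-deny principle is the same: lateral movement fails unless a rule was deliberately written to allow the path.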

Host security on every node provides Zero Trust segmentation and AI prompt inspection, while AI-hardware protection embeds security at the infrastructure layer.

The risk exposure at the application, infrastructure and governance layers makes security an essential foundation for an AI data center. AI workloads will only rise and threats will only expand, so protecting infrastructure from the start is the best way to pre-empt security concerns.

Takeaways for securing an AI data center

Whether an enterprise is planning an AI data center transformation or a fully operational AI factory, security and compliance must be a central priority.

The blueprint should include the following:

- Implement a defense-in-depth model spanning applications, infrastructure, and AI governance, underpinned with Zero Trust AI prompt security and DPU-level protection.

- Use clear policy control and auditing, as well as support for air-gapped environments, to meet sovereign AI requirements.

- Pre-validate the full AI architecture in a secure simulation before going live.

- Simplify security at scale through an open platform integrated across the whole AI stack.

Enterprises that follow this blueprint will be best prepared for the evolving threat landscape and will reduce the risks to the AI data center. To realize the financial gains, security cannot be an afterthought.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit


