The EU AI Act: what it means and how to comply

As the latest articles of the European Union (EU) Artificial Intelligence (AI) Act came into effect on 2 August, more scrutiny is being levelled at the security measures around AI use cases, especially those designated as ‘high risk’.

One of the most advanced AI regulations in force today, the Act sets the standard for the European region on safe and ethical AI use, but organizations will need a clear roadmap to ensure they remain compliant. This article addresses the main questions about the Act.

Dirk Schrader

Resident CISO EMEA and VP of Security Research at Netwrix.

How the Act rewrites the rules of cybersecurity

The EU AI Act bolsters cyber resilience by mandating AI-specific technical protections. The first of its kind in many ways, the Act calls for protections against data poisoning, model poisoning, adversarial examples, confidentiality attacks, and model flaws.
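
To make that concrete, here is a deliberately simple Python sketch of one ingredient of a data-poisoning defense: screening a training batch for statistical outliers before it reaches the model. The threshold and feature logic are hypothetical illustrations; the Act does not prescribe specific techniques, and real defenses layer many such checks with provenance tracking and human review.

```python
# A crude data-quality gate: flag training samples whose per-feature
# z-score is extreme, as one naive signal of possible data poisoning.
# The 4.0 threshold is a hypothetical placeholder, not a standard.
import numpy as np

def flag_outlier_rows(features: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of rows whose max per-feature z-score exceeds the threshold."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-9  # avoid division by zero
    z_scores = np.abs((features - mean) / std)
    return np.where(z_scores.max(axis=1) > z_threshold)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    batch = rng.normal(size=(1000, 8))
    batch[42] += 25.0  # simulate a crudely poisoned sample
    print("suspicious rows:", flag_outlier_rows(batch))
```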

While this is good in itself, it is the delegated acts that will define what resilience looks like in practice. As a result, the real compliance burden will be determined by technical specifications that don't yet exist. Those specifications will define, for instance, the practical meaning of an "appropriate level of cybersecurity."

What we know for sure is that the Act enforces lifecycle security requirements with an ongoing obligation for high-risk systems. This means that organizations with AI solutions designated as ‘high risk’ must achieve and sustain appropriate levels of accuracy, robustness, and cybersecurity through every stage of the product lifecycle.

This continuous assurance, as opposed to point-in-time audits and compliance checks, calls for an ongoing DevSecOps practice rather than a one-off certification. Organizations will need to build pipelines that automatically monitor, log, update, and report on security posture in real time.
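
As a rough illustration, a scheduled posture check in such a pipeline might look like the following Python sketch. The evaluate_accuracy and evaluate_robustness stubs and the 0.95/0.90 thresholds stand in for an organization's own metrics and risk appetite; none of these names are mandated by the Act.

```python
# A minimal continuous-assurance check a DevSecOps pipeline might run
# on a schedule, emitting a machine-readable posture record.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-posture")

def evaluate_accuracy() -> float:
    return 0.97  # stub: replace with a real holdout evaluation

def evaluate_robustness() -> float:
    return 0.88  # stub: replace with e.g. an adversarial test-suite pass rate

def posture_check(min_accuracy: float = 0.95, min_robustness: float = 0.90) -> dict:
    """Run one scheduled check and log the result for the audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "accuracy": evaluate_accuracy(),
        "robustness": evaluate_robustness(),
    }
    record["compliant"] = (
        record["accuracy"] >= min_accuracy
        and record["robustness"] >= min_robustness
    )
    log.info(json.dumps(record))  # ship to a SIEM / audit store in practice
    return record

if __name__ == "__main__":
    result = posture_check()
    if not result["compliant"]:
        log.warning("posture below threshold - trigger review workflow")
```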

The continuous monitoring requirement represents a fundamental shift from traditional compliance models, and the biggest implementation hurdle is the resource intensity required.

Organizations will need dedicated AI security teams and automated monitoring infrastructure, creating significant ongoing operational costs. This will open up a whole new range of services from managed security service providers (MSSPs) to help small and medium-sized enterprises (SMEs).

These obligations sit on top of other regulations, such as NIS2, the Cyber Resilience Act, GDPR, and sectoral rules. As the AI Act adds another layer to a multi-regulation environment, the need for a holistic approach to compliance grows, especially given the cross-border complexity involved.

Becoming compliant

Organizations need a structured approach to make sure they comply with the EU AI Act, starting with an initial risk classification and comprehensive gap analysis to map every AI system against Annex III of the Act. Once high-risk use cases are identified, auditing can begin by checking existing security controls against the requirements of Articles 10-19.
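
A hedged sketch of that first mapping step might look like this in Python; the Annex III area list is abridged, and the inventory entries are fictional examples rather than anything the Act defines.

```python
# Tag each AI system in the inventory with the Annex III area it may
# fall under, and flag candidates for a full control audit.
from dataclasses import dataclass

# Abridged, illustrative subset of Annex III high-risk areas.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
}

@dataclass
class AISystem:
    name: str
    purpose: str
    annex_iii_area: str | None  # None if no Annex III area applies

def classify(systems: list[AISystem]) -> list[tuple[str, str]]:
    """Return (system, verdict) pairs for the gap-analysis backlog."""
    results = []
    for s in systems:
        if s.annex_iii_area in ANNEX_III_AREAS:
            results.append((s.name, "candidate high-risk: audit controls"))
        else:
            results.append((s.name, "document rationale for out-of-scope"))
    return results

if __name__ == "__main__":
    inventory = [
        AISystem("cv-screener", "rank job applicants", "employment"),
        AISystem("chat-faq-bot", "answer product FAQs", None),
    ]
    for name, verdict in classify(inventory):
        print(f"{name}: {verdict}")
```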

The next step is building robust AI governance structures. To achieve that, organizations will want to invest in an interdisciplinary team of experts from legal, security, data science, and ethics backgrounds.

These expert task forces will be best equipped to design clear procedures for managing modifications. This isn't just about adding compliance roles: it requires fundamental changes to product development lifecycles, with security and compliance considerations embedded from initial design through ongoing operations.
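
By way of illustration, a modification record enforcing cross-disciplinary sign-off could be modeled along the following lines; the field names and required roles are assumptions for the sketch, not anything the Act specifies.

```python
# A minimal change-management record a governance task force might
# require before any change ships to a high-risk system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModificationRecord:
    system: str
    description: str
    substantial: bool  # may trigger re-assessment if True
    approvals: dict[str, bool] = field(default_factory=dict)
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    # Hypothetical sign-off roles mirroring the interdisciplinary team.
    REQUIRED_SIGNOFFS = ("legal", "security", "data_science", "ethics")

    def ready_to_ship(self) -> bool:
        """A substantial change needs every discipline's sign-off."""
        if not self.substantial:
            return self.approvals.get("security", False)
        return all(self.approvals.get(r, False) for r in self.REQUIRED_SIGNOFFS)

if __name__ == "__main__":
    change = ModificationRecord(
        system="cv-screener",
        description="swap embedding model",
        substantial=True,
        approvals={"legal": True, "security": True, "data_science": True},
    )
    print("ship?", change.ready_to_ship())  # False: ethics sign-off missing
```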

A particularly challenging area will be managing third-party partnerships and supply-chain due diligence. Existing compliance frameworks such as NIS2 and DORA already require organizations to put more weight on this aspect of overall risk management.

Adding AI to the picture will increase the pressure even further to establish contractual security guarantees for all third-party components and services. We can expect a rapid emergence of specialized AI compliance service providers in response to that need.
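
One way to operationalize that due diligence is an automated gap check over supplier attestations, sketched below in Python; the attestation names are illustrative, not drawn from any standard or from the Act itself.

```python
# Check that each third-party AI component ships with the attestations
# the contract demands, and report what is still missing.
REQUIRED_ATTESTATIONS = {
    "security_contact",       # named contact for vulnerability reports
    "sbom",                   # software bill of materials provided
    "model_provenance",       # training-data and model lineage statement
    "incident_notification",  # contractual breach-notification commitment
}

def vendor_gaps(components: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per component, the attestations still missing."""
    return {
        name: REQUIRED_ATTESTATIONS - provided
        for name, provided in components.items()
        if REQUIRED_ATTESTATIONS - provided
    }

if __name__ == "__main__":
    supply_chain = {
        "embedding-api": {"sbom", "security_contact"},
        "ocr-library": REQUIRED_ATTESTATIONS,  # fully attested
    }
    for component, missing in vendor_gaps(supply_chain).items():
        print(f"{component}: missing {sorted(missing)}")
```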

However, organizations should be wary of compliance washing, where vendors claim readiness without deep technical capability.

Looking towards the future

One of the most significant anticipated successes is the standardization of AI security across the region, creating a harmonized, EU-wide security baseline. This foundational approach directly addresses AI-specific protections, with a clear focus on mitigating adversarial attacks, poisoning, and confidentiality breaches.

A key strength of the regulations is their emphasis on a security-by-design ethos that integrates security considerations from the outset and throughout an AI system's operational life. This is complemented by enhanced accountability and transparency requirements via rigorous logging, comprehensive post-market monitoring, and mandatory incident reporting.
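
To illustrate the record-keeping side, the following Python sketch chains log entries by hash so that gaps or edits become detectable. Hash-chaining is one common design choice for tamper-evident logs, not a legal mandate, and the event schema here is hypothetical.

```python
# An append-only audit log whose entries are chained by SHA-256,
# making deletions or edits detectable on verification.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self) -> None:
        self._entries: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, event_type: str, detail: dict) -> dict:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        # Hash the entry (before the hash field exists) to chain it.
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self._entries.append(entry)
        return entry

if __name__ == "__main__":
    log = AuditLog()
    log.record("inference", {"system": "cv-screener", "decision_id": "1234"})
    latest = log.record("serious_incident", {"summary": "biased output detected"})
    print(json.dumps(latest, indent=2))
```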

AI is set to play a key role in many aspects of life, including cybersecurity, especially with the rise of agentic AI. Making sure it is safe and sound will prepare us for the era of Adaptive Resilience: the fusion of cyber resilience, zero trust, and AI-based risk alignment.

Pitfalls to overcome

Despite all promising aspects, several limitations and caveats could hinder the effectiveness of AI security regulations. A primary concern is the rapid threat evolution inherent in the AI landscape. New attack vectors may emerge faster than static rules can be updated, necessitating regular revisions through delegated acts.

Significant resource and expertise gaps could also pose a challenge, as national authorities and notified bodies will require adequate funding and highly skilled staff to effectively implement and enforce these regulations.

Only time will tell how effective these new measures will be, but one thing is certain: this groundbreaking legislation will mark a new age of AI and cybersecurity.

Beyond the EU's borders, the "Brussels Effect" is likely to create global ripple effects, inspiring similar security improvements in AI systems deployed worldwide and thereby extending the benefits of these regulations well beyond European Union member states.

Organizations looking to leverage AI solutions should prioritize holistic security and cyber resilience in their practices, and view compliance not as a tick-box exercise but as a fundamental shift in how systems are built and products are brought to market.

