How to harmonize the complexities of global AI regulation


The EU AI Act rollout presents any company doing business in the EU with tough decisions to make and an urgent need to establish a robust risk management framework.

Just this month, the European Union Artificial Intelligence Act (EU AI Act) reached yet another major milestone in its rollout. Article 5, covering prohibited AI practices and unacceptable uses of AI, has come into force.

It’s not just companies based in the EU that need to prove their systems comply with Article 5 – or indeed, any other aspect of the EU AI Act. One of the most comprehensive AI regulations to emerge worldwide, it applies extraterritorially, meaning that any company doing business in the EU must comply, regardless of where it is based.

This presents multinationals with some tough decisions to make. Should they withdraw from the EU entirely, on the basis that it has become a high-compliance market? Should they restrict the use of AI in their products and services within EU markets? Or should they adopt the EU AI Act as a global standard, potentially incurring substantial costs and operational drag?

Clearly, none of these approaches is optimal. Ideally, regulations should align with global frameworks to avoid fragmentation between jurisdictions. Without that alignment, companies are forced to allocate valuable resources to administrative compliance, arguably at the expense of other areas of concern, such as proactive cybersecurity measures.

Many laws, after all, aim to strengthen the security of organizations, and that is to be welcomed. However, their proliferation and specificity can drain company resources, increasing costs and creating vulnerabilities.

Bill Wright, Global Head of Government Affairs at Elastic.

For now, companies must navigate this less-than-ideal state of regulatory affairs, and do so at a time when AI technology is evolving rapidly – and typically faster than laws and mandates can be put into place.

Doing so will involve striking the right balance between innovation and compliance, while actively participating in the global debate between the private and public sectors around global AI standards.

Companies’ direct experience of walking this innovation/compliance tightrope will be of great value to these discussions. That engagement should be led by public affairs teams with first-hand experience of tracking legislative developments, collaborating effectively with policymakers and advocating for regulatory harmonization to optimize compliance investments.

In the absence of a global framework, and for however long that situation persists, interoperability between the different regional outposts of multinationals will be crucial. Achieving harmonization, at least internally between those outposts, will help promote the responsible development of technological solutions that can be put to work in different parts of the world and, eventually, adopted on a global scale.

With an eye on internal efforts, it will be all the more essential to prioritize operational efficiency and process rationalization, focusing on automation, risk-based compliance and close cooperation between legal, IT management and security teams. This approach has the potential to turn constraints into opportunities, and help build a future where innovation and security go hand in hand.

On the security side, managers will face growing challenges related to regulatory complexity: juggling compliance and operational safety, and protecting critical systems while respecting new and changing rules. Their role will be central to implementing an ethical and secure innovation policy, building bridges between internal departments to promote a comprehensive and coherent approach.

Challenging times ahead

The overall challenge that multinational organizations face in 2025 is to ensure that AI governance is aligned with both the regulatory requirements and the strategic objectives of the organization. This requires a robust and confident approach to risk management – one that can weather the storm when companies are inevitably forced to focus on diametrically opposed requirements. That takes rigor, but it also demands flexibility and consistency to allow for efficient resource management.

In the absence of this kind of approach, imbalances will persist and represent a significant burden on organizations, which run the risk of being less compliant, less secure or less able to benefit from innovation – or, indeed, all three.

Organizations may also find themselves woefully unprepared for new regulations coming down the line. Work related to the EU AI Act, for example, has only just begun. While Article 5 is now in force, the next phase of the AI Act rollout will see the application of ‘codes of practice’ for general-purpose AI systems, such as large language models. Enforcement of those rules, and the associated obligations for AI providers, will commence in August.

On one point, the EU is very clear: the penalties for non-compliance with Article 5 will be stiff. Violations will be subject to administrative fines of up to €35m or up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher.
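To make the "whichever is higher" rule concrete, the short Python sketch below calculates that ceiling for a hypothetical company. The turnover figure is purely illustrative, and the calculation reflects only the headline caps quoted above, not how a regulator would actually set a fine.

# A minimal sketch of the Article 5 penalty ceiling described above:
# the higher of a fixed EUR 35m cap or 7% of worldwide annual turnover.
# Figures are illustrative only.

ARTICLE_5_FIXED_CAP_EUR = 35_000_000
ARTICLE_5_TURNOVER_RATE = 0.07  # 7% of total worldwide annual turnover

def max_article_5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible Article 5 fine for a given turnover."""
    return max(ARTICLE_5_FIXED_CAP_EUR,
               ARTICLE_5_TURNOVER_RATE * worldwide_annual_turnover_eur)

# Example: a company with EUR 2bn worldwide turnover faces a ceiling of
# EUR 140m, because 7% of turnover exceeds the EUR 35m fixed cap.
print(max_article_5_fine(2_000_000_000))  # 140000000.0

For a business with €2 billion in worldwide annual turnover, in other words, the 7% rule dominates and the ceiling is €140 million – four times the fixed cap.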

Against this backdrop, organizations must prepare now for a rolling program of regulatory change during 2025 and beyond. They must keep clear inventories of their AI tools and technologies, work to improve the AI literacy of employees and put in place the risk management foundations discussed here. Only by focusing on building this kind of resilience can they hope to navigate the regulatory minefield successfully and emerge on the other side as stronger, more innovative businesses.
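As a rough illustration of the inventory point, the Python sketch below shows one way such a register might be structured. The schema, example entries and team names are hypothetical and would need to be shaped by legal counsel, though the tiers mirror the EU AI Act's broad risk categories (prohibited, high-risk, limited and minimal).

# A minimal sketch of an AI inventory. Fields and example entries are
# hypothetical illustrations, not a prescribed schema.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # Article 5 practices - must not be deployed
    HIGH = "high"              # subject to strict obligations
    LIMITED = "limited"        # transparency obligations
    MINIMAL = "minimal"        # no specific obligations

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str
    deployed_in_eu: bool
    risk_tier: RiskTier
    business_owner: str        # who answers for compliance internally

inventory = [
    AISystemRecord("resume-screener", "ExampleVendor", "candidate ranking",
                   deployed_in_eu=True, risk_tier=RiskTier.HIGH,
                   business_owner="HR / Legal"),
    AISystemRecord("support-chatbot", "in-house", "customer Q&A",
                   deployed_in_eu=True, risk_tier=RiskTier.LIMITED,
                   business_owner="Customer Service"),
]

# Simple automated check: surface anything prohibited or high-risk in the EU.
for record in inventory:
    if record.deployed_in_eu and record.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH):
        print(f"Review required: {record.name} ({record.risk_tier.value})")

Even a register this simple makes it possible to automate basic checks – flagging anything prohibited or high-risk that touches the EU – and gives legal, IT and security teams a shared picture to work from.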

We've compiled a list of the best IT asset management software.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
