EU passes landmark AI act, paving the way for greater AI regulation

The European Parliament has passed its long-awaited AI act, which it hopes will provide the legal infrastructure for regulating artificial intelligence.

While AI has contributed massively to productivity gains and driven major innovations in critical fields such as science and healthcare, many fear that the speed of its development is outstripping the ability to regulate it.

The new act therefore provides a legal framework for categorizing and scrutinizing AI, and acts as a stepping stone towards future regulation.

One small step for regulation, one giant leap for AI

A number of governments around the world have introduced individual laws and regulations targeting specific issues related to AI, but this latest legislation from the European Parliament works by establishing a risk level for each product, with the highest-risk products receiving the most scrutiny.

High-risk products, such as those used in critical industries like healthcare, defense, and law enforcement, would be subject to the greatest level of checks and regulation. The AI act has yet to pass through the European Council, but is expected to do so with little friction.

Speaking on today's vote, IBM Vice President and Chief Privacy and Trust Officer, Christina Montgomery, said, “I commend the EU for its leadership in passing comprehensive, smart AI legislation. The risk-based approach aligns with IBM's commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems.

“IBM stands ready to lend our technology and expertise – including our watsonx.governance product – to help our clients and other stakeholders comply with the EU AI Act and upcoming legislation worldwide so we can all unlock the incredible potential of responsible AI.”

A number of companies, IBM included, have been pushing for greater regulation of rapidly advancing AI capabilities, citing the threat its misuse poses to elections, trustworthy information, and cybersecurity.

Last year, President Biden issued an executive order on safe, secure and trustworthy AI, which combined the voluntary commitments previously made by AI developers into one all-encompassing agreement.


Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cyber criminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict has an MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.