EU passes landmark AI act, paving the way for greater AI regulation


The European Parliament has passed its long-awaited AI Act, which it hopes will provide the legal infrastructure for regulating artificial intelligence.

While AI has contributed massively to increases in productivity and has resulted in major innovations in critical industries such as science and healthcare, many fear that the speed of its development may be outstripping the ability to regulate it.

The new act therefore provides a legal framework for categorizing and scrutinizing AI, and acts as a stepping stone towards future regulation.

One small step for regulation, one giant leap for AI

A number of governments around the world have introduced individual laws and regulations to target specific issues related to AI, but this latest legislation from the European Parliament works by establishing a risk level for each product, with the highest risk products receiving the most scrutiny.

High-risk products, such as those used in critical industries like healthcare, defense, and law enforcement, would be subject to the greatest level of checks and regulation. The AI Act has yet to pass through the European Council, but is expected to do so with little friction.

Speaking on today's vote, IBM Vice President and Chief Privacy and Trust Officer Christina Montgomery said, “I commend the EU for its leadership in passing comprehensive, smart AI legislation. The risk-based approach aligns with IBM's commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems.

“IBM stands ready to lend our technology and expertise – including our watsonx.governance product – to help our clients and other stakeholders comply with the EU AI Act and upcoming legislation worldwide so we can all unlock the incredible potential of responsible AI.”

A number of companies, IBM included, have been pushing for greater regulation of rapidly advancing AI capabilities due to the threat their misuse poses to elections, trustworthy information, and cybersecurity.

Last year, President Biden issued an executive order on safe, secure, and trustworthy AI, which combined the voluntary agreements previously made by AI developers into one all-encompassing commitment.

Benedict Collins
Staff Writer (Security)
