Balancing AI innovation with regulation – a recipe for success


The world can’t stop talking about AI – and given the rate at which it’s developing, rightfully so. Google’s CEO, Sundar Pichai, has admitted that concerns about the wrongful deployment of AI keep him awake at night. In the UK, the Trades Union Congress has called for stronger rules to protect workers from decisions made by AI systems. And Rishi Sunak has announced that the UK will host the first global AI summit this year.

There is no doubt that the regulation and governance of AI will continue to evolve. However, organizations must put the right measures in place now, so they do not have to row back once legislation catches up.

It is easy to get swept up in the AI hype, but what do these rapid developments mean for businesses, and how can they best mitigate risk in an emerging and uncertain regulatory landscape?


Business responsibility in reducing compliance risk

One overarching challenge is compliance with pre-existing legal regimes, such as privacy and intellectual property law, which were not written with newly released AI technologies in mind.

In the field of data protection, if it turns out that generative AI tools such as ChatGPT have unlawfully processed EU or UK personal data to train their models, then we can expect further consequences, and in some cases bans, until appropriate levels of compliance have been achieved.

Personal data originally put into the public domain in other contexts and then used for algorithm training can impact individuals’ rights, for example by limiting their choice over how their data is used. Important issues yet to be resolved include whether such data can be used without additional notice and on an opt-out basis (the “legitimate interests” ground, in GDPR terms), and whether it can be used to train a model and then screened from any outputs.

Turning to the field of intellectual property, copyright litigation is already under way in the US: developers have sued GitHub over its Copilot coding assistant, and Getty Images has sued Stability AI, in both cases alleging that content was used without permission to train AI systems. Complex legal issues such as these are being argued but remain to be decided, both at home and abroad.

Organizations working with generative AI must consider not only current legal uncertainties but also anticipate emerging regulation, so they can minimize disruption to their AI strategy once the law settles. As a practical matter, it would be wise to negotiate strong intellectual property protections with AI tool providers, as well as termination provisions that address regulatory risk. Companies should also document and implement AI policies based on a recognized framework that addresses and mitigates potential harms.

While technological innovation is certainly necessary to improve the way businesses operate, AI-specific regulation is mostly still in the discussion phase, so organizations need to be particularly careful about the tools they implement. Companies that do not adopt self-regulation will potentially have to undo or remediate more of their AI work later. Staying ahead of the curve is integral to business growth, but AI technology in particular needs to evolve with the protection and enrichment of humans front of mind.

Creating consistent regulation across markets

Generative AI brings many challenges as well as benefits. A long line of business executives and academics has highlighted a pressing need to regulate AI, and regulation is now taking shape across markets – each with a different take on what “good” looks like.

In Europe, the GDPR set the precedent for data protection. However, as new AI regulation continues to be discussed there, we are already witnessing a de facto relaxation in privacy protections, particularly in the form of wholesale scraping of training data from the internet.

The EU’s AI Act is a piece of legislation written specifically for artificial intelligence applications. The UK, in contrast, plans to hand a set of AI principles to pre-existing regulators for them to apply when developing and enforcing AI policy within their respective remits. These principles include safety, transparency and fairness.

Either way, generative AI is a classic example of a technology outrunning the law; the sheer pace of development is making it difficult for legislators and regulators to keep up. Sunak has made clear that his aim is to establish the UK as a global leader, making Britain the geographical home of global AI safety regulation. But the UK and EU can’t forget the importance of protecting their people, by ensuring that any form of regulation – whether the EU’s omnibus AI Act or the UK’s more piecemeal approach – neither contradicts nor waters down the privacy protections provided by the GDPR.

Any new regulation must work in tandem with existing laws to create a stable framework that serves everyone. Otherwise, the two risk undermining one another and making it harder for businesses to protect their data and their people. Close collaboration between stakeholders across markets is key to striking a balance between innovation and law.

The need for regulation to address novel issues

While data protection and intellectual property understandably sit at the top of the AI compliance agenda, addressing bias and ensuring fairness in AI systems are just as crucial to building ethical and equitable applications.

AI raises profound ethical and social questions, with impacts on employment, equality and autonomy. Developing ethical frameworks, responsible deployment practices and inclusive decision-making processes is necessary to mitigate the negative consequences and maximize the benefits of AI for humanity.

Addressing these challenges requires a multidisciplinary approach involving policymakers, researchers and industry leaders, while also empowering society to have a say in how it interacts with AI solutions. Striking the right balance between AI automation and human involvement is crucial to maximizing the benefits to society.

There is no doubt that AI holds a wealth of potential for businesses across all sectors. However, it needs to be scaled effectively and appropriately, and regulated with humans front of mind, or it risks taking a turn for the worse.


Alex Hazell is Head of EMEA Privacy and Legal at Acxiom.