“It's kind of hopeless before you start” - regulation vs competition ahead of the UK AI Safety Summit

The UK government has released a report that claims AI could enhance the ability of threat actors to commit cyber and terrorist attacks.

The report, released in advance of the UK AI Safety Summit on November 1 and 2, highlights the need for greater regulation on the development of AI. But unless governance is put in place on an international scale, there is little to stop the malicious use of AI, a top expert has told TechRadar Pro.

“There are existential risks where AI stops taking instructions from human beings and it starts doing what it wants, and we become dispensable, so there is no guarantee we will survive because we would just be a puppet. And that's the fear of it all,” Avivah Litan, VP and Distinguished Analyst at Gartner, told us.

The case for greater regulation

AI has become increasingly accessible over the past few years with the public release of ChatGPT and other generative AI software, and many of us now use AI tools to boost productivity and efficiency at work.

As generative AI becomes more advanced and further democratized through open-source development, there is an increasing risk that the lack of safeguards could allow malicious use of AI.

“One of the biggest risks that we see every day now is misinformation and disinformation. It's so easy to create fake information with gen-AI, it doesn't even have to be deep fakes, it's just AI generated text,” Avivah says. “There's a risk of bad decision making from inaccurate information, from hallucinations that cause the wrong decisions, misinformation and disinformation that polarizes society and gets everybody into a virtual civil war, which we’re seeing right now.”

As UK Prime Minister Rishi Sunak recently pointed out, AI also has the potential to allow individuals without any training or experience to conduct highly sophisticated cyber attacks that would have been unattainable a decade ago. Moreover, terrorist groups and other malicious actors could use AI to generate propaganda, enhance recruitment, sow political discord and plan attacks.

Regulating and governing AI has been a priority for the industry. At the beginning of the year, more than 30,000 people signed an open letter calling for a six-month pause on the training of all AI systems more powerful than GPT-4. Among the signatories were big names in the tech industry, such as Elon Musk, Steve Wozniak and Geoffrey Hinton.

Even those developing the most powerful models understand the need for regulation, with OpenAI, Meta, Google and Amazon (among others) voluntarily committing to guidelines provided by the Biden administration. On the other side of the Atlantic, the UK government is preparing for the AI Safety Summit, which is to be held at Bletchley Park, birthplace of the programmable digital computer and home of Allied code breaking during World War Two.

However, there are doubts that any actionable legislation will come out of the summit. Regulating an industry among powers with common interests is one thing; doing so with the cooperation of a nation long suspected of cyber espionage is an entirely different beast.

“It's just really tough getting global governance, but it is needed. It's always a good thing to get people talking to each other and exploring the issues. I'm just very sceptical that China will participate in a meaningful and substantive way with Europe and the United States. So that's the issue really. It's kind of hopeless before you start,” Avivah says.

Knife’s edge

China is not just a security risk. As businesses gain access to more AI tools to enhance their processes, many believe that strict regulation could harm competition. A study from revenue intelligence company Gong suggests that almost two thirds (65%) of British businesses think strict regulation would impede their competitiveness.

Obviously, having no regulation is just as harmful as too much, so where do we draw the line?

“It definitely pays to secure your house, and in the end it costs you money but it pays because now the criminals can’t steal your television,” Avivah continues. “It's been an age-old issue with security: how do you prove that spending a billion dollars on security is going to give you a billion in return? You can’t prove it, because it will only give you a billion in return if you don't lose a billion dollars in assets. So it's really cost avoidance.”

Competition isn’t the only thing at risk when it comes to AI. The persistent skills gap in cyber security has left many businesses vulnerable to cyber attacks, and adopting advanced AI solutions without the skills to manage them could prove as much of a risk as regulation itself.

A recent study from O’Reilly revealed that 71% of IT teams in the UK believe there is a gap between the digital skills available and the UK government’s ambition to become a global leader in AI, and that 93% of IT professionals are concerned about C-suite ambitions to use generative AI within their business.

“On an everyday level there are risks of faulty decision making, because you're getting inaccurate information back that you're not tracking or not validating. You're getting hallucinations that you're basing your business decisions on. Made-up information, inaccurate information,” Avivah states.

“There are consumer protection risks and customer-facing risks where, if you're using these technologies to interface with your customers, you can give them the wrong advice. And if it's a critical function like healthcare, then it's very dangerous to give your clients the wrong advice.”

The move towards low-code and no-code applications, with the assistance of AI, could help re-skill current employees and mitigate the risks posed by gaps in cyber literacy. As Avivah puts it, “English is the new programming language.”

“You still have to understand what it's doing and you have to make sure it's legal, that it's not infected with security threats and malware. So you still have to be technical, but you don't have to know how to program anymore; you just have to know how to talk and tell it what to do.”

Fine-tuning regulation while also upskilling employees therefore seems to be the best plan of action for AI enterprise risk management.

An existential crisis

The risks posed to humanity as a whole are significantly more difficult to mitigate. The law only binds those who choose to follow it, so the dangers posed by frontier AI need to be managed at the source.

“The existential risks are not manageable unless whoever is creating the models is making them manageable. So that's why China and the EU and the US need to get together and make sure whoever is developing these frontier models is managed, because you can manage the risk.”

“There is some proof that putting these security guardrails in does result in better performance, so it could help the performance of the models: it's more transparent, it becomes more predictable. You’re monitoring model behavior, you know what to expect, so you’re tuning the model more rigorously if you’re putting security and safety controls in. So it can help innovation.”

Many have decried the existential risks posed by AI, while others argue that simply having a ‘kill switch’ mitigates many of these worries. On a more individual level, we are experiencing the risks on a day-to-day basis. Trust in information is one of the key security risks outlined by the UK government, especially when it comes to synthetic media, disinformation, and financial manipulation. But Avivah explains that solutions to these problems are already emerging.

“There are plenty of controls coming out on the market. I spend my life writing about it. You just have to take one day at a time, one application at a time, one control at a time, and you can manage these AI risks before they manage you.”

A future of opportunity?

If, by some miracle, international legislation, governance or a central regulatory body were to be established for AI, the opportunities provided could be revolutionary.

“To me the main opportunity is healthcare, in medicine, with curing diseases and coming up with the right medicine and the right pharmaceuticals for personalized medicine. Knowing exactly what your condition is and being able to generate a pharmaceutical that targets exactly what you have in a much faster timeframe. To me that's the main game-changing use case.”

The applications to supply chain problems are also impressive. According to the United Nations, 17% of total global food production is wasted across households, food service and retail, and this is another area where AI has an opportunity to shine.

“It's just having more brain power analyzing these problems and coming up with solutions. Generative AI is really good in terms of feeding all this information to the AI model and having it synthesize and tell you, ‘Oh yeah, we see in this area of Kenya, if you look at that specific acreage that's where the waste is.’ You need pinpointed analysis, and that's what AI is good at.”

The UK AI Safety Summit may just be a single step in a long process of regulating AI and collaborating on safety, but it is a step.

“It's good to see them organizing. They’ll come out with stronger cooperation with the US and Europe for sure. You see a lot of the AI researchers are happy to share with their colleagues and other companies. It's a small community and it's very intellectually driven. I think it's great to get them all together,” Avivah concludes.

Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cyber criminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict has an MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.