Through collaboration, we can shape a safe and secure AI future

2023 will be viewed as the year artificial intelligence (AI) tipped into the mainstream – and it’s only just getting started. The global AI market is expected to grow to US$2.6 trillion within a decade. Given how transformative AI stands to become, across areas ranging from healthcare to food safety, the built environment and beyond, it’s critical we find a way to harness its power as a force for good.

Beyond the excitement around ChatGPT, there are serious questions about how we build trust in AI, especially generative AI, and what guardrails are needed. This is not a future challenge; according to BSI’s recent Trust in AI poll, 38% of people are already using AI in their jobs day-to-day and 62% expect to by 2030. As the uses for AI multiply, there will be many questions to answer. For technology leaders and those focused on digital transformation, these include: what does safe use of AI look like? How do we bring everyone along on this exciting journey, and upskill those who need it? How can businesses be encouraged to innovate, and what does government need to do to enable that while maintaining a focus on safety?

Safe use of AI

Governments globally are racing to answer those questions. From Australia’s Responsible AI Network to China’s draft regulation of AI-powered services for citizens, to the EU AI Act and President Biden’s recent Executive Order on AI, this global conversation is live – its urgency a stark contrast to the slow global policy response to social media. Crucially, however, no country can dictate how another chooses to regulate, and there is no guarantee of consistency. Yet in our globally connected economy, organizations – and the technology they use – operate across borders. International collaboration to determine our AI future and catalyze future innovation is key.

Some, including former Google CEO Eric Schmidt, have called for an IPCC-style body to govern AI, bringing different groups together to determine our future approach. This chimes with public opinion – BSI’s research found that three-fifths of people want international guidelines for the safe use of AI. There are many ways we can do this. Bringing people together physically is key, for example at the recent UK AI Safety Summit, and I look forward to further progress at upcoming discussions in South Korea and France.

Another useful starting point is international standards, which are dynamic and built on consensus between countries and multiple stakeholders, including consumers, as to what good practice looks like. With rapidly emerging technology, standards and certification can act as a common infrastructure, offering clear principles designed to ensure innovation is safe. Compliance with international standards can act as the golden thread, and is already a key component of similarly cross-border issues like sustainable finance, or indeed cybersecurity, where long-established international standards are commonly used to mitigate risk. Such guidance is designed to ensure that what is on the market is safe, to generate trust, and to help organizations implement better technological solutions for all. Given the pace of change in AI, the agility of standards – and organizations’ ability to apply them quickly – is critical. The endgame is both to promote interoperability and to give vendors, users and consumers confidence that AI-enabled products and systems meet international safety standards.

Global collaboration

While building consensus is not simple, on AI we are not starting from scratch. The soon-to-be-published AI Management System Standard (ISO/IEC 42001), recognized in the UK Government’s National AI Strategy, draws on existing guidelines. This is a risk-based standard to help organizations of all sizes protect themselves and their customers. It has been designed to assist with considerations like non-transparent automatic decision-making, the use of machine learning for system design, and continuous learning. Additionally, there are already many standards around trustworthiness, bias and consumer inclusion that can be drawn on immediately, and we are in the early stages of developing GAINS (the Global AI Network of Standards). Ultimately, some of the big questions around AI lack a technological fix-all, but standards are helping to define the principles behind robustness, fairness and transparency while the technology continues to evolve.

To see this approach in action, we can look to how global collaboration is helping accelerate decarbonization. The ISO Net Zero Guidelines, launched a year ago, were developed out of a conversation between thousands of voices from over 100 countries, including many that are often under-represented. The guidelines have since been adopted by organizations including General Motors to inform their strategies, and Nigel Topping, the UN High-Level Climate Action Champion, has described them as “a core reference text… to bring global actors into alignment”.

AI has the capacity to make a positive impact on society and accelerate progress towards a sustainable world. But trust is critical. We need global collaboration to balance the great opportunity it promises with its potential risks. If we partner across borders we can build the appropriate checks and balances to make AI a powerful force for good in every area of life and society.

This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Susan Taylor Martin has been Chief Executive at BSI since January 2021. Previously, she led a range of information, publishing and software businesses, first at Reuters and then at Thomson Reuters.