How to maximize the rewards, and minimize the risks, of GenAI


2023 truly was the ‘year of generative AI’. Since the public release of ChatGPT a year ago, we have seen a stampede of model releases and upgrades from tech industry titans: Google’s Bard in May, Meta’s LLaMA 2 in July, and Google’s Gemini in December. Generative AI has also dominated headlines, with industry experts and everyday consumers continuously debating its potential impact on how we work, live and play. Generative AI sits at the ‘peak of inflated expectations’ on Gartner’s latest Hype Cycle. But is this a technological innovation that will accelerate through the cycle and reach the ‘plateau of productivity’ in record time?

Generative AI is a genuine revolution, presenting opportunities and developing at a pace the likes of which we’ve never seen before. Organizations worldwide must therefore begin practically experimenting with and scaling generative AI to take advantage of the opportunities it presents: productivity efficiencies, personalized customer experiences, and potentially innovative and disruptive business models and operations. This is already top of the agenda for many: our research found that three-quarters of businesses (76%) expect to use generative AI in the next 12-18 months if they haven’t started doing so already. Yet fewer than 1 in 10 (8%) organizations describe widescale use across their organization.

Every department, from IT to legal, HR to marketing, must examine all existing business processes to assess if and how generative AI can assist and evolve them for the better. Most importantly of all, organizations must ensure solutions can be deployed with an enterprise level of safety, security, regulatory compliance and ethical responsibility.

Ross Sleight

Chief Strategy Officer for EMEA at CI&T.

Identifying and mitigating generative AI’s risks

While assessing the potential use cases of generative AI is imperative, business leaders must also recognize how rapidly these technologies are being released to the public, with major corporations now racing to commercialize the next breakthrough.

It’s no wonder that governments and regulators are scrambling to develop AI legislation. The European Union has already agreed the Artificial Intelligence Act, a deal that sets comprehensive rules for trustworthy AI, to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.” And in the US, President Biden has issued an Executive Order on the safe, secure, and trustworthy development of artificial intelligence.

Meanwhile, in November 2023, the UK held the world’s first AI Safety Summit. The event brought together international governments, leading AI companies, civil society groups, and research experts to consider the risks of AI, particularly at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. The summit was widely hailed as a success – even a diplomatic coup for the UK – with outcomes including a joint declaration by 28 countries to understand and agree on the opportunities, risks and need for international action on frontier AI, and the establishment of the world’s first AI Safety Institute in Britain.

It’s vital to pay heed to this regulatory progress, as the resulting rules will heavily shape how businesses can use the technology. Alongside incoming regulations, companies must also define their own governance for the adoption of AI within their businesses.

Protection via proper governance and preparation

AI governance at an individual business level will involve working out what guardrails a company puts in place regarding privacy, security, compliance and ethics in implementing AI solutions. This is a board-level conversation, in which leaders must fully comprehend the transformational change to their business that AI presents, and consider their AI strategy and roadmap, methodology and processes, risk management measures, and regulatory compliance. Then, they must examine how it all aligns with their business goals, investments and budgeting, and ESG commitments.

Once governance and team structures have been established, organizations can then begin to look for practical business use cases, both internally for efficiency and externally for improved end-user experiences.

Early focuses should include end-to-end software development, customer services and operations, marketing and sales, and broader automation of business processes. This involves identifying which pain points and opportunities are best suited to generative AI, then, as with all digital innovation, progressing from proof of concept (POC) to minimum viable product (MVP) to ensure the right use cases are developed.

Cultivating the human-AI relationship

Today, generative AI still has no idea whether what it’s producing is ‘good’; the only way it can learn is through human feedback. We need to remember that it is our responsibility to steer AI so that it augments our tasks and becomes an invaluable, time-saving tool across our working and personal lives. It can become the scaffolding of an organization, taking on menial and mundane tasks and freeing humans to concentrate on higher-value work.

Reaching this point will require comprehensive preparation from organizational leadership teams. We believe that AI will become a huge part of our everyday lives and that organizations must be ready to engage to avoid getting left behind. Those who begin this journey today will be the first to minimize its risks and maximize its rewards.


This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.

Ross Sleight is Chief Strategy Officer for EMEA at CI&T.