Slowing the fear of generative AI with the right regulation


Since the release of OpenAI’s ChatGPT, there has only been one technology on everyone’s lips: generative AI. Whether it’s students using the technology to help with their homework, consumers asking it for holiday itineraries, or employees creating AI-generated corporate content, the uses of the technology are vast.

The rhetoric surrounding generative AI, though, is overwhelmingly negative. Headlines warn of AI takeovers and countries banning ChatGPT, and even give space to AI experts who claim the technology could lead to the extinction of humanity. As a result, a fear around generative AI has taken hold, and the flames are still being fanned today.

Why AI shouldn’t be feared

I’m here to tell people that their fears around AI are unfounded and that they don’t have to be scared. Instead, consumers and workers need to see AI, and generative AI in particular, as a positive addition to their lives.

Last month I listened intently to Eric Schmidt’s keynote on AI at the Databricks Data & AI Summit. Schmidt, the former CEO and Chairman of Google, spoke about how particular uses of AI need regulation to protect humans, while also being clear that the storylines in films are exactly that: just storylines.

Organizations have a large role to play in turning the negative perception many people hold into a positive one. It’s an education effort that will not only allow them to implement AI technology, but also improve productivity, customer experience and employee morale as a result. By changing the tide of AI opinion, organizations will be empowered to use the technology positively.

Whether it’s allowing teams to use ChatGPT or implementing AI-enabled solutions such as Microsoft Fabric, which allows organizations to manage their data more easily, this innovative technology is here to stay and its benefits need to be taken seriously.


AI regulation

The topic of global AI regulation is being widely debated at the moment. OpenAI has appeared before Congress to speak on the issue, Europe is leading the regulatory charge, and the EU AI Act has been passed. Yet there is still no specific regulatory framework for AI.

As regulation continues to be debated at the highest levels, it’s likely to be some time before guidelines take effect. In the meantime, and as part of the education effort mentioned above, organizations need to ensure their teams are comfortable with generative AI, and frameworks need to be put in place.

In fact, businesses are responsible for their own AI structures, which should cover how workers are and aren’t allowed to use the technology. Implementing guidelines and rules internally is a vital first step in limiting AI fear until people are comfortable using the technology. It also means organizations are using generative AI responsibly, in a way that isn’t detrimental to their business or their workforce.

It’s also important for organizations to do all they can to lobby for unified regulation across their region and globally. It’s absolutely the responsibility of the organizations driving generative AI forward, OpenAI and Microsoft among them, to likewise lead the regulatory charge. Having this in place will ensure AI systems are developed ethically and safely from the outset.

Final thoughts

Many organizations are looking at taking their first steps into generative AI as we speak, but there is work to be done to ensure the technology is used correctly rather than left by the wayside. The key to this, and to any successful generative AI strategy, is to put people at its heart.

Whoever the end user, whether customers or employees, generative AI must benefit them in their day-to-day and working lives. If it doesn’t, the technology will be under-utilized and fail to deliver the expected outcomes.

A people-first approach to AI implementation will also help stem the fears around the technology, as people come to understand it’s there to help them, not replace them. Finally, this approach also needs to cover all internal rules and regulations around generative AI. We have talked a lot about the importance of having AI structures, but they need to be put in place in a way that benefits workers and customers alike.


Ryan Price is Executive of Data & Artificial Intelligence at Avanade.