ChatGPT is not where true generative AI innovation lies

The success of ChatGPT has been both a blessing and a curse for technology leaders. On one hand, it has opened the world’s eyes to the seemingly unlimited potential of generative AI, increased investment in underfunded R&D teams and propelled technology to the top of the boardroom agenda. On the other, it has knocked other important AI and technology projects off course, syphoned resources from elsewhere to fuel a desperate game of competitor catch-up, and potentially created a massive compliance timebomb that could go off at any time.

In short, ChatGPT has become a distraction. Most forward-looking and innovative businesses were already experimenting with or implementing AI, often to great success. The issue is that much AI innovation isn’t obvious or sexy. Automating and optimizing a vital but mundane process may improve operational efficiency and contribute to the bottom line, but it’s unlikely to grab headlines in the same way as a language model capable of credible performance on a difficult exam.

As ChatGPT fever took hold, it seemed every business was racing to get its generative AI news out ahead of the competition. And with good reason – analysis shows that the businesses in the S&P 500 that mentioned AI during earnings calls saw their share prices outperform those of businesses that didn’t. The likelihood is that behind these companies’ AI announcements were accelerated timelines, a few all-nighters and, crucially, efforts redirected from work that had been under way elsewhere.

Great news for the stock price. Not so great for innovation strategies that now need another layer of scrutiny.

Competitive disadvantage

The irony is that, even though generative AI is in theory available to everyone via a chat interface or a simple API and a credit card number, not everyone has grasped either the urgency of adopting it or the most beneficial way of using it. Opportunities are being missed: businesses understand they need to adopt generative AI to stay competitive, but many fail to execute well.

It could be argued that the organizations currently sinking thousands of hours of strategy, R&D and product development time into these platforms – especially without robust mechanisms for governance and sharing learnings – are effectively draining their teams of precious innovation resources.

Don’t get me wrong, ChatGPT is an extremely useful tool that can deliver many efficiency and productivity benefits. It has the potential to transform a wide range of processes, such as improving customer support, speeding up content creation or acting as a lightning-fast research assistant. And as a “starter for 10,” all companies should be using it, or another form of generative AI. But when everyone has access to the same tools, they become no more of a competitive advantage than a search engine was in the late ‘90s. You might gain an edge as an early adopter, but any advantage soon diffuses into ubiquity.

The true potential of generative AI can only be unlocked by using it to enhance and accelerate the distinctive competencies that exist inside a company. No one knows your business better than you do, and your proprietary intelligence – your organization’s data – is far more valuable than any off-the-shelf generative AI product. By layering in industry-specific data to create bespoke LLMs and apps, businesses gain hyper-relevant industry information, uniquely contextualized and aligned to their strategy and priorities.
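
To make the idea of “layering in” proprietary data concrete, here is a minimal, hypothetical sketch: internal documents are ranked for relevance and placed into the prompt sent to whichever model a business has vetted. The document snippets, the scoring method and the prompt format are all illustrative assumptions, not a description of any specific product or company approach mentioned in this article; a production system would typically use embeddings and access controls rather than the crude keyword overlap shown here.

```python
# Illustrative sketch only: retrieve the most relevant internal snippets for a
# query and prepend them to the prompt. All document names and content below
# are made up, and the scoring is deliberately simplistic.
from collections import Counter
import math

# Stand-in for an internal knowledge base (contracts, recipes, research notes, ...)
INTERNAL_DOCS = {
    "pricing_policy": "Enterprise contracts are discounted 12% above 500 seats.",
    "support_runbook": "Priority-1 tickets must be acknowledged within 15 minutes.",
    "brand_guidelines": "Product copy uses sentence case and avoids superlatives.",
}

def score(query: str, text: str) -> float:
    """Crude keyword-overlap relevance score; a real system would use embeddings."""
    q = Counter(query.lower().split())
    t = Counter(text.lower().split())
    overlap = sum((q & t).values())
    return overlap / math.sqrt(len(t) + 1)

def build_prompt(query: str, top_k: int = 2) -> str:
    """Select the most relevant internal snippets and place them ahead of the question."""
    ranked = sorted(INTERNAL_DOCS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    context = "\n".join(f"- {name}: {text}" for name, text in ranked[:top_k])
    return (
        f"Company context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to whichever model the business has approved.
    print(build_prompt("How quickly do we respond to priority-1 support tickets?"))
```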

We’re already seeing this happen in the wild.

Made to measure vs off the peg

Law and accountancy firms across the world are using the generative AI platform Harvey to support contract analysis, due diligence, litigation and regulatory compliance work. PwC has gone a step further: in March it announced a partnership with Harvey to train its own proprietary AI models, which keep company-specific data segregated. Similarly, Cadbury and Oreo owner Mondelez International built its own generative AI app that can suggest and refine new recipes for researchers looking to uncover new products that will appeal to consumers.

Of course, differentiation is an important motivation to develop or adopt a customized LLM, but security and IP protection are even more critical. Earlier this year, Samsung revealed that confidential data, including source code and meeting notes, had been compromised by employees feeding company information into ChatGPT. Because ChatGPT can retain inputs as training data, any business using the tool risks inadvertently leaking sensitive information to other users. As a general rule of thumb, free tools offer poor levels of data protection – something everybody should be mindful of.

Keen to address these concerns, OpenAI has since launched ChatGPT Enterprise, a play to make its flagship product more palatable to businesses. While it does claim stronger privacy controls, the data is still stored and handled in the US, which may cause issues for UK and European organizations.

Experimenting responsibly

Pragmatists, however, will be keenly aware that implementing AI tools and training a bespoke LLM takes time, while ChatGPT is available right now. They will also be aware that employees are already using generative AI tools in their work in some capacity, and that policing this is a futile task.

Rather than burying your head in the sand – leaving yourself open to being leapfrogged by competitors or exposed to the risks of careless use – let an attitude of ‘responsible experimentation’ guide you. A blanket ban on generative AI tools is unlikely to succeed, so allowing a set of carefully vetted apps is a good start and will mitigate most user-associated risks.

Responsible experimentation also requires careful consultation with legal and data protection teams to develop clear staff guidelines. Don’t shortcut governance and oversight of AI at this stage with a view to putting it in place only once your custom solutions are ready. Bad habits are hard to break, and behaviors that reinforce responsible use of AI tools – and that put the onus of responsibility on employees – must be established as early as possible.

The next few years are set to transform businesses massively across all industry sectors. The organizations that strategically invest in AI and data to deliver long-term growth and value will eclipse those that chase short-term gains. That said, off-the-peg generative AI tools shouldn’t be ignored, particularly where they stand to make a real contribution to efficiency and productivity.

Finding the right balance for your organization is the challenge, and you won’t get there by waiting for the solution to come to you. As with any search engine or chatbot, you only get the right answer by asking the right question. And you only get to the right question by asking a few wrong ones first.

Mike Mason is Chief AI Officer at Thoughtworks.