Employees in most industries are using ChatGPT in their day to day work, but they could be putting businesses at risk

generative ai business use
(Image credit: Shutterstock / thanmano)

New research by Indusface shows that ChatGPT is seeing increased usage across industries despite its usage in the workplace being heavily questioned in recent months.

ChatGPT can be a very useful productivity tool, helping to gather, summarize, and simplify information - but there are a number of issues that could land workers in hot water.

The advertising industry came out on top, with 39% of respondents stating that they actively use ChatGPT at work.

Jack of all trades, but master of none

In the rankings, the legal sector came in a close second, with 38% of respondents using ChatGPT in their work. This is followed by the Arts & Media industry at 33%, both the Information & Communication Technology industry and Construction industry ranking at 30%, with Real Estate & Property, Manufacturing and Call Centers & Customer service all seeing around 29% of respondents using ChatGPT in the workplace.

The Healthcare & Medical industry matched the Government & Defence usage at 28%. Among all industries, the most common use of the generative AI was to write up reports (27%), closely followed by using ChatGPT to translate information (25%), with research purposes not fair behind (17%).

Venky Sundar, Founder and President of Indusface points out that there are a number of troubling issues in the usage of ChatGPT within the workplace stating that, “Specific to business documents the risks are: legal clauses have a lot of subjectivity, and it is always better to get these vetted by an expert.

“The second risk is when you share proprietary information into chatGPT and there’s always a risk that this data is available for the general public, and you may lose your IP. So never ask chatGPT for documentation on proprietary documents including product roadmaps, patents and so on.

Sundar also points out that the use of generative AI and large language models (LLM) have shortened development times across industries, allowing an idea to become a product in a very short amount of time.

“The risk though is that proof of concept (POC) should just be used for that purpose. If you go to market with the POC, there could be serious consequences around application security and data privacy. The other risk is with just using LLMs as an input interface for the products and there could be prompt injections and the risk is unknown there.”

Interestingly, over half (55%) of respondents stated that they would not trust working with another business that used ChatGPT or a similar AI in their day to day work.

More from TechRadar Pro

Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cyber criminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict has a MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.