In a new report published by BlackBerry, 66% of organizations it surveyed said that they will be prohibiting ChatGPT and other generative AI tools in the workplace, while 76% of IT decision-makers agreed that employers are within their rights to control what software workers use for their jobs.
What's more, 69% of those organizations implementing bans said that they would be permanent or long term, such is the risk of harm the tools pose to company security and privacy.
However, there is also a conflict, as just over half (54%) of organizations also acknowledge that powerful AI like ChatGPT could boost productivity, thanks to its ability to accomplish a range of tasks much quicker than a human could.
And whilst IT decision-makers agree employers have the right to ban such tools, 66% also felt that such bans amount to "excessive control" over corporate and BYO devices.
When considering the use of generative AI for cybersecurity purposes, a different picture emerged: 74% were in favor of using such tools for this purpose, perhaps in an effort to combat the use of AI by attackers, since anyone can access these tools, and even those without technical skills can develop and deploy malware with relative ease.
Given the advantages that AI tools like ChatGPT can confer, Shishir Singh, CTO of Cybersecurity at BlackBerry, advises a more measured approach:
“Banning Generative AI applications in the workplace can mean a wealth of potential business benefits are quashed. As platforms mature and regulations take effect, flexibility could be introduced into organizational policies. The key will be in having the right tools in place for visibility, monitoring and management of applications used in the workplace.”
No doubt companies have been spooked by stories of workers leaking sensitive data to ChatGPT - most notably employees at Samsung, who entered information pertaining to confidential meetings and technical data into the Large Language Model. This information now sits on the servers of OpenAI, the developer of ChatGPT, and there is no way for the electronics giant to delete it.
In order to alleviate the fears around private data being leaked, Microsoft is planning a more secure version of the GPT model, which it says will not send company data to the public-facing OpenAI servers.
Lewis Maddison is a Staff Writer at TechRadar Pro. His area of expertise is online security and protection, which includes tools and software such as password managers.