Google warning its own staff about chatbots may be a bad sign

The Google Bard AI chatbot may soon become available on Pixel phones, if code discovered in the Android app is anything to go by. (Image credit: Getty Images)

Despite its massive push to increase its market share in the AI chatbot-verse, Google’s parent company Alphabet has been warning its own staff about the dangers of AI chatbots.

“The Google parent has advised employees not to enter its confidential materials into AI chatbots” and warned “its engineers to avoid direct use of computer code that chatbots can generate,” according to a report from Reuters. These security precautions, which a growing number of companies and organizations have been urging on their workers regarding publicly available chat programs, exist for two reasons.

One is that human reviewers, who have been found to essentially power chatbots like ChatGPT, could read sensitive data entered into chats. The other is that researchers have found AI can reproduce the data it absorbs, creating a leak risk. Google stated to Reuters that “it aimed to be transparent about the limitations of its technology.”

Meanwhile, Google has been rolling out its own chatbot Google Bard to 180 countries and in more than 40 languages, with billions of dollars in investment as well as advertising and cloud revenue from its AI programs. It’s also been expanding its AI toolset to other Google products like Maps and Lens, despite the reservations of some in leadership around the potential internal security challenges presented by the programs. 

The duality of Google 

One reason why Google is trying to have it both ways is to avoid potential business harm. As noted above, the tech giant has invested heavily in this technology, and any major controversy or security slip-up could cost it a huge amount of money.

Other businesses have been setting up similar standards for how their employees interact with AI chatbots on the job. Samsung, Amazon, and Deutsche Bank all confirmed as much to Reuters; Apple did not confirm but has reportedly done the same.

In fact, Samsung outright banned ChatGPT and other generative AI from its workplace after it reportedly suffered three incidents of employees leaking sensitive information via ChatGPT earlier in 2023. This is especially damaging as the chatbot retains any entered data, meaning internal trade secrets from Samsung are now essentially in the hands of OpenAI.

Though it may seem hypocritical, there are plenty of reasons why Google and other companies are being so cautious about AI chatbots internally. I only wish Google would extend that caution to how rapidly it develops and publicly pushes that same technology.

Allisa James
Computing Staff Writer

Named by the CTA as a CES 2023 Media Trailblazer, Allisa is a Computing Staff Writer who covers breaking news and rumors in the computing industry, as well as reviews, hands-on previews, featured articles, and the latest deals and trends. In her spare time you can find her chatting it up on her two podcasts, Megaten Marathon and Combo Chain, as well as playing any JRPGs she can get her hands on.
