Enterprise AI applications are threatening security

Over the past year, AI has emerged as a transformational productivity tool, potentially revolutionizing industries across the board. AI applications, such as ChatGPT and Google Bard, are becoming common tools within the enterprise space to streamline operations and enhance decision-making. However, AI’s sharp rise in popularity brings with it a new set of security risks that organizations must grapple with to avoid costly data breaches.

Generative AI’s rapid uptake

Just two months after its public launch, ChatGPT became the fastest-growing consumer application in history, using generative AI to answer prompts and assist with user requests. With an array of benefits that streamline everyday tasks - suggesting recipes, writing birthday inscriptions, and acting as a go-to knowledge encyclopedia - ChatGPT's wider application and benefit to the workplace was quickly recognized. Today, many employees in offices worldwide rely on generative AI systems to help draft emails, propose calls to action, and summarize documents. Netskope's recent Cloud and Threat Report found that AI app use is increasing exponentially within enterprises across the globe, growing by 22.5% over May and June 2023. At this growth rate, the popularity of these applications will double by 2024.

The hacker’s honeypot

An online poll of 2,625 US adults by Reuters and Ipsos found that 28% of workers regularly use generative AI tools such as ChatGPT during the working day. Unfortunately, after proving itself as a nimble tool for proofing documents and checking code for errors, ChatGPT has become an exposure point for sensitive information as employees cut and paste confidential company content into the platform. The sheer quantity of sensitive information flowing into generative AI systems is hard to ignore: Layer X's recent study of 10,000 employees found that a quarter of all information shared with ChatGPT is considered sensitive.

With ChatGPT recording around 1.43 billion visits in August, it's no surprise that its popularity is attractive to malicious actors, who seek both to leverage LLMs for their own ends and to exploit the hype surrounding them to target victims.

Business leaders are scrambling to find a way to use third-party AI apps safely and securely. Early this year, JPMorgan blocked access to ChatGPT, citing its misalignment with company policy, and Apple took the same path after revealing plans to create its own model. Other companies, such as Microsoft, have simply advised staff not to share confidential information with the platform. There is not yet any strong regulatory recommendation or best practice for generative AI usage, and perhaps the most worrying consequence is that 25% of US workers have no idea whether their company permits the use of ChatGPT.

Many different types of sensitive information are being uploaded to generative AI applications at work. According to Netskope, the most commonly uploaded information is source code: the human-readable text that defines how a computer program works and that usually constitutes corporate intellectual property.

ChatGPT's uncanny ability to review, explain, and even train users on complex code makes this trend unsurprising. However, uploading source code to these platforms is a high-risk activity and can expose serious trade secrets. Samsung faced this exact problem in April this year, when one of its engineers used ChatGPT to check source code for errors, prompting a company-wide ban on the tool.

Common scams

Removing generative AI from company networks comes with its own risks. In this scenario, users are incentivized to turn to third-party 'shadow' applications (those not approved for secure use by the employer) to streamline their workflows. Catering to this trend, a growing number of phishing and malware distribution campaigns have appeared online, seeking to profit from the generative AI hype. In these campaigns, websites and proxies pose as offering free, unauthenticated access to the chatbot. In reality, all user inputs are visible to the proxy operator and are collected for future attacks.

Securing the workplace

Fortunately for enterprises, there is a middle ground that enables AI adoption in the workplace within safe boundaries, built on a combination of cloud access controls and user awareness training.

Firstly, a data loss prevention (DLP) policy and supporting tools should be implemented to detect uploads that contain potentially sensitive information, such as source code and intellectual property. This can be combined with real-time user coaching to notify employees when an action looks likely to breach company policy, giving them an opportunity to review the situation and respond appropriately.
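
To make the idea concrete, here is a minimal sketch of what such a pre-upload check and coaching prompt could look like. It assumes a simple regex-based approach with made-up pattern names; it is illustrative only and does not represent Netskope's or any vendor's actual detection logic.

```python
import re

# Illustrative patterns only; a real DLP engine uses far richer detection.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "source_code": re.compile(r"\bdef \w+\(|\bclass \w+[:(]|#include\s*<"),
}

def classify_upload(text: str) -> list[str]:
    """Return the names of any sensitive-content patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def coach_user(text: str) -> bool:
    """Warn the user in real time and block the upload if it looks sensitive."""
    findings = classify_upload(text)
    if findings:
        print("Warning: this upload appears to contain "
              f"{', '.join(findings)}; please review company policy "
              "before sharing it with an AI app.")
        return False  # block, or ask the user to confirm
    return True  # allow the upload

if __name__ == "__main__":
    sample = "def connect():\n    key = 'AKIAABCDEFGHIJKLMNOP'"
    print(coach_user(sample))  # prints the coaching warning, then False
```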

To lessen the threat of scam websites, companies should scan web traffic and URLs, and coach users to spot cloud- and AI-app-themed attacks.
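
Again as a hedged sketch rather than a production filter, the domain list and brand keywords below are hypothetical examples of how a URL scanner might flag lookalike domains that impersonate popular AI apps:

```python
from urllib.parse import urlparse

# Hypothetical allow-list and brand keywords, for illustration only.
OFFICIAL_AI_DOMAINS = {"chat.openai.com", "openai.com", "bard.google.com"}
AI_BRAND_KEYWORDS = ("chatgpt", "openai", "bard")

def is_suspicious_ai_url(url: str) -> bool:
    """Flag URLs whose hostname invokes an AI brand but is not an official domain."""
    host = (urlparse(url).hostname or "").lower()
    # Anything on (or under) an official domain is considered fine.
    if any(host == d or host.endswith("." + d) for d in OFFICIAL_AI_DOMAINS):
        return False
    # Otherwise, a brand keyword in the hostname is a lookalike red flag.
    return any(keyword in host for keyword in AI_BRAND_KEYWORDS)

if __name__ == "__main__":
    print(is_suspicious_ai_url("https://free-chatgpt-login.example.com"))  # True
    print(is_suspicious_ai_url("https://chat.openai.com/"))                # False
```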

The most effective way to implement tight security measures is to make sure AI app activity and trends are regularly monitored to identify the most critical vulnerabilities for your particular business. Security should not be an afterthought, and with the right care and attention, AI can continue to benefit the enterprise as a force for good.

Ray Canzanese is the Director of Netskope Threat Labs, which specializes in cloud-focused threat research. His background is in software anti-tamper, malware detection and classification, cloud security, sequential detection, and machine learning.