How organizations can mitigate shadow AI without stifling innovation


Shadow AI is on the rise, and it’s causing problems for organizations.

A recent MIT study revealed that over 90% of employees use personal AI tools, yet only 40% of organizations manage official AI usage.

Additionally, IBM's recent report found that 97% of organizations that experienced AI-related security incidents lacked proper AI access controls, underscoring how far governance lags behind adoption.

Steve Povolny

Senior Director of Security Research & Competitive Intelligence at Exabeam.

A great game of tug-of-war has broken out across the cybersecurity landscape: should organizations limit the use of shadow AI, and potentially stifle the creativity and opportunity that come with it, or should they let it run wild and accept the risk of exploitation that comes with it?

Could there be a middle ground that seeks to strike a balance between innovation, visibility, compliance, and enforcement?

Shadow AI: What is it, and what’s the problem?

A big problem is the uncertainty surrounding shadow AI. It has no universal definition, but it generally refers to employees using AI tools or services the company is not aware of to perform business functions.

One reason shadow AI is so difficult to limit is how easy it is to adopt, not just across industries, but in everyday life.

People will always find the easiest way to perform a task. If there is a way that they can adopt technology to do their job more efficiently, they will, even if it isn’t approved by their organization.

The issue is that the creative nature of AI makes it difficult to control. Between the individual and the prompt, there is a lot of grey area in which risks can arise.

Organizations have no way of knowing whether sensitive information is being fed into these tools beyond what the user discloses, nor can they confirm whether the information the AI generates is accurate, or even real.
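To illustrate where that visibility gap sits, here is a minimal, hypothetical sketch of a DLP-style check that flags sensitive content in a prompt before it leaves the network; the pattern names and domains below are assumptions for illustration, not any specific product's rules. Without a checkpoint like this, there is nothing for the organization to inspect at all.

```python
import re

# Hypothetical, illustrative patterns only; a real DLP policy would be far broader.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_host": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def scan_prompt(prompt: str) -> list:
    """Return the names of sensitive-data patterns found in an outbound AI prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

findings = scan_prompt("Summarize the config on build01.corp.example.com, key sk-abcdef1234567890abcd")
if findings:
    print(f"Flagged before leaving the network: {findings}")  # e.g. route to review instead of the AI tool
```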

The dangers of rushed adoption

The ability to adopt new and exciting tech will always come before the ability to understand and control it, and AI displays this on an unprecedented scale.

The exponential growth and spread of AI across the modern cyber landscape have given individuals more control over their own creative expression than at any point in history, and with that comes undeniable opportunity.

However, on the flip side, organizations have implemented and adopted AI without truly understanding it. As a result, the potential for organizational breaches has skyrocketed, and the work and analysis security teams must conduct to mitigate them has become so overwhelming that the response to danger cannot keep pace with the rate of AI growth.

We have to look at how we manage third-party risk, as well as indemnification and contracts. The reason for this is that oftentimes, while an organization may own the AI agent, it’s developed using another company’s software.

Therein lies the question of how much those vendors are willing to help when a problem arises, depending on how much stake they have in the agent or in the potential fallout.

AI Agents: The key to unlocking creative freedom

Additionally, we need a way to create greater visibility into the actions of AI agents. In the past, this has come from measures like network logs, endpoint logs, and data loss prevention strategies. We need to understand the system's inputs and outputs, which identities were involved, and what the context of the situation was when issues arose.
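As a rough sketch of what that visibility could look like in practice, each agent interaction might be written as a structured event capturing input, output, identity, and context. The field names and values here are assumptions for illustration, not an established schema.

```python
import json
import time
import uuid

def log_agent_event(identity: str, agent: str, prompt: str, response: str, context: dict) -> dict:
    """Emit one structured record per AI agent interaction so analysts can later
    reconstruct inputs, outputs, identities, and context."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "identity": identity,   # who (or what service account) drove the interaction
        "agent": agent,         # which AI agent or tool handled it
        "input": prompt,        # what went in
        "output": response,     # what came back
        "context": context,     # e.g. source host, department, data classification
    }
    print(json.dumps(event))    # in practice, ship this to the SIEM instead of stdout
    return event

log_agent_event(
    identity="jdoe@example.com",
    agent="contract-summarizer",
    prompt="Summarize the attached vendor agreement.",
    response="The agreement covers ...",
    context={"host": "laptop-042", "classification": "internal"},
)
```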

On the response side, we need to determine how to quickly identify whether there's a problem. However, response actions need to be updated to address the problems that modern AI agents pose. An AI governance group should be established that is responsible for retraining AI agents so they complete their programmed tasks without creating risk.

This would allow individuals to utilize the creative freedom and convenience that comes from AI, while also protecting organizations from risk of attacks and allowing security teams to rely on the agents to do their tasks without needing to constantly supervise them. Trustworthy, reinforced AI agents make for a more efficient security defense system.

There needs to be an additional response action, one that doesn't exist today, where we retrain, disable, or force relearning of AI agents. There should be a counterpart within the SOC for incident response, and there will need to be business owners responsible for building this structure. Right now, we are at CMMI level one for this process, maybe even zero.
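A minimal sketch of what such a response action might look like, assuming a hypothetical agent registry and severity levels of our own naming, is below; it simply disables an agent on a high-severity alert and queues it for retraining otherwise.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    enabled: bool = True
    flags: list = field(default_factory=list)

def respond_to_alert(agent: Agent, severity: str) -> str:
    """Illustrative response playbook: disable on high severity, queue retraining otherwise."""
    if severity == "high":
        agent.enabled = False                     # pull the agent out of production immediately
        agent.flags.append("disabled_pending_review")
        return "disabled"
    agent.flags.append("retraining_queued")       # lower severity: force relearning on a schedule
    return "retraining_queued"

summarizer = Agent(name="contract-summarizer")
print(respond_to_alert(summarizer, severity="high"))   # -> "disabled"
```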

Insider threat analysts will depend heavily on these adjustments. If we can build a structure and develop a process for handling the information overload that shadow AI has created, insider threat analysts will be better suited to handle threats before they become devastating to organizations.

Having a clear and easily enforceable AI usage policy, with known and vetted tools, and a process to review, test, and implement new AI agents or tools with engineering and security reviews is the only way to achieve an appropriate level of risk mitigation. It is vital that this process be made simple and transparent. Otherwise, employees will always look for ways to circumvent it.
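The enforcement side of such a policy can be kept very simple. As a hedged sketch, with hypothetical tool names and no particular product in mind, an allowlist plus a single visible review queue might look like this:

```python
# A minimal sketch of an enforceable usage policy: a vetted-tool allowlist plus one
# place where new tools enter the engineering/security review pipeline.
APPROVED_TOOLS = {"contract-summarizer", "code-review-assistant"}
PENDING_REVIEW = set()

def request_tool(tool: str) -> str:
    """Approve vetted tools immediately; route everything else into review."""
    if tool in APPROVED_TOOLS:
        return "approved"
    PENDING_REVIEW.add(tool)   # a visible queue, so employees aren't tempted to go around it
    return "submitted for review"

print(request_tool("contract-summarizer"))   # -> approved
print(request_tool("personal-chatbot"))      # -> submitted for review
```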

The path forward for AI usage requires understanding. Organizations can’t control what they don’t comprehend, and too many have prioritized rapid deployment over visibility and governance. If we can strike a balance between innovation and security, organizations can maximize their safety from outside threats while allowing their employees the freedom to innovate and change the world.
