How AI can make or break your security - AMD business expert on the risks and rewards of AI in security and governance

(Image credit: Shutterstock)

There is both an appetite for and a fear of AI within businesses. IT decision-makers (ITDMs) believe AI is fundamental to enforcing security and governance, but they also see it as a threat to those same policies.

So how can IT leaders map out their journey to AI and keep their ecosystems secure, while also defending against an ever-widening threat landscape?

Matthew Unangst, senior director of commercial client and workstation at AMD, understands the trepidation felt by IT professionals across the security industry. We spoke to him to dig deeper into these challenges and explore possible solutions.

Security duality

A recent AMD survey found 75% of UK ITDMs believe AI is now an integral tool when it comes to administering security and governance policies, but 70% also see AI as a threat.

The human factor remains one of the biggest threats to cybersecurity, from phishing to disgruntled employees. It is also a significant contributing factor in governance violations, with employees often inputting sensitive information while using AI tools such as ChatGPT. As a result, security professionals face a growing number of both internal and external threats.

IT professionals are still largely optimistic about AI. “Largely, right now, it comes in the form of better collaboration experiences or maybe enhanced employee productivity,” Unangst tells us.

“But one of the other things that came out of those surveys was a strong desire from these IT professionals to further enhance and invest in the AI infrastructure. I think the benefits we're seeing today are good, but they're a starting point.

“And as these AI solutions continue to mature and become more powerful over the next few years, I'm expecting that we're going to see significant gains and benefits around employee productivity, having better data to make decisions with, helping us improve business processes and improve the efficiency of our operations, as well as other things.”

Risk versus reward

One of the difficulties security professionals face today is justifying the ROI of AI, particularly where the human factor is concerned. Deploying an AI solution also demands significant training in how to use it, governance surrounding its use, and assurance that each organization's usage is fully compliant with national and international regulations.

According to Unangst, “They have to pair that with the right training and the right enablement of their employees so that their employees know how to identify stuff that doesn't look correct, how to identify potential hallucinations or inaccuracies of data, how to use the information that comes out of these tools the right way, as opposed to just kind of blindly taking that information and moving forward with it.

“I think there's a broad range of risks there, ranging from something that's inaccurate that just kind of gets copied and pasted and used and it could be a publication or some kind of internal memo or, you know, incorrect data assumptions that could drive bad decisions. So certainly I expect that the industry and the capability of these tools and models will continue to evolve and improve.

“As we think about the AI-based security solutions, that is just another layer that integrates into the broader security solution. And so, I think it's a space that is relatively new, but as we think about some of the security agents and applications that are typically run on PCs and devices, certainly those are prone to opportunity for either improved effectiveness or running more efficiently, utilizing some of the AI capabilities.

“And then as we think about more intelligent PCs that can identify intrusions or detect any kind of malicious activity or something like that. There are models that are being developed and explored today where some of those models could run on an integrated AI inference engine on a device, and very effectively identify some of those attacks and then send the system into the proper state to go deal with those.

“So, very much a space that I'll say is at the beginning stages of development, but certainly an area that we anticipate is going to become a bigger part of the broader security solution over the coming years.”

Executive communications

In order to boost their budgets, IT professionals need to be able to communicate the risks they are facing to the executive level. However, as security is an area that revolves around preventing loss rather than generating profit, many security teams are facing difficulties in terms of skills and staffing and are being forced to do more with less. So how can they justify investment in AI solutions?

“Obviously when we think about how a CISO is going to communicate this and just how they're going to think about it more broadly, it's critical to balance the discussion around the opportunity with the risks and the need to maintain a strong focus on security.

“You have to make sure you're showing the value that these tools and capabilities bring to the table and how it's going to help businesses become more productive or improve operations. AI is this huge space right now, and so I think the key in terms of these communications and how you frame this is to make sure that you're talking more around the specific instances that you're deploying, the value it delivers, and how you're going to manage risk, security or any other business considerations around that specific deployment.”

Finding the balance

After the explosion of AI capabilities we have seen in the past few years, it is unsurprising that international regulation has been limping behind, struggling to keep pace with emerging innovations. Some strides have been made through international commitments and the UK AI Safety Summit, but developers worry that regulation could stifle innovation.

“There absolutely has to be a balance there, right? From an AMD perspective, we are very committed to executing our AI roadmap and our product portfolio in a responsible manner. We've joined a number of organizations across the industry that are focused on making sure that collectively we all do that.

“You have to make sure that these solutions are secure. You have to make sure that they are following a strong set of moral and ethical guidelines. So, I believe that there is the right balance there. A discussion that collectively we need to have as an industry is, how do you push the innovation in this space, but make sure that we don't cross lines, until we have the right guardrails and checkpoints there?

“And that's not something that I think any one company or organization is going to be able to do by themselves. It's going to have to be a partnership across the broader ecosystem.”


Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cyber criminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict holds an MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.