'Understanding leads to better decision making - faster decision making - which is going to benefit all of us' - Cisco's AJ Shipley on using AI & LLMs for security incident communication


At Cisco Live 2024 in Amsterdam, the company unveiled its latest range of solutions for networking, cloud, and cybersecurity.

During his keynote, Cisco’s EVP & GM for the Security and Collaboration Business Units, Jeetu Patel, highlighted that fragmented solutions are a thing of the past, and that the cloud environment requires an integrated platform built on a “zero trust, with zero friction” basis.

But one of the key issues facing SOCs and CISOs today is not one of cyber defense, but of communication - in particular, communicating the needs, concerns, and risks facing security teams to executives and the C-suite.

This is an area that AJ Shipley, Cisco’s VP of Product for Threat Detection & Response, is particularly passionate about, and one with excellent use cases for artificial intelligence and large language models (LLMs).

Opening up the domain of security experts

Traditionally, when responding to a threat or breach, security teams have to translate highly technical indicators, metrics, and timestamps into a digestible and succinct report for the executive level, so that leadership can understand exactly how the business has been impacted.

This is a time-consuming process, especially in the immediate aftermath of a breach, when a security team’s time would be better spent on incident response and data recovery. A fast response is an effective response, and this is where AI and LLMs can save crucial time.

Cisco’s Extended Detection and Response (XDR) platform provides exactly that, Shipley explains, as it can “take that same set of technical indicators and timestamps - different, what we call in the industry, tactics, techniques, and procedures (TTPs): credential dumping, a push bombing attack, or lateral movement.

“We're able to take those, feed them into a large language model and say, ‘in four paragraphs, tell me what happened,’ and it spits out a very human-readable four paragraphs, based on the timestamps.”

Shipley explains that the LLM can identify where an incident occurred, which machines communicated with each other and over what connections, and which privileges were escalated along the way, providing the security team in seconds with a report that might otherwise have taken hours.
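
Cisco has not published the internals of how XDR assembles these prompts, but as a rough sketch of the pattern Shipley describes, the example below flattens a hypothetical list of timestamped TTP events into a plain-text timeline and asks an LLM to summarize it for a non-technical audience. The OpenAI client, model name, event fields, and prompt wording are all illustrative assumptions, not Cisco’s implementation.

```python
# Illustrative sketch only - not Cisco XDR's actual implementation.
# Assumes the OpenAI Python client (pip install openai) and an API key in OPENAI_API_KEY.
from openai import OpenAI

# Hypothetical technical indicators: TTPs with timestamps and affected hosts.
incident_events = [
    {"time": "2024-02-06T09:12:03Z", "ttp": "T1003 credential dumping", "host": "FIN-SRV-02"},
    {"time": "2024-02-06T09:14:41Z", "ttp": "T1110 MFA push bombing", "host": "VPN-GW-01"},
    {"time": "2024-02-06T09:22:17Z", "ttp": "T1021 lateral movement (SMB)", "host": "FIN-SRV-02 -> HR-WS-11"},
]

# Flatten the indicators into a plain-text timeline for the prompt.
timeline = "\n".join(f'{e["time"]}  {e["host"]}  {e["ttp"]}' for e in incident_events)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would do here
    messages=[
        {"role": "system", "content": "You summarize security incidents for a non-technical executive audience."},
        {"role": "user", "content": f"In four paragraphs, tell me what happened:\n{timeline}"},
    ],
)
print(response.choices[0].message.content)  # human-readable incident narrative
```

In practice the events would come straight from detection telemetry rather than being hand-written, and an analyst would review the generated summary before it reaches the board.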

A natural concern for security teams is that the LLM could simplify highly technical language to the point of compromising the accuracy of its description, but Shipley assures that a non-security audience “can read it and they will know with a very, very high degree of precision, exactly what happens.”

The metrics involved in the security sector are essential for understanding how and where an attack occurred, but highly specific industry terminology doesn’t communicate well to those outside the field.

“I've spent my entire career in the security space. For too long it's kind of been the domain of just the security experts. It's almost been like this black magic, if you will, or this very kind of secretive club that you have to have a secret handshake to get into.

“I think ultimately at the end of the day, understanding leads to better decision making - faster decision making - which is going to benefit all of us.”


Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cyber criminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict has an MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.