How AI can support cybersecurity leaders

About the author

Sridhar Muppidi is the CTO for IBM Security

In recent years, cybercrime has reached epidemic proportions, with far-reaching impacts across the business world. Cyberattacks pose a monumental threat on two fronts: they have become far more sophisticated, and they have increased exponentially in volume. Attacks are also better executed than ever before: the UN estimates that 80 per cent of all cyberattacks are carried out by technologically advanced criminal organisations that share data, tools and expertise.

By 2021, cybercrime is estimated to cost the global economy over $2 trillion. This makes it imperative for companies to make concerted efforts to improve their cybersecurity health and to evolve from a compliance-focused approach to a threat-aware, risk-based strategy. However, cybersecurity leaders currently face three major challenges when it comes to defending data.

Skills, Insight & Speed

Skill shortages are a huge issue hindering the war against cybercrime. While attacks by cybercriminals grow more elaborate and sophisticated, the tools needed to combat them are also increasing in complexity. This leaves the cybersecurity industry with a skills gap: there simply aren’t enough people fully equipped to engage with and manage solutions. But the deficit goes beyond recruiting qualified people in sufficient numbers. Once in a cybersecurity role, it is a real challenge to keep skill sets refreshed, relevant, and current with the evolving cybersecurity landscape. 

Another challenge professionals face when making strategic security decisions is context. Unlocking and leveraging valuable technical and business insights is integral to making smart, quick business choices. But as the cybersecurity landscape grows bigger and more nebulous, the industry is struggling to absorb and utilize the necessary context surrounding each issue. Put simply, we cannot access (let alone process) enough data before the landscape changes again.

Speed is the third and final hurdle cybersecurity professionals are struggling to clear. Cyberattacks unfold ever faster, so the demand for quick responses grows more critical. In certain US states, the law stipulates a four-hour breach notification timeframe; GDPR requires notification within 72 hours. Failing to act quickly on cybersecurity incidents carries real business consequences.

Predictive Analytics

AI offers a solution to these challenges. Machine learning and AI-enabled analytics greatly improve threat detection time and accuracy. They identify anomalous behavior to detect fraud and threats, both external and internal, in real time, arming the security team with the information it needs to make decisions while minimizing the impact on user experience, such as logging into a banking site.

There are already a number of applications that include some variation of analytics. Predictive analytics identifies network anomalies, detects malware, and analyses user behavior patterns to find risky users within an enterprise, potentially thwarting fraud or insider threats.
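To make the anomaly-detection idea concrete, here is a minimal sketch using a simple z-score test on login volumes. The data and the three-standard-deviation-style threshold are entirely hypothetical; production systems use far richer features and learned models rather than a single statistic.

```python
from statistics import mean, stdev

def flag_anomalies(daily_logins, threshold=2.0):
    """Flag days whose login count deviates more than `threshold`
    standard deviations from the historical mean (a z-score test)."""
    mu = mean(daily_logins)
    sigma = stdev(daily_logins)
    return [i for i, count in enumerate(daily_logins)
            if sigma > 0 and abs(count - mu) / sigma > threshold]

# Hypothetical login counts: the final day spikes far above the baseline.
history = [102, 98, 105, 99, 101, 97, 103, 450]
print(flag_anomalies(history))  # [7]
```

The same pattern (baseline, deviation, threshold) underlies more sophisticated detectors; only the model behind the score changes.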

Lesser-known use cases can be found in application security. Using machine learning, cybersecurity professionals can dramatically reduce the proportion of false positives generated by application security testing. Applying AI to behavioral biometrics, we can better identify a user based on keyboard strokes, mouse movements or use of their mobile device. This not only improves security but also provides a smoother, frictionless user experience.
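A toy illustration of the behavioral-biometrics idea: enrol a user from a few samples of inter-keystroke timings, then accept or reject a new attempt by how far it deviates from that profile. The timing values and tolerance are made up for illustration; real systems model many more signals statistically.

```python
def enroll(samples):
    """Build a per-user profile: mean inter-key interval per key pair."""
    n = len(samples)
    dims = len(samples[0])
    return [sum(s[i] for s in samples) / n for i in range(dims)]

def matches(profile, attempt, tolerance=0.05):
    """Accept the attempt if its mean absolute timing deviation from
    the enrolled profile is within `tolerance` seconds."""
    dev = sum(abs(a - p) for a, p in zip(attempt, profile)) / len(profile)
    return dev <= tolerance

# Hypothetical inter-keystroke intervals (seconds) for one passphrase.
enrolled = enroll([[0.12, 0.20, 0.15], [0.11, 0.22, 0.14], [0.13, 0.21, 0.16]])
print(matches(enrolled, [0.12, 0.21, 0.15]))  # genuine user: True
print(matches(enrolled, [0.30, 0.05, 0.40]))  # likely impostor: False
```

Because the check runs passively on signals the user already produces, it adds security without adding friction.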

Context is King

Analytics is also used to consolidate intelligence with technical and business context, helping to make sense of the deluge of information, make timely decisions and define priorities. Humans consume and process information through reading, watching and participating in discussions. In a similar manner, AI can be used to train computers in the “language of security” using techniques such as large-scale natural language processing (NLP). This greatly helps in harvesting cybersecurity information so that security analysts can work faster and more efficiently.
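The “language of security” idea can be sketched at its very simplest: tokenise an incident report and map security terms to threat categories. The vocabulary table here is hand-written and purely illustrative; a production NLP system learns such associations from millions of documents rather than a lookup table.

```python
import re
from collections import Counter

# Toy vocabulary mapping security terms to threat categories.
VOCAB = {
    "phishing": "social-engineering", "credential": "social-engineering",
    "ransomware": "malware", "trojan": "malware", "payload": "malware",
    "injection": "application", "xss": "application",
}

def classify(report):
    """Tokenise an incident report and vote on the dominant category."""
    tokens = re.findall(r"[a-z]+", report.lower())
    votes = Counter(VOCAB[t] for t in tokens if t in VOCAB)
    return votes.most_common(1)[0][0] if votes else "unknown"

print(classify("Users received a phishing email harvesting credential data"))
# social-engineering
```

Even this crude routing shows why teaching machines security vocabulary pays off: unstructured reports become searchable, sortable signals.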

For example, IBM trained Watson for Cybersecurity using billions of structured elements and millions of unstructured documents. A knowledge graph was compiled from the harvested information to facilitate contextual reasoning. One company that employed Watson for Cybersecurity managed to reduce time spent on investigative tasks by 97 per cent.

The Trusted Advisor

AI and analytics enable security orchestration to automatically block threats, correct problems, respond to attacks and handle low-level alerts based on prior examples or similar historical threats. But it doesn’t stop there: in addition to responding faster, AI can act as a trusted advisor, capable of offering best-practice recommendations. For example, when a risky user is detected, AI can take automatic action by requiring the user to re-verify or by suspending the account. It can also shorten the access certification process by providing guidance on risk, acting automatically on low-risk certifications and allowing security personnel to focus on high-risk ones.
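The risk-tiered response described above can be sketched as a simple policy function. The score bands and action names are hypothetical placeholders; real deployments tune thresholds to their own policy and risk models.

```python
def decide(risk_score):
    """Map a user risk score (0-1) to an access action.
    Thresholds are illustrative, not prescriptive."""
    if risk_score >= 0.8:
        return "suspend"        # high risk: block access, alert an analyst
    if risk_score >= 0.4:
        return "step-up-auth"   # medium risk: require extra verification
    return "allow"              # low risk: certify automatically

for score in (0.1, 0.55, 0.9):
    print(score, decide(score))
```

The value of the tiering is that analysts only see the top band, while the bulk of low-risk decisions clear themselves.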

AI technology enables a continuous feedback loop between the human analysts in the trenches and the machine learning models, helping with threat disposition and prioritizing the most important alerts.
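One minimal way to picture that feedback loop: track, per alert type, how often analysts confirm alerts as true positives, and rank new alerts by that rate. The alert names and the Laplace-smoothed scoring are illustrative assumptions, not a description of any particular product.

```python
from collections import defaultdict

class AlertPrioritizer:
    """Keep a running true-positive rate per alert type so that
    analyst verdicts continuously retune the triage queue."""
    def __init__(self):
        self.seen = defaultdict(int)
        self.confirmed = defaultdict(int)

    def record_verdict(self, alert_type, is_true_positive):
        self.seen[alert_type] += 1
        self.confirmed[alert_type] += int(is_true_positive)

    def priority(self, alert_type):
        # Laplace smoothing: unseen alert types start at 0.5, not 0.
        return (self.confirmed[alert_type] + 1) / (self.seen[alert_type] + 2)

p = AlertPrioritizer()
for verdict in (True, True, False):
    p.record_verdict("beaconing", verdict)
p.record_verdict("port-scan", False)
print(p.priority("beaconing") > p.priority("port-scan"))  # True
```

Each analyst verdict nudges future rankings, which is exactly the human-plus-machine loop the paragraph describes.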

Managing Bias

With great power comes great responsibility. Inherently biased AI programs can pose serious problems for cybersecurity. Bias can occur in three areas — the code, the data and the people who design AI systems. 

A biased program may focus on the wrong priorities and miss the real threats. A biased training dataset gives the AI a partial view of the problem and contributes to incorrect outcomes. Likewise, if the people who design the program come from similar cultures or backgrounds and think alike, cognitive diversity will be low, producing one-dimensional results.
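One simple, concrete check on the data dimension of bias is to measure how skewed a training set's label distribution is. The labels and proportions below are hypothetical; a real audit would look at many more dimensions than class balance alone.

```python
from collections import Counter

def class_balance(labels):
    """Report each class's share of a training set; a heavily skewed
    distribution is one warning sign of a biased dataset."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical training labels: benign traffic dwarfs attack samples.
labels = ["benign"] * 95 + ["attack"] * 5
print(class_balance(labels))  # {'benign': 0.95, 'attack': 0.05}
```

A detector trained on such data sees attacks so rarely that it can score well while missing them, which is precisely the partial view the paragraph warns about.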

Hence, companies need AI systems that are diverse and unbiased to deal with diverse cybersecurity threats and actors. 

Final Word

AI is not a one-stop fix. Cybersecurity professionals should bear in mind that attackers have also grown wise to the power of AI, and are exploiting it to overcome security systems. As we continue to develop AI-driven solutions, we must remain vigilant about both the beneficial and harmful uses of AI. As an industry, we have to consider appropriate best practices to protect against the malicious applications of AI by cybercriminals.



Sridhar Muppidi

Sridhar Muppidi is the CTO at IBM Security and a proven technical leader with 20 years’ experience in security, software product development and security solutions architecture across a number of industry verticals.