The cybersecurity arms race: AI vs. AI

(Image credit: TheDigitalArtist / Pixabay)

Artificial intelligence (AI) is transforming business operations through automation, data analysis, and predictive capabilities. As AI advances, organizations must weigh both its benefits and its risks, especially in cybersecurity, where AI is emerging as a game-changing tool for defenders and attackers alike.

This article explores that double-edged sword and how cybersecurity leaders can benefit from AI while safeguarding against the threats the technology now poses.

The role of AI in cybersecurity

Recent research indicates that adoption of AI for cybersecurity is accelerating, with the majority of IT decision-makers planning investments in AI-driven security solutions over the next two years. A study by BlackBerry found that 82% of IT decision-makers surveyed intend to allocate budget to AI-driven security by 2025, with nearly half aiming to do so by the end of 2023. These findings highlight the increasingly vital role of AI technologies in combating cyber threats.

AI amplifies cyber defenses through rapid pattern recognition and predictive capabilities. By automating threat detection and response, AI systems can sift through vast volumes of data to identify anomalies in real time. With continued advances, AI promises to be a transformative force in cybersecurity's future.
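
To make that idea concrete, here is a minimal sketch of automated anomaly detection under stated assumptions: it uses scikit-learn's IsolationForest on a handful of hypothetical session features. The feature names, values, and contamination setting are illustrative only, not a production detection pipeline.

```python
# A minimal sketch of AI-assisted anomaly detection, assuming scikit-learn and
# NumPy are installed. The feature columns and sample values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [bytes_sent, bytes_received, failed_logins]
sessions = np.array([
    [1_200,  3_400, 0],
    [1_150,  3_500, 0],
    [1_300,  3_300, 1],
    [98_000,   120, 7],   # unusually large upload with repeated login failures
])

# Train an unsupervised model on observed traffic and flag outliers.
model = IsolationForest(contamination=0.25, random_state=42)
labels = model.fit_predict(sessions)  # -1 = anomaly, 1 = normal

for features, label in zip(sessions, labels):
    if label == -1:
        print(f"Potentially anomalous session: {features.tolist()}")
```

In practice, such a model would be retrained continuously on live telemetry and paired with analyst review, since unsupervised detectors inevitably generate false positives.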

Additionally, by leveraging intelligent automation, sophisticated analytics, and robust threat intelligence feeds, AI has the potential to provide comprehensive coverage across the entire vulnerability lifecycle - from exposure to exploitation. This type of proactive, analytics-based approach enabled by AI will become increasingly indispensable in the face of a rapidly expanding threat landscape.


The dark side of AI

While AI brings myriad benefits for businesses, it is also powering a new generation of highly targeted, stealthy threats. Tools such as WormGPT and FraudGPT, for example, let novice hackers purchase software that helps generate malware code and provides AI-powered assistance with other facets of cybercrime. Similarly, large language models such as GPT-4 can study communication patterns to convincingly impersonate people online, heightening the already prevalent concern that cybercriminals will use them to craft fraudulent emails and messages that appear authentic and are nearly impossible to distinguish from the real thing.

Currently, threat actors are primarily using AI's natural language processing capabilities to create hyper-realistic and highly personalized phishing emails. In fact, Trustwave's most recent report analyzing cybersecurity threats in the hospitality sector found that threat actors are using large language models (LLMs) to develop more sophisticated social engineering attacks, because of their ability to create highly personalized and targeted messages.

These emails may contain malicious links or attachments, primarily HTML attachments, which are mainly used for credential phishing, redirectors, and HTML smuggling. Notably, 33% of these HTML files employ obfuscation as a means of defense evasion. Phishing attacks are expected to become more prevalent and harder to catch as AI capabilities grow.

Another concerning trend is the rise of deepfake technology, which can create fake audio or video used to dupe customers with the appearance of authenticity. Additionally, hackers are leveraging AI's pattern recognition capabilities to uncover vulnerabilities in computer systems. By analyzing software code and security systems, AI can pinpoint the flaws and gaps that enable cyberattacks. The exposure of such weaknesses poses significant risks, as attackers can then develop customized malware or intrusion methods tailored to the vulnerabilities AI has identified.
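
As a rough illustration of how defenders might triage the obfuscated HTML attachments described above, the sketch below scores an attachment against a few patterns commonly associated with HTML smuggling and credential phishing. The pattern list and threshold are assumptions made for this example, not a reconstruction of Trustwave's detection logic.

```python
# Illustrative heuristic for triaging suspicious HTML email attachments.
# The patterns and score threshold are assumptions chosen for this sketch,
# not a vetted detection rule set.
import re

SUSPICIOUS_PATTERNS = {
    "large_base64_blob": re.compile(r"[A-Za-z0-9+/]{500,}={0,2}"),
    "js_decode_call":    re.compile(r"\b(atob|unescape|fromCharCode)\s*\(", re.I),
    "auto_redirect":     re.compile(r"http-equiv\s*=\s*['\"]?refresh", re.I),
    "password_field":    re.compile(r"type\s*=\s*['\"]?password", re.I),
}

def score_html_attachment(html: str) -> list[str]:
    """Return the names of suspicious indicators found in the attachment."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(html)]

if __name__ == "__main__":
    sample = '<script>document.write(atob("' + "QUJD" * 200 + '"))</script>'
    hits = score_html_attachment(sample)
    if len(hits) >= 2:
        print(f"Quarantine for review; indicators: {hits}")
    else:
        print(f"Indicators found: {hits}")
```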

Looking ahead

With the latest generation of AI to contend with, cybersecurity must become more proactive, intelligent, and efficient. By optimizing security processes such as threat response and threat hunting, and by analyzing massive datasets, AI holds immense potential to bolster cyber defenses. When it comes to safeguarding against the risks posed by AI, however, cybersecurity experts must also consider the following:

  • Evaluating an organization's security solutions with generative AI and LLMs in mind. This can include choosing security tools or partners that can detect AI-generated threats such as advanced phishing.
  • Creating robust internal policies and employee training for proper data usage and data sharing to help minimize the risk of data breaches.
  • Possibly instituting an internal AI infosec working group across relevant teams (Legal, Privacy, IT, and so on) to handle governance and data-sharing guidelines.
  • Implementing robust security measures, including encrypting sensitive data (a minimal sketch follows this list), enforcing strong access controls, regularly updating and patching systems, and using secure coding practices.
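
To ground the final item on the list, here is a minimal sketch of encrypting sensitive data at rest. It assumes the open-source cryptography package is available; the file-based key storage is purely illustrative, and a real deployment would keep keys in a secrets manager or HSM.

```python
# A minimal sketch of encrypting sensitive data at rest, assuming the
# `cryptography` package (any vetted library or managed KMS would do).
# Key handling here is simplified for illustration only.
from cryptography.fernet import Fernet

# Generate and store a symmetric key once (illustrative local file storage).
key = Fernet.generate_key()
with open("data.key", "wb") as key_file:
    key_file.write(key)

fernet = Fernet(key)

# Encrypt a sensitive record before writing it to disk or a database.
record = b"customer_email=jane.doe@example.com"
token = fernet.encrypt(record)

# Decrypt only when the data is actually needed.
assert fernet.decrypt(token) == record
print("Record encrypted and round-tripped successfully.")
```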

Security consultants must balance using AI and machine learning to provide advanced threat detection, predictive analytics, and automated response capabilities with safeguarding against those same capabilities when bad actors turn them on an organization.

AI enables consultants to analyze vast amounts of data in real time, identify patterns, and detect anomalies that could indicate a potential security threat, allowing them to anticipate and mitigate risks before they cause significant damage. However, consultants must also keep in mind that hackers can use the same capabilities, which means conducting thorough vulnerability scans and closing any gaps an AI system could find before an attacker does.
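
One concrete way to approach that kind of scanning is to check software dependencies against a public vulnerability database. The sketch below queries the OSV.dev API for a single package version; the package name and version are placeholders, and a full scan would iterate over an organization's entire dependency manifest.

```python
# Illustrative dependency check against the public OSV.dev vulnerability
# database. The package name and version below are placeholders; assumes the
# `requests` package is installed.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[dict]:
    """Return any advisories OSV.dev knows about for one package version."""
    payload = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    response = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json().get("vulns", [])

if __name__ == "__main__":
    for advisory in known_vulnerabilities("jinja2", "2.4.1"):
        print(advisory["id"], "-", advisory.get("summary", "no summary available"))
```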

Additionally, consultants should consider how AI can be used to simulate cyberattacks, which can provide valuable insights for a robust incident response strategy. This ensures that, in the event an organization falls victim to an AI-powered cyberattack, resources are in place to mitigate the threat and limit the fallout from a breach.

There is no doubt that artificial intelligence is transforming the cybersecurity landscape. While AI enables organizations to detect threats and streamline security processes more efficiently than ever before, it also arms cybercriminals with new capabilities. As AI proliferates, the cybersecurity community must work diligently to maximize its benefits while minimizing risks.

With AI developing as quickly as it is, the threat landscape is constantly evolving. A trusted cybersecurity partner can ensure that experts are continuously monitoring how threat actors apply AI for malicious purposes while developing adaptive defense strategies that keep organizations safe from the technology's darker side.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Ed Williams, EMEA Director of SpiderLabs at Trustwave.