"AI is giving the bad guys the upper hand more than the good guys" — why ThreatLocker CEO Danny Jenkins thinks zero trust could be the answer

ThreatLocker CEO Danny Jenkins at Zero Trust World 2024
(Image credit: ThreatLocker)

Artificial intelligence poses a real threat to the cybersecurity landscape while offering defenders little in return, ThreatLocker CEO Danny Jenkins has declared.

Speaking to TechRadar Pro at the company’s recent Zero Trust World event, Jenkins emphasized that AI primarily empowers cybercriminals by helping them produce new and unique malware, while endpoint protection solutions have yet to see the same sort of benefit.

At the same time, Jenkins raised concerns about the ethics of using generative AI tools, and in particular about how poor tools like ChatGPT and Gemini (formerly Bard) can be at protecting users and potential victims.

AI cybersecurity paradox

Jenkins described a scenario in which he and his team asked a popular GenAI chatbot to produce malware for a reverse shell. The chatbot initially denied the request, but when the team explained that it was for research purposes in the field of cybersecurity, it produced and shared the malicious code. The episode is alarming because it illustrates how easy it can be for those with little to no technical knowledge to create malware, significantly increasing the threat.

Turning attention to detection and response, we posited that cybersecurity companies could also benefit from the productivity gains unlocked by AI. Jenkins revealed that, although the technology has been hitting headlines since the public preview launch of ChatGPT, companies like ThreatLocker have been using various algorithms and machine learning for years to improve threat detection, hinting that it’s not the silver bullet many of us had hoped for.

Ultimately, Jenkins argued that AI’s impact is more pronounced on the criminals’ side, and that the ease with which AI can generate novel malicious code could quickly outpace defenders’ ability to block threats. Although it wasn’t built specifically to protect customers against AI-generated threats, Jenkins affirmed that a zero-trust, default-deny approach provides such protection by default.

Beyond AI, Jenkins also expressed concern about the growing trend of nation-states stockpiling vulnerabilities, citing Russia as a notable example. Looking ahead, he noted that the weaponization of software poses a serious risk of supply chain attacks and backdoors.

With AI-driven threats surging, Jenkins positioned default-deny solutions as a next-generation product category, surpassing the capabilities of standard endpoint detection and response software by taking a ‘trust nobody and nothing’ approach.
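To make the idea concrete, the sketch below shows what a default-deny execution check can look like in principle: nothing runs unless it appears on an explicit allowlist, so even never-before-seen, AI-generated malware is blocked by default. This is a minimal, hypothetical Python illustration of the general allowlisting concept, not ThreatLocker’s actual implementation; the file names and hashes are placeholders.

```python
# Minimal sketch of a default-deny (allowlist) execution check.
# Hypothetical illustration of the general concept only -- not
# ThreatLocker's actual product logic. Placeholder names and hashes.

import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 hashes for approved binaries.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}


def file_sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def is_execution_allowed(path: Path) -> bool:
    """Default-deny: only explicitly allowlisted binaries may run.

    Anything not on the list -- including brand-new, AI-generated
    malware that no signature database has seen -- is denied.
    """
    return file_sha256(path) in APPROVED_HASHES


if __name__ == "__main__":
    candidate = Path("example_app.bin")  # placeholder path
    if candidate.exists() and is_execution_allowed(candidate):
        print(f"ALLOW: {candidate}")
    else:
        print(f"DENY (default): {candidate}")
```

The contrast with traditional endpoint detection is that this model never has to recognize a threat to stop it; unknown code is simply untrusted until someone approves it.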
