AI will help fill the cybersecurity skills gap in 2024

The UK’s growing cybersecurity skills gap poses a potential threat to businesses and society at large. According to a recent government report, while security-related job postings rose by 30% year on year, employers found more than a third of positions hard to fill. Looking at existing workforces, the report also found that 50% of all UK businesses have a basic cybersecurity skills gap, meaning the people in charge of cybersecurity in those businesses lack the confidence to carry out the basic tasks laid out in the government-endorsed Cyber Essentials scheme. This leaves business leaders with a question: what role will AI play in tackling the cybersecurity skills gap, and will it alleviate or exacerbate the situation?

Mark Woods

Chief Technical Advisor at Splunk.

AI opens a Pandora’s box of security concerns...

AI is a powerful tool that can be turned to malicious ends by bad actors. As the technology becomes increasingly prevalent, we could see a rise in weaponized AI in the form of more sophisticated deepfakes, lifelike impersonations, more effective social engineering attacks, and evasive malware. Cybercriminals are also experimenting with AI poisoning: tampering that deliberately alters small elements of a model’s input or training data in order to skew its decision-making.
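To make the poisoning idea concrete, here is a minimal sketch in NumPy. It is purely illustrative (the data, class labels, and the simple nearest-centroid classifier are all invented for this example, not drawn from any real attack): flipping the labels of a small slice of training points drags one class centroid toward the other cluster and silently changes the model’s verdict on a borderline input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: class 0 clusters around (0, 0), class 1 around (4, 4).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(4.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid(X_train, y_train, x):
    """Classify x by whichever class centroid is closer."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

x_new = np.array([2.5, 2.5])            # borderline input, nearer class 1
clean_pred = nearest_centroid(X, y, x_new)

# Poisoning: flip the labels of 25 class-1 training points to class 0.
# This drags centroid 0 toward class 1's cluster and flips the decision.
y_poisoned = y.copy()
y_poisoned[50:75] = 0
poisoned_pred = nearest_centroid(X, y_poisoned, x_new)

print(clean_pred, poisoned_pred)  # clean model says 1, poisoned model says 0
```

Note how small the tamper is relative to the whole dataset, and that the model still “works” on obvious inputs; that subtlety is precisely what makes poisoning hard to detect in practice.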

But the Pandora's box of generative AI has truly been opened by the explosive growth in the use of ‘off-the-shelf’ tools by individuals within organizations. The most popular systems used by employees provide tangible benefits, but they are often closed: there is no real way to assess their outputs and blind spots, or, more worryingly, to know what user prompts and data will be used for and who they will be available to.

This will further fuel the potential rise in AI-assisted incursion techniques, the growth of attack surfaces, and the lowering of the barrier to entry for cyber attackers. All of this presents a challenging scenario for even the most robust and experienced cybersecurity teams. There simply aren’t enough skilled professionals to meet the mounting demand, and humans alone are not going to be able to keep up with the increasing speed and scale of today’s cyber threats. That’s where the other side of AI comes in…

AI may also present part of the solution

While AI presents some security and privacy challenges of its own, in the right hands it also has the power to offer invaluable reinforcements to an organization's cybersecurity capabilities, something security professionals are well aware of. Figures from Splunk’s latest predictions report reveal that as many as 86% of CISOs believe generative AI will help alleviate the skills gaps and talent shortages on their security teams.

Rather than replacing jobs per se, we will see AI become the assistant that employees can’t function without: one that takes on the tasks stretched teams find repetitive, mundane, and labor-intensive, such as reducing alert volumes and triaging security issues. This, however, cannot happen without better foundations and fundamentals. While AI offers numerous advantages, its full potential can only be realized when complemented by the appropriate skills, tools, and robust internal system protections. Without these essential components, AI can give rise to a myriad of challenges.

Step changes enabled by AI are 12-24 months out

In time, AI-driven insights, automation and productivity tools will make organizations significantly more efficient, so employees can be free to create and innovate. But while AI seems to improve by orders of magnitude every day, it will be a while before we see many of these AI-enabled tools appear in a mature and ‘testable’ form.

AI, in many ways, is still in its nascent stage in the business world, and most of the current AI use cases are not production-hardened, and won’t be for another 12 to 24 months. Long term, there is tremendous value to be realized, but it’s not mature enough to fully take advantage of at scale and in critical applications.

To future-proof their operations, organizations must invest in building core competencies and capabilities. While it might sound mundane, this is the key to unlocking AI's full potential. It must be paired with the ability to experiment with AI safely and in controlled areas. Prioritizing the application and scaling of AI should follow, with dedicated expert teams providing full support. How swiftly these elements are implemented will determine whether businesses are well prepared for the impending inflection point in AI systems.

Keeping one step ahead

There’s no doubt that concerns about how AI will be used in the cyber underworld are growing – our CISO research anticipates faster and more efficient attacks (36%), voice and image impersonations for social engineering (36%) and extending the attack surface of the supply chain (31%).

If this tells us anything, it’s that skills – in terms of both security and AI – will be integral to building a resilient business in 2024 and beyond. Companies that succeed in enhancing their cybersecurity workforce with responsible, safe AI solutions will be ideally positioned to resist both present threats and future dangers. By nimbly leveraging evolving technologies, businesses will stay one step ahead of potential cyber-attacks.

This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here.

Mark Woods is Chief Technical Advisor at Splunk. He helps executive teams and international policymakers understand the seismic effect that data-driven approaches can achieve.