AI is helping China-backed hackers but it’s also helping to hunt them down, NSA says

[Image: computer chip with US and China flags (credit: Shutterstock)]

Cybercriminals of all skill levels are using AI to enhance their abilities, but AI is also helping to hunt them down, security experts have warned.

Speaking at a conference at Fordham University, the National Security Agency's director of cybersecurity, Rob Joyce, said Chinese hacking groups are using AI to slip past firewalls when infiltrating networks.

Hackers are using generative AI to polish the English in their phishing scams, and are also turning to it for technical guidance when infiltrating a network or launching an attack, Joyce warned.

Two sides of the same coin

2024 is set to be a critical year for state-backed hacking groups, particularly those working on behalf of China and Russia. Taiwan’s presidential election kicks off in just a few days, which China will be looking to influence in its pursuit of reunification. But eyes will also be on the US elections coming up in November and the UK is expected to hold a general election in the second half of 2024.

China-backed groups are already developing highly effective methods for infiltrating organizations, and are using AI to do so. "They're all subscribed to the big name companies that you would expect - all the generative AI models out there," Joyce said. "We're seeing intelligence operators [and] criminals on those platforms."

The US saw an increased number of attacks on critical energy and water infrastructure sites in 2023, which US government officials attributed to groups linked to China and Iran. One of the attack methods used by the China-backed 'Volt Typhoon' group involves covertly accessing a network and then using built-in network administration tools to carry out attacks.
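This technique of abusing tools that are already present on a system - often called "living off the land" - is hard to catch with signature-based defenses, because the tools themselves are legitimate. As a rough illustration (the tool names, parent-process allowlist, and log format below are invented for this sketch, not taken from any NSA guidance), a defender might flag cases where built-in admin tools are launched by processes that have no business spawning them:

```python
# Hypothetical sketch: flag process-creation events where built-in admin
# tools are launched by parent processes that rarely or never spawn them.
BUILTIN_ADMIN_TOOLS = {"netsh.exe", "wmic.exe", "powershell.exe", "ntdsutil.exe"}
EXPECTED_PARENTS = {"explorer.exe", "cmd.exe", "services.exe"}

def suspicious_events(events):
    """events: iterable of (parent_process, child_process) name pairs.
    Returns the pairs where an admin tool has an unexpected parent."""
    flagged = []
    for parent, child in events:
        if child.lower() in BUILTIN_ADMIN_TOOLS and parent.lower() not in EXPECTED_PARENTS:
            flagged.append((parent, child))
    return flagged

log = [
    ("explorer.exe", "powershell.exe"),  # routine interactive admin use
    ("w3wp.exe", "wmic.exe"),            # web server process spawning WMIC: unusual
]
print(suspicious_events(log))  # [('w3wp.exe', 'wmic.exe')]
```

Real detection pipelines are far more nuanced, but the core idea - judging legitimate tools by the context in which they run - is the same.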

While Joyce gave no specific examples of recent attacks involving AI, he pointed out: "They're in places like electric, transportation pipelines and courts, trying to hack in so that they can cause societal disruption and panic at the time and place of their choosing."

China-backed groups have been gaining access to networks by abusing implementation flaws - bugs introduced by poorly implemented software updates - and then establishing what appears to be a legitimate user account on the system. However, the activity and traffic generated by these accounts within the network is often unusual.

Joyce explains that, “Machine learning, AI and big data helps us surface those activities [and] brings them to the fore because those accounts don’t behave like the normal business operators on their critical infrastructure, so that gives us an advantage.”
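The approach Joyce describes - surfacing accounts that don't behave like normal business operators - can be illustrated with a toy baseline-and-deviation model. The data, feature (daily login counts), and threshold below are invented for illustration; production systems use far richer behavioral features:

```python
import statistics

def flag_anomalous_accounts(daily_logins, threshold=3.0):
    """daily_logins: {account_name: [login count per day, oldest first]}.
    Flags accounts whose most recent day deviates from their own
    historical mean by more than `threshold` standard deviations."""
    flagged = []
    for account, counts in daily_logins.items():
        history, latest = counts[:-1], counts[-1]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
        if abs(latest - mean) / stdev > threshold:
            flagged.append(account)
    return flagged

activity = {
    "svc-backup": [4, 5, 4, 5, 4, 48],    # sudden spike in activity
    "alice":      [10, 12, 11, 9, 10, 11],  # steady, in-profile behavior
}
print(flag_anomalous_accounts(activity))  # ['svc-backup']
```

The point of the sketch is the asymmetry Joyce highlights: an attacker who looks like a valid user on paper still has to behave like one, and behavior is something machine learning and big data can model at scale.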

Just as generative AI is expected to help bridge the skills gap in cybersecurity by providing insights, definitions, and advice to those working in the industry, it can also be abused by cybercriminals to provide guidance on their hacking activities.

Joyce explained that AI is not a silver bullet that can suddenly turn someone with no experience into a cybercriminal mastermind, "but it's going to make those that use AI more effective and more dangerous."

Via TechCrunch


Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cyber criminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict has an MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.