Adversarial AI is coming for your applications
AI accelerates app development—and cyber threats alike

AI is having its moment, reshaping how developers work. While the best AI tools enable faster app development and anomaly detection, they also fuel faster, more sophisticated cyberattacks.
The latest headlines make it clear: no sector is immune. As organizations race to deliver apps at an unprecedented pace, freely available AI tools with sophisticated capabilities have made it easier than ever for threat actors to reverse engineer, analyze, and exploit applications at alarming scale.
Gartner predicts that by 2028, 90% of enterprise software engineers will use AI code assistants to transform software development, placing lightning-fast productivity gains in the hands of every developer along with the welcome ability to automate repetitive, tedious tasks.
However, despite massive investments in AI, security investment continues to lag, held back by the perception that protection measures have the inverse effect, slowing down software innovation and application performance. The fact is that AI has already amplified the threat landscape, especially in the realm of client applications, a primary cyberattack target.
Long considered outside the realm of a CISO’s control, software applications, particularly mobile apps, are a preferred entry point for attackers. Why? Because users tend to be less vigilant and the apps themselves “live” in the wild, outside the enterprise network. CISOs can no longer afford to ignore threats to these apps.
It’s an App-Happy World
Consumers have a voracious appetite for apps, and they use them as part of their daily routines: the Apple App Store today has nearly 2 million apps and the Google Play Store has 2.87 million. According to recent data, the average consumer uses 10 mobile apps per day and 30 apps per month. Notably, 21% of millennials open an app 50 or more times per day, and nearly 50% of people open an app more than 11 times a day.
The consequences are already measurable: the majority (83%) of applications were attacked in January 2025, and attack rates surged across all industries, according to Digital.ai’s 2025 State of App Sec Threat Report.
Dozens of apps are installed on each of the billions of smartphones in use worldwide. And each app in the wild represents a potential threat vector. Why? Because applications contain working examples of how to access back-end systems. The billions of dollars spent every year on security perimeters are rendered useless in the world of mobile applications.
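To make that concrete, here is a minimal sketch, in Kotlin, of the kind of client code that routinely ships inside mobile apps; the class name, endpoint, header, and key are all invented for illustration, but anyone who decompiles a binary containing code like this recovers a working recipe for calling the back end.

```kotlin
// Illustrative only: a hypothetical API client the way it too often ships.
// The endpoint, header name, and key below are invented for this sketch.
import java.net.HttpURLConnection
import java.net.URL

object OrdersApi {
    // A decompiler reveals both strings verbatim, along with the exact
    // request shape the back end expects.
    private const val BASE_URL = "https://api.example.com/v1/orders"
    private const val API_KEY = "prod-key-REDACTED"

    fun fetchOrders(): String {
        val conn = URL(BASE_URL).openConnection() as HttpURLConnection
        conn.setRequestProperty("X-Api-Key", API_KEY)
        return conn.inputStream.bufferedReader().use { it.readText() }
    }
}
```

Obfuscation, encryption of embedded secrets, and runtime checks exist precisely to make this kind of recipe harder to lift from a shipped binary.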
Every application made and released to customers increases a business’s threat surface. Developing multiple mobile apps means more risk, and leaving even one app unprotected isn’t an option. AI tools have made it that much easier for even amateur threat actors to analyze reverse-engineered code, create malware, and more.
If adversaries have access to the same robust productivity tools, why wouldn’t they use them to get even better and faster at what they do?
New nefarious attacks are having a moment
New research from Cato Networks’ threat intelligence report revealed how threat actors can use a large language model jailbreak technique, known as an immersive world attack, to get AI to create infostealer malware for them. A threat intelligence researcher with no malware coding experience managed to jailbreak multiple large language models and get the AI to create a fully functional, highly dangerous password infostealer that compromises sensitive information from the Google Chrome web browser.
The end result was malicious code that successfully extracted credentials from the Google Chrome password manager. Companies that create LLMs are trying to put up guardrails, but clearly GenAI can make malware creation that much easier. AI-generated malware, including polymorphic malware, essentially renders signature-based detection nearly obsolete. Enterprises must be prepared to protect against hundreds, if not thousands, of malware variants.
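A small sketch, again in Kotlin and using harmless placeholder strings rather than real malware, shows why exact-match signatures struggle against even trivial polymorphism: appending a single junk byte to a functionally equivalent payload produces a completely different hash, so a signature recorded for the original variant no longer matches.

```kotlin
import java.security.MessageDigest

// Illustrative only: why a hash-based "signature" misses a trivially mutated variant.
// The payloads are harmless placeholder strings, not real malware.
fun sha256Hex(bytes: ByteArray): String =
    MessageDigest.getInstance("SHA-256").digest(bytes).joinToString("") { "%02x".format(it) }

fun main() {
    val original = "payload-v1".toByteArray()
    val mutated = "payload-v1 ".toByteArray() // one junk byte appended; behavior effectively unchanged

    val knownSignature = sha256Hex(original)  // what a signature database would store

    println("original matches: ${sha256Hex(original) == knownSignature}") // true
    println("mutated matches:  ${sha256Hex(mutated) == knownSignature}")  // false
}
```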
The Dark Side of LLMs for Code Generation
A recent study by Cybersecurity Ventures predicts that by 2025, cybercrime will cost the world $10.5 trillion annually, a massive increase from $3 trillion in 2015, with much of the rise attributed to the use of advanced technologies like LLMs.
Take attribution: many people have used an LLM to write “in the voice of” someone else, and attribution becomes that much more difficult in an AI world because threat actors can mimic another group’s techniques, comments, tools, and TTPs. False-flag events become more prevalent, such as the attack targeting the wives of U.S. service members.
LLMs are accelerating the arms race between defenders and threat actors, lowering the barrier to entry, and allowing attacks to be more complex, more insidious, and more adaptive.
Protecting Apps Running in Production
Enterprises can increase their protection by embedding security directly into applications at the build stage. This involves investing in embedded protections mapped to OWASP controls, such as runtime application self-protection (RASP), advanced white-box cryptography, and granular threat intelligence.
IDC research shows that organizations protecting mobile apps often lack a solution to test them efficiently and effectively. Running tests on multiple versions of an app slows the release orchestration process and increases the risk of delivering the wrong version of an app into the wild.
By integrating continuous testing and application security, software teams gain the game-changing ability to fully test protected applications, speeding up and expanding test coverage by eliminating manual tests for protected apps. This helps solve a major problem for software teams when testing and protecting apps at scale.
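What that integration can look like in practice will vary by toolchain; the sketch below, in Gradle’s Kotlin DSL and assuming an Android Gradle project, uses a hypothetical applyAppProtection task as a stand-in for whatever hardening step a team actually runs, and simply ensures that instrumented tests exercise the protected build rather than the unprotected one.

```kotlin
// build.gradle.kts (app module): illustrative sketch only.
// "applyAppProtection" is a hypothetical placeholder task, not a real plugin task.
tasks.register("applyAppProtection") {
    dependsOn("assembleRelease")
    doLast {
        println("Applying runtime protections to the release artifact (placeholder step).")
    }
}

// Make instrumented tests depend on the protection step, so the binary that
// gets tested is the same one that ships.
tasks.matching { it.name == "connectedAndroidTest" }.configureEach {
    dependsOn("applyAppProtection")
}
```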
Modern enterprise application security is not a nice-to-have. While CISOs certainly don’t need more work added to their plates, vectors that used to be outside their control are now creating fissures inside what they do control.
The good news is that there are now robust, baseline protections that balance the need for security with the need for speed of innovation and performance. These protections can be added instantly to almost any app in the wild and go right back into the app store. They include:
1. The ability to protect, by inserting security into DevOps processes after coding and before testing, without slowing down developers
2. The ability to monitor via threat monitoring and reporting capabilities for apps in production
3. The ability to react, by building apps with runtime application self-protection (RASP), as sketched below
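For the “react” capability, here is a minimal, hand-rolled sketch in Kotlin for Android of what a runtime self-protection check can look like; RuntimeGuard, its method names, and the onThreat callback are invented for illustration, and a commercial RASP product goes far beyond these two checks (tamper, hook, and emulator detection, integrity verification, and more).

```kotlin
import android.os.Debug
import java.io.File

// Illustrative only: a minimal, hand-rolled RASP-style check.
// Names and the reaction hook are invented for this sketch.
object RuntimeGuard {

    private val suPaths = listOf("/system/bin/su", "/system/xbin/su", "/sbin/su")

    fun isDebuggerAttached(): Boolean = Debug.isDebuggerConnected()

    fun looksRooted(): Boolean = suPaths.any { File(it).exists() }

    // Call early (e.g., in Application.onCreate) and on entry to sensitive flows.
    fun enforce(onThreat: (reason: String) -> Unit) {
        if (isDebuggerAttached()) onThreat("debugger attached")
        if (looksRooted()) onThreat("su binary present")
    }
}
```

In a real app the onThreat callback might report to a monitoring backend, degrade sensitive functionality, or shut the app down; the point is that the app itself detects and reacts at runtime instead of relying on a network perimeter that mobile apps never sit behind.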
AI is accelerating code production, multiplying applications, and reshaping app security. It’s time to stop thinking like a white knight and start thinking like a hacker.