Inside the AI-powered assault on SaaS: why identity is the weakest link

Cyber attacks no longer begin with malware or brute-force exploits; they start with stolen identities. As enterprises pour critical data into SaaS platforms, attackers are turning to artificial intelligence (AI) to impersonate legitimate users, bypass security controls, and operate unnoticed inside trusted environments.

Martin Vigo

Lead security researcher at AppOmni.

According to AppOmni’s State of SaaS Security 2025 Report, 75% of organizations experienced a SaaS-related incident in the past year, most involving compromised credentials or misconfigured access policies.

Yet 91% expressed confidence in their security posture. Visibility may be high, but control is lagging.

Identity is the new perimeter, and attackers know it

Bad actors have always sought the path of least resistance. In the world of SaaS, that path often leads directly to stolen identities. Passwords, API keys, OAuth tokens and multi-factor authentication (MFA) codes: any credential material that unlocks access is now the initial focus.

While many organizations still treat identity merely as a control point, for attackers it has become the attack surface itself. In SaaS applications, identity isn't just a boundary; it's often the only consistent barrier standing between an attacker and your most critical data.

Think about it: almost every enterprise relies on SaaS platforms for communication, HR, finance, and even code development.

These systems don’t share a physical perimeter in the way a traditional on-premise network does. This means that protecting access is paramount: specifically, ensuring the legitimacy of every identity trying to access these systems. Because if an attacker compromises a valid account, they inherit the same privileges as the legitimate user.

This is what makes identity attacks so effective. They bypass firewalls, endpoint protection, and nearly every traditional security layer designed for network-centric architectures, the controls that monitor cloud activity or block unauthorized data access and app usage.

And this is precisely where AI enters the fray. Threat actors are rapidly adopting AI to supercharge every aspect of their attacks, from crafting irresistible phishing lures to perfecting behavioral evasion techniques.

Researchers have documented a significant increase in high-volume, linguistically sophisticated phishing campaigns, strongly suggesting that large language models (LLMs) are being used to generate emails and messages that flawlessly mimic local idioms, corporate tone, and even individual writing styles.

This isn't just about malware anymore. The weapon of choice is identity: the password, the token, and the OAuth consent that unlocks a cloud application.

Cybercriminals are weaponizing AI to compromise SaaS environments through stolen identities in several ways: accelerated reconnaissance, targeted credential exploitation, pervasive synthetic identities, and automated attack execution.

Reconnaissance for identities: The AI advantage

Before an attacker can even attempt to log in, they need context: Who are the employees? Who reports to whom? What do approval workflows look like? Which third-party relationships exist? Criminals are leveraging AI models to automate this reconnaissance phase.

In one documented case, a threat actor fed their preferred Tactics, Techniques, and Procedures (TTPs) into a file called CLAUDE.md, effectively instructing Claude Code to autonomously carry out discovery operations. The AI then scanned thousands of VPN endpoints, meticulously mapped exposed infrastructure, and even categorized targets by industry and country, all without any manual oversight.

In the context of SaaS, this means adversaries can rapidly identify corporate tenants, harvest employee email formats, and test login portals on a massive scale.

What once required weeks of painstaking, manual research by human operators can now be accomplished in mere hours by an AI, significantly reducing the time and effort required to prepare for a targeted attack.

Stealing identities: sifting for gold with AI

Gaining access often involves sifting through vast quantities of compromised information. Info-stealer logs, password dumps from past breaches and dark-web forums are rich sources of credential material.

However, determining which of these credentials are genuinely useful and valuable for a follow-on attack is a time-consuming process. This, too, has become an AI-assisted task.

Criminals are utilizing AI, specifically Claude via the Model Context Protocol, to automatically analyze enormous datasets of stolen credentials. The AI reviews detailed stealer-log files, including browser histories and domain data, to build profiles of potential victims and prioritize which accounts are most valuable for subsequent attacks.

Instead of wasting time attempting to exploit thousands of low-value logins, threat actors can focus their efforts on high-privilege targets such as administrators, finance managers, developers, and other users with elevated permissions within critical SaaS environments. This laser focus dramatically increases their chances of success.

From deepfakes to deep access: synthetic identities at scale

One of the most disturbing advancements is the mass production of stolen or entirely synthetic identities using AI systems. Research has detailed sprawling online communities on platforms like Telegram and Discord where criminals leverage AI to automate nearly every step of online deception.

For example, a large Telegram bot boasting over 80,000 members uses AI to generate realistic results within seconds of a simple prompt. This includes AI-generated selfies and face-swapped photos designed to impersonate real people or create entirely fake personas.

These fabricated images can build a convincing narrative, making it appear as if someone is in a hospital, in a remote location abroad, or simply posing for a casual selfie.

Other AI tools within these communities are used to translate messages, generate emotionally intelligent replies, and maintain consistent personalities across conversations in multiple languages.

The result is a new, insidious form of digital identity fraud where every image, voice, and dialogue can be machine-made, making it incredibly difficult for humans to distinguish truth from fabrication.

These AI-driven tools empower even relatively unskilled criminals to fabricate highly convincing personas capable of passing basic verification checks and sustaining long-term communication with their targets.

When an AI agent can generate faces, voices, and fluent conversation on demand, the cost of manufacturing a new digital identity becomes virtually negligible, significantly scaling the potential for fraud and infiltration.

This dynamic is also playing out on a state-sponsored scale. Extensive North Korean IT-worker schemes have been uncovered in which operatives used AI to fabricate resumes, generate professional headshots, and communicate fluently in English while applying for remote software-engineering jobs at Western technology firms.

Many of these workers, often lacking genuine technical or linguistic skills, relied heavily on generative AI models to write code, debug projects, and handle day-to-day correspondence, successfully passing themselves off as legitimate employees.

This seamless blending of human operators and AI-made identities highlights how synthetic personas have evolved beyond simple romance scams or financial fraud, moving into sophisticated programs of industrial infiltration and espionage.

Abusing identities: AI-native attack frameworks

Beyond individual acts of deception, AI is now being weaponized to automate entire attack lifecycles. The emergence of AI-native frameworks such as Villager, a Chinese-developed successor to Cobalt Strike, shows that autonomous intrusion is becoming mainstream.

Unlike traditional red-team frameworks, which require skilled operators to script and execute attacks manually, Villager integrates LLMs directly into its command structure. Its autonomous agents can perform reconnaissance, exploitation, and post-exploitation actions through natural-language reasoning.

Operators can issue plain-language commands, and the system translates them into complex technical attack sequences, marking a significant step towards fully automated, AI-powered intrusion campaigns.

Even more concerning, these packages are publicly available on repositories like PyPI, where the toolkit recorded roughly 10,000 downloads in just two months. The result is an AI-driven underground economy where cyberattacks can be launched, iterated, and scaled without human expertise.

What once demanded technical mastery can now be achieved through a simple AI-assisted prompt, opening the door for both amateur cybercriminals and organized threat actors to conduct highly automated, identity-centric attacks at scale.

Addressing the risks in an AI-empowered world

The old security paradigm won't protect you from these new threats.

Organizations must adapt their strategies, focusing on identity as the core of their defense:

Treat identity as your security foundation: Every login, consent, and session must be continuously assessed for trust, not just at the point of entry. Use behavioral context and risk signals, such as device fingerprinting, geographic consistency checks, and unusual-activity detection, to catch subtle deviations from normal user behavior (see the sketch after this list).

Extend Zero Trust beyond IT: Helpdesks, HR, and vendor portals have become popular targets for social engineering and remote-worker fraud. Extend the same verification rigor used in IT systems to all business-facing teams by verifying every request and access attempt, regardless of origin.

Acknowledge synthetic identity as a new cyber risk: Enterprises and regulators must treat AI-driven synthetic identity generation as a distinct and severe form of cyber risk. This necessitates clearer disclosure rules, robust identity management standards and enhanced cross-industry intelligence sharing to combat sophisticated impersonation.

Demand embedded anomaly detection from SaaS providers: SaaS providers must embed advanced anomaly detection directly into authentication flows and OAuth consent processes, proactively stopping malicious automation and synthetic identity attacks before access is granted.

Leverage AI for defense: Invest in AI models that can recognize the hallmarks of machine-generated text, faces, and behaviors. These AI-powered defenses will increasingly form the backbone of effective identity assurance, helping to distinguish the genuine from the synthetic in real time.
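
To make the first recommendation concrete, here is a minimal, hypothetical Python sketch of the kind of continuous risk scoring described above. It is not AppOmni's implementation or any vendor's API: the event fields, weights, and thresholds are illustrative assumptions, and a real system would learn per-user baselines from telemetry and re-evaluate every session, not just the initial login.

```python
# Illustrative sketch only: names, weights, and thresholds are assumptions,
# not a vendor API. A production system would learn baselines from telemetry.
from dataclasses import dataclass, field


@dataclass
class LoginEvent:
    user_id: str
    device_fingerprint: str   # e.g. a hash of browser/OS/TLS traits
    country: str              # derived from the source IP
    actions_last_hour: int    # API calls or downloads in the current session


@dataclass
class UserBaseline:
    known_fingerprints: set = field(default_factory=set)
    usual_countries: set = field(default_factory=set)
    typical_actions_per_hour: int = 20


def score_login(event: LoginEvent, baseline: UserBaseline) -> float:
    """Combine simple identity risk signals into a 0.0-1.0 score."""
    score = 0.0
    if event.device_fingerprint not in baseline.known_fingerprints:
        score += 0.4   # unfamiliar device
    if event.country not in baseline.usual_countries:
        score += 0.3   # geographic inconsistency
    if event.actions_last_hour > 3 * baseline.typical_actions_per_hour:
        score += 0.3   # unusual activity volume
    return min(score, 1.0)


def decide(score: float) -> str:
    """Map the risk score to an access decision; thresholds are illustrative."""
    if score >= 0.7:
        return "block_and_alert"
    if score >= 0.4:
        return "step_up_mfa"
    return "allow"


if __name__ == "__main__":
    baseline = UserBaseline(known_fingerprints={"fp-abc"}, usual_countries={"US"})
    event = LoginEvent("alice@example.com", "fp-new", "RO", actions_last_hour=90)
    risk = score_login(event, baseline)
    print(risk, decide(risk))   # 1.0 block_and_alert
```

The specifics matter less than the shape of the logic: every access decision consumes multiple identity signals and can escalate to step-up verification or an outright block, rather than trusting a credential once at login and never again.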

Securing SaaS in the age of AI

Phishing, credential theft, and identity fraud have become faster, cheaper, and disturbingly more convincing, all thanks to AI. But the same intelligence that enables these attacks can also power our defense.

The coming years will see success depend less on building ever-higher walls and more on developing intelligent systems that can instantaneously distinguish the authentic from the synthetic.

AI may have blurred the very boundary between a legitimate user and an imposter, but with thoughtful design, proactive strategies, and collaborative innovation, organizations can restore that boundary and ensure that trust, not technology, defines who gets access.
