When agentic AI systems fall into the wrong hands

Agentic AI is an artificial intelligence system that can act independently to achieve goals without constant human oversight.

When properly designed, these systems can make decisions, perform actions and adapt to changing conditions on their own.

Keeley Crockett

Leading IEEE expert and professor of computational intelligence at Manchester Metropolitan University.

They operate by interpreting goals, breaking them into sub-goals and then working out the best way to achieve them. Their agency comes from this ability to act autonomously without continuous instruction.

As they execute tasks, agentic systems also learn and refine their responses, becoming more effective over time.
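To make that loop concrete, here is a minimal sketch in Python of how an agent might decompose a goal into sub-goals, execute them and record results to inform later runs. The Agent class, its hard-coded plan and its stubbed act method are illustrative assumptions, not a real framework.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy agent: interprets a goal, breaks it into sub-goals, acts, records results."""
    goal: str
    history: list = field(default_factory=list)

    def plan(self) -> list:
        # A real system would use a model to produce this decomposition;
        # the sub-goals here are hard-coded purely for illustration.
        return [f"research: {self.goal}", f"draft: {self.goal}", f"review: {self.goal}"]

    def act(self, sub_goal: str) -> str:
        # Stand-in for real tool calls (search, email, file access, etc.).
        result = f"completed '{sub_goal}'"
        self.history.append(result)  # later runs can be refined from this record
        return result

    def run(self) -> None:
        for sub_goal in self.plan():
            print(self.act(sub_goal))

Agent(goal="organize clinic intake notes").run()
```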

Facing the potential risks

Agentic AI systems are already in daily use, often handling large volumes of sensitive information. In fact, a recent IEEE survey forecast that agentic AI will reach mass or near-mass adoption among consumers by 2026.

In healthcare, for example, the technology is used to support administrative processes, reviewing and organizing clinical data. However, this level of interaction with personal data raises legitimate privacy concerns. Without clear alignment to GDPR principles, an agentic system could collect more data than necessary or attempt to bypass legal safeguards.
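As an illustration of the data-minimization principle behind GDPR, the sketch below shows an allowlist filter that drops any field an agent has no documented need for. The field names and the record are invented for the example.

```python
# Hypothetical allowlist: only fields with a documented purpose are retained.
ALLOWED_FIELDS = {"appointment_date", "department", "referral_reason"}

def minimize(record: dict) -> dict:
    """Drop any field not on the allowlist before the agent stores or shares it."""
    return {key: value for key, value in record.items() if key in ALLOWED_FIELDS}

raw = {
    "appointment_date": "2025-03-14",
    "department": "cardiology",
    "referral_reason": "follow-up",
    "home_address": "12 Example Street",  # not needed for scheduling, so dropped
    "biometric_id": "f3a9",               # not needed for scheduling, so dropped
}
print(minimize(raw))
# {'appointment_date': '2025-03-14', 'department': 'cardiology', 'referral_reason': 'follow-up'}
```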

Beyond the risk of compliance violations, agentic AI’s potential access to highly sensitive data makes it an appealing target for bad actors.

Unlike typical business applications, these systems don’t just collect location, payment, health, biometric and contact data; they also build detailed profiles from user behavior and preferences, pulling information from multiple sources. This wealth of personal data can be weaponized to manipulate both the system and its user.

Threats from compromised systems

If a threat actor were to hijack an agentic AI, they could do far more than access personal data. They could actively influence a person’s behavior. For example, by taking over a chatbot, they might engage in behavioral nudging, gradually manipulating someone’s choices by shaping the content they see, spreading misinformation or steering them toward specific purchases or even harmful content.

The risks escalate if an attacker gains control of an AI system set up to operate autonomously. A compromised agent could impersonate its user by sending automated emails, texts or voice messages on their behalf. In the case of smart home integration, it could even interfere with door locks, alarms or security cameras, with a direct impact on personal safety.

Beyond hijacking, adversaries could also poison the data that trains an agentic AI, feeding it biased or hostile inputs designed to warp its outputs. Over time, this could lead to inaccurate, misleading or potentially harmful decisions.
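A toy example shows the mechanism in miniature. Suppose a trivial "classifier" sets its approval threshold from the mean of past feedback scores; flooding the feedback loop with hostile scores shifts that threshold enough to flip decisions on legitimate inputs. Real attacks target far more complex models, but the principle is the same. All numbers here are invented.

```python
# Toy model: approve an input if its score beats the mean of past labeled scores.
def threshold(scores: list) -> float:
    return sum(scores) / len(scores)

clean = [0.2, 0.3, 0.25, 0.8, 0.9, 0.85]  # legitimate feedback
print(threshold(clean))                    # ~0.55

# An attacker floods the feedback loop with hostile, near-maximal scores...
poisoned = clean + [0.99] * 20
print(threshold(poisoned))                 # ~0.89: inputs scoring 0.85 now get rejected
```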

In each of these scenarios, malicious access could result in blackmail, harassment or identity theft. Together, they show how virtual attacks on agentic AI can quickly create serious real-world consequences.

Mitigating the danger

Organizations that utilize agentic AI have a responsibility to operate their systems safely, with appropriate guardrails. Although the potential threats are significant, their impact can be reduced.
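One common guardrail is a human-in-the-loop gate: the agent handles low-risk tasks freely but must pause for explicit approval before any sensitive action. The sketch below assumes an invented execute helper and action names purely for illustration.

```python
# Hypothetical guardrail: sensitive actions require explicit human sign-off.
SENSITIVE_ACTIONS = {"send_email", "unlock_door", "make_payment"}

def execute(action: str, payload: dict) -> str:
    if action in SENSITIVE_ACTIONS:
        answer = input(f"Agent requests {action} with {payload}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return f"blocked: approval withheld for {action}"
    return f"executed {action}"

print(execute("summarize_notes", {"doc": "visit-notes.txt"}))  # runs unattended
print(execute("unlock_door", {"device": "front-door"}))        # pauses for a human
```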

Users must become more data-savvy and avoid complacency, particularly when relying on agentic AI. For highly sensitive information, it may be safer to opt out of using such systems for decision-making altogether.

Where these systems are used, users should review terms and conditions carefully, ensure transparency over what data is being processed, and understand how automated decisions are reached. Prioritizing systems that clearly explain their reasoning helps reduce the risk of hidden data practices and strengthens user control.

A double-edged sword

Agentic AI has the potential to transform industries and streamline everyday tasks. With 96 percent of global technology leaders agreeing that agentic AI innovation, exploration and adoption will continue at lightning speed in 2026, managing the risks will be more important than ever.

These systems must be deployed ethically and safely, with human oversight at every stage. Their development should include transparency protocols and clear processes for explaining automated decisions.
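A transparency protocol can be as simple as an append-only audit trail recording what the agent did, why, and which data it touched. This sketch assumes a hypothetical log_decision helper writing JSON lines to a local file; real deployments would use tamper-evident storage.

```python
import datetime
import json

def log_decision(action: str, reasoning: str, data_used: list) -> None:
    """Append a human-readable record of what the agent did, why, and on what data."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "reasoning": reasoning,
        "data_used": data_used,
    }
    with open("agent_audit.log", "a") as log_file:
        log_file.write(json.dumps(entry) + "\n")

log_decision(
    action="rescheduled appointment",
    reasoning="calendar showed a conflict on the original date",
    data_used=["calendar", "appointment_record"],
)
```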

By recognizing the risks and committing to responsible use, we can harness the benefits of agentic AI while safeguarding users.
