Seeing double – increasing trust in agentic AI


Agentic AI is the next big development in AI tools, using autonomous agents to improve business and organizational efficiency. These agents have more sophisticated capabilities: they can make decisions, take actions and even learn on their own to achieve specific goals.

Analysts at Gartner believe a third of enterprise software applications will include agentic AI by 2028 – up from close to zero in 2024.

This will enable teams to focus on high-impact decisions, respond to customers faster, and drive innovation and growth.

Saurav Gupta

Sales Engineer at InterSystems.

Yet the path to agentic AI is far from straightforward for many organizations unless they revise their data architecture. Gartner believes more than 40% of agentic AI projects will be cancelled by the end of 2027, with hype, cost and complexity all causing projects to stall.

The real power lies in multiple agents that can communicate and coordinate with each other on a connected, real-time and trusted data infrastructure. But the large language models (LLMs) on which agents depend remain susceptible to hallucinations – inventing facts or applying information erroneously.

These are hazards no organization can afford to ignore because the consequences of mistakes are potentially serious, leading to personal harm or legal liabilities in any area from healthcare to insurance.

However, productionizing multi-agent systems is hard: many agentic AI projects make great prototypes but fall apart when scaled to real-world systems.

Agentic AI requires access to real-time data

While agentic AI relies on several LLMs, it also requires real-time context from an organization's own on-premises and cloud databases, streaming data, external databases and providers, and historical data.

Achieving this while driving the probability of hallucinations down to a very low level – or eliminating them altogether – is challenging. Contextual data must fill the gaps, because an LLM's training data is never completely up to date and can be extremely dated.
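
To illustrate, one common pattern is to retrieve fresh records at query time and place them in front of the model, so it answers from current facts rather than stale training data. A minimal Python sketch, where the retrieval function, records and prompt wording are purely hypothetical:

```python
# Minimal sketch of grounding an LLM answer in real-time data.
# fetch_live_records() stands in for a real-time data layer; the records
# and question below are invented for illustration.
def fetch_live_records(query: str) -> list[str]:
    """Hypothetical lookup against operational systems, not training data."""
    return [
        "Policy P-981 premium updated to $420/month on 2025-06-01",
        "Claim C-115 status: approved on 2025-06-03",
    ]

def build_grounded_prompt(question: str) -> str:
    """Inject current facts so the model answers from context, not memory."""
    context = "\n".join(fetch_live_records(question))
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say so rather than guessing.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What is the current premium for policy P-981?"))
```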

This challenge highlights a broader transformation underway: enterprises are beginning to converge analytics with operational systems to support agentic AI. The result is a shift from information overload to clear, actionable insight.

Yet most organizations perform poorly at getting the right data in front of the right person in real time, which makes it unsurprising that so many agentic AI pilots fail.

Unifying fragmented data in an auditable, trusted approach

To unlock the full benefits of agentic AI, organizations need to bring together data from multiple sources in a way that users can trust. Strong guardrails, clear permissions, and full audit trails will be essential to ensure data is secure, accurate, and used responsibly.

Organizations need to clean and normalize their data, and put in place rigorous data governance, so that when they use LLMs and contextual data, they can generate the right intelligence for task automation. Data must be current, real-time and trustworthy.
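
As a simple illustration of what cleaning and normalization mean in practice, the sketch below maps records from two hypothetical source systems into one canonical shape before they ever reach an agent (all field and source names are invented):

```python
# Illustrative sketch of normalizing customer records from two systems
# into one canonical shape; field and source names are hypothetical.
from datetime import datetime, timezone

def normalize_crm(rec: dict) -> dict:
    return {
        "customer_id": rec["id"].strip().upper(),
        "email": rec["email"].strip().lower(),
        "updated_at": datetime.fromisoformat(rec["modified"]),
    }

def normalize_billing(rec: dict) -> dict:
    return {
        "customer_id": rec["cust_ref"].strip().upper(),
        "email": rec["contact_email"].strip().lower(),
        "updated_at": datetime.fromtimestamp(rec["last_update_epoch"],
                                             tz=timezone.utc),
    }

records = [
    normalize_crm({"id": " c-42 ", "email": "Ann@Example.com",
                   "modified": "2025-01-15T09:30:00+00:00"}),
    normalize_billing({"cust_ref": "C-42", "contact_email": "ann@example.com",
                       "last_update_epoch": 1736940600}),
]

# With one canonical shape, an agent can simply take the freshest view
# instead of reconciling source-specific quirks at query time.
latest = max(records, key=lambda r: r["updated_at"])
print(latest["customer_id"], latest["updated_at"])
```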

Scalability and Fault Tolerance

AI agents don't operate in isolation: they need to share context, coordinate actions and make real-time decisions while integrating with external tools, APIs and organization-wide data sources.

Recent open standards in the agentic AI space, such as the Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol, look very promising as ways for AI agents to communicate, access information, execute tasks and make decisions across complex enterprise workflows.
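
By way of illustration, MCP is built on JSON-RPC 2.0, so a tool invocation is ultimately just a structured message. A minimal sketch of that shape, where the tool name and arguments are hypothetical:

```python
import json

# Illustrative only: the shape of an MCP tool invocation (MCP uses
# JSON-RPC 2.0). The tool name and arguments here are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_customer_db",           # a tool the MCP server exposes
        "arguments": {"customer_id": "C-42"},  # schema defined by that server
    },
}
print(json.dumps(request, indent=2))
# A host application sends this to an MCP server (e.g. over stdio or HTTP);
# the server runs the tool and returns a JSON-RPC result the agent can use.
```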

Persistent Memory

Effective prompt engineering leverages persistent memory capabilities to improve the AI's understanding, context preservation, and ability to generate relevant and personalized responses.

If agents do not remember anything beyond their current query, it is very difficult to implement production-ready agentic AI systems.
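
A minimal sketch of what persistent memory can look like in practice – here a simple SQLite-backed store keyed by user, where the schema and function names are illustrative rather than any specific product's API:

```python
import sqlite3

# Minimal sketch: a durable memory store an agent can consult across
# sessions. Schema and function names are illustrative.
conn = sqlite3.connect("agent_memory.db")
conn.execute("""CREATE TABLE IF NOT EXISTS memory (
    user_id TEXT, ts DATETIME DEFAULT CURRENT_TIMESTAMP, note TEXT)""")

def remember(user_id: str, note: str) -> None:
    """Persist a fact the agent learned during a conversation."""
    conn.execute("INSERT INTO memory (user_id, note) VALUES (?, ?)",
                 (user_id, note))
    conn.commit()

def recall(user_id: str, limit: int = 5) -> list[str]:
    """Fetch the most recent facts to prepend to the next prompt."""
    rows = conn.execute(
        "SELECT note FROM memory WHERE user_id = ? ORDER BY ts DESC LIMIT ?",
        (user_id, limit)).fetchall()
    return [r[0] for r in rows]

remember("alice", "Prefers summaries under 100 words")
print(recall("alice"))  # context the agent carries beyond the current query
```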

Rise of the AI Data Layer

Traditional data platforms such as data warehouses and data lakes, which primarily served SQL analysts and data engineers, are no longer sufficient.

Today's landscape demands data access for a variety of use cases: machine learning, business analytics, reporting, dashboards and next-generation agentic AI-enabled applications.

The greatest challenge in building data infrastructure for agentic AI lies in operationalizing and scaling it cost-efficiently. At the heart of this infrastructure lie data governance, access control, observability and security.

Is data fabric architecture the best data strategy?

The success of any AI data layer hinges on the accessibility and quality of the underlying data. This is where data fabric comes in: a smart data layer that connects and manages data from all your systems in real time.

It eliminates data fragmentation by seamlessly integrating every source, ensuring consistency and accessibility of data. Data fabrics use metadata management, knowledge graphs and semantic layers to add context and meaning to data, enabling AI agents to understand business context and the relationships between different data points.

This fulfils the basic need of agentic AI – leveraging unified data to fuel AI models that deliver accurate, context-aware insights for decision-making and task automation.
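
As a rough sketch of how a semantic layer inside a data fabric works, the mapping below resolves a business term to a physical source plus the governance metadata an agent needs to trust it. All source names, fields and queries are hypothetical:

```python
# Illustrative sketch of a semantic layer inside a data fabric.
# Source systems, queries and the business glossary are hypothetical.
CATALOG = {
    "customer_lifetime_value": {
        "source": "warehouse",          # physical system holding the data
        "query": "SELECT customer_id, ltv FROM analytics.customer_ltv",
        "freshness_sla_seconds": 300,   # metadata agents can reason about
        "owner": "finance-data-team",
    },
    "open_support_tickets": {
        "source": "crm_api",
        "query": "GET /tickets?status=open",
        "freshness_sla_seconds": 5,
        "owner": "support-platform",
    },
}

def resolve(business_term: str) -> dict:
    """Translate a business concept into its physical access path plus the
    governance metadata (owner, freshness) an agent needs to trust it."""
    try:
        return CATALOG[business_term]
    except KeyError:
        raise LookupError(f"'{business_term}' is not in the semantic layer")

print(resolve("customer_lifetime_value")["owner"])  # -> finance-data-team
```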

Data Fabric vs Data Mesh

Data fabric, using a centralized data architecture and governance model, allows data to be shared and integrated across the entire organization.

Data mesh, on the other hand, is a decentralized approach to data management. Data ownership remains with the domain owner, and each domain is responsible for defining, delivering and governing its own data products. It relies heavily on people and processes.

Agentic AI systems involve multiple agents and so are federated by definition, but layering a federated data infrastructure and governance model on top adds further complexity and demands even more coordination across people and processes.

Some of the early successes in implementing agentic AI within enterprises correlate with centralized data infrastructures based on data fabrics.

Guardrails

AI agents need to operate within safe and ethical boundaries, aligning their actions with their use cases, organizational policies and regulatory requirements.

Tracking data origins, enhancing observability, incorporating ethical AI frameworks and enforcing enterprise-wide security are critical to any production-ready agentic AI system.
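
To make this concrete, here is a minimal, hypothetical sketch of a pre-action guardrail with an audit trail: every action an agent proposes is checked against policy, and the decision is logged. The policy rules and action names are invented for illustration:

```python
# Minimal sketch of a pre-action guardrail plus audit trail.
# Policy rules and action names are hypothetical, not any framework's API.
import json, logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

POLICY = {
    "send_email":     {"allowed_roles": {"support_agent"}, "needs_review": False},
    "issue_refund":   {"allowed_roles": {"support_agent"}, "needs_review": True},
    "delete_account": {"allowed_roles": set(),             "needs_review": True},
}

def guard(action: str, role: str, payload: dict) -> bool:
    """Check a proposed action against policy and record the decision."""
    rule = POLICY.get(action)
    allowed = (bool(rule) and role in rule["allowed_roles"]
               and not rule["needs_review"])
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action, "role": role, "payload": payload,
        "decision": "allow" if allowed else "block_or_escalate",
    }))
    return allowed

# The agent proposes; the guardrail decides (refunds escalate to a human).
if not guard("issue_refund", "support_agent", {"order": "A-123", "amount": 40}):
    print("Escalating to human review")
```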


