Retrieval-augmented generation can manage expectations of AI


The adoption of AI tools is accelerating across the economy, with 39% of UK organizations already using the technology.

Across industries – from finance and healthcare to manufacturing and retail – the technology is being integrated to drive efficiencies at scale.

The debate is no longer whether to adopt AI, but how quickly – and where.

Philip Miller

AI Strategist, Progress Software.

Yet as implementation rises, so do expectations. Many assume AI should deliver flawless outputs every time – a standard we rarely apply to human colleagues – and this double standard is damaging trust, slowing adoption and holding back innovation.

So how can organizations rethink how they use AI? This starts with focusing on small use cases, continually testing and avoiding overdependence on any single system.

Retrieval-augmented generation (RAG) can add another layer of reassurance, grounding responses in verifiable data and producing outputs that are both relevant and trustworthy.

Changing perspectives

As AI becomes increasingly integrated into day-to-day operations, tools like RAG are vital for accuracy. Yet equally important is changing how we use the technology. When a human employee makes a mistake, we see it as a vital part of the learning process.

When AI delivers an imperfect answer, many assume the technology isn't ready for wider deployment. Yet these errors aren't bugs in the system; they're an expected trade-off of models that work in probabilities. Expecting flawless performance is like hiring a new employee and expecting their work to be perfect from day one.

Organizations need to stop thinking in binary terms – that AI must be either perfectly right or completely wrong. Instead, the focus should be on how the technology is used, the safeguards we put in place and how it combines with human insight. AI is an agile technology.

These models can fail, learn and improve in days or even minutes, far faster than human learning cycles. Ultimately, our approach towards deploying AI should be equally flexible.

Organizations that pursue a multi-year, top-down transformation plan risk waiting for a 'perfect' version of AI that may never arrive. Instead, they need short-term, incremental projects that deliver value quickly, before scaling from there.

Responsible AI in practice

Adopting AI responsibly requires translating this mindset into concrete, manageable actions that deliver results. However, this should also be built around trust and a wider human-centric approach.

While every organization's journey is unique, there are a number of ways to accelerate adoption without compromising on accuracy or ethics. Focusing on achievable goals is key.

By targeting use cases that can be delivered in weeks or months, organizations can generate wins early on that demonstrate tangible value and build confidence in the technology.

AI models are inherently imperfect, so each mistake should be treated as an important learning opportunity. Analyzing errors, refining prompts or experimenting with different models are all crucial to improving performance over time. Small adjustments allow teams to continuously enhance results while keeping projects manageable.

Once initial use cases deliver tangible benefits, adoption can expand gradually across the wider organization. Maintaining oversight and governance ensures outputs remain accurate, relevant and aligned with ethical standards, allowing organizations to scale AI effectively while minimizing risk.

Building trust through RAG

One of the most effective ways to improve reliability is through RAG. Within a RAG framework, AI systems access relevant, up-to-date information from a variety of sources before generating a response.

This ensures outputs are anchored in verified, contextually accurate data rather than relying solely on potentially outdated or incomplete patterns learned during training.

By connecting human-centric AI to data in the right way, organizations can reduce hallucinations, deliver context-aware answers and increase stakeholder confidence – all critical steps for responsible adoption at scale.
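Conceptually, a RAG pipeline has two steps: retrieve the passages most relevant to a query, then generate an answer conditioned on them. The sketch below is a minimal, hypothetical illustration – the keyword-overlap scoring stands in for embedding similarity in a vector database, the document store is invented, and the assembled prompt would be sent to an LLM, which is omitted here.

```python
import re

def words(text):
    """Extract lowercase word tokens (a crude stand-in for tokenization)."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query – a stand-in for
    embedding similarity – and return the top k."""
    q = words(query)
    scored = sorted(documents, key=lambda d: len(q & words(d)), reverse=True)
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt instructing the model to answer only from the
    retrieved context: the grounding step at the heart of RAG."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using only the context below. "
        "If the context is insufficient, say so.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical internal knowledge base.
docs = [
    "Invoice approvals above 10,000 GBP require two signatures.",
    "The cafeteria closes at 15:00 on Fridays.",
    "Expense claims must be filed within 30 days of purchase.",
]

prompt = build_grounded_prompt("What is the deadline for expense claims?", docs)
print(prompt)
```

Because the answer is anchored to retrieved, verifiable passages rather than the model's training data alone, a wrong or missing source is visible in the context block – which is exactly what makes RAG outputs easier to audit.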

Embedding a culture of careful, iterative AI use complements RAG, creating a continuous feedback loop that further strengthens trust and ensures insights are actionable and reliable across the organization.

Final thoughts

Every organization operating in the AI era faces the same challenge: deciding how far to trust the technology.

What separates success from failure is the ability to anticipate these errors, design ways of working that identify them quickly and adapt accordingly.

AI is neither infallible nor magical, but it is a great resource. Organizations that balance ambition with realism will be the ones that succeed.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
