How to reliably connect LLMs to real-world data and systems
MCP is great, but it isn't the whole AI answer
The Model Context Protocol (MCP), an open standard introduced by Anthropic in late 2024, standardizes how large language models (LLMs) connect to external tools, databases, and data sources.
It enables assistants such as Claude or Cursor to interact with files, APIs, and databases without bespoke, hard-coded integrations.
CEO, Memgraph.
This is why MCP is often compared to a “USB-C port for AI.”
Unlike USB-C, however, it carries risks, especially around whether the model truly understands what it’s querying and what the underlying data represents.
Without careful planning around this aspect of the protocol, AI projects can fall short of expectations. Let’s consider why.
The benefits of giving LLM tools access
MCP provides a far more effective way for LLMs to interact with external tools, APIs, and data sources. Instead of manually orchestrating multi-step pipelines—retrieving data, formatting it, injecting it into prompts, and parsing outputs—AI teams can expose capabilities directly to the model, allowing the LLM to decide what to use, when, and how to combine results.
That moves AI from static prompt-response loops toward more dynamic, agent-like behavior. Instead of hardcoding workflows, teams define tools and the model orchestrates them—querying databases, triggering workflows like sending emails or updating systems—choosing the right tools and sequencing them based on the task.
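The shift from hand-built pipelines to exposed capabilities can be sketched in a few lines. This is a minimal, illustrative registry-and-dispatch pattern, not the actual MCP SDK; the tool names and the shape of the model's tool-call message are assumptions for the example.

```python
# Sketch of MCP-style tool exposure: each capability is registered once,
# and the model picks among them at run time instead of following a
# hard-coded pipeline. All names here are illustrative.
from typing import Callable, Dict

TOOLS: Dict[str, Callable] = {}

def tool(name: str):
    """Register a function as a callable tool under a stable name."""
    def wrap(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("query_orders")
def query_orders(customer_id: str) -> list:
    # A real server would query a database; stubbed for the sketch.
    return [{"id": "o-1", "customer": customer_id, "total": 42.0}]

@tool("send_email")
def send_email(to: str, subject: str) -> str:
    return f"queued email to {to}: {subject}"

def dispatch(call: dict):
    """Execute a tool call emitted by the model,
    e.g. {"tool": "query_orders", "args": {"customer_id": "c-7"}}."""
    return TOOLS[call["tool"]](**call["args"])
```

The point is the inversion of control: the team defines `query_orders` and `send_email` once, and the model, not the developer, decides which to call and in what order.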
Both in theory and in practice, this approach reduces engineering overhead and increases flexibility. Model providers, frameworks, and platforms are increasingly supporting MCP-style interactions, and many developer tools now assume tool-based orchestration.
However, MCP introduces new challenges, notably tool overload. The temptation is to give the model access to as many tools as possible, assuming more capability should mean better outcomes, but that’s not always the case.
Just as giving an LLM too many options can increase the risk of hallucination, the same dynamic appears in deployment. As the number of available tools grows, the model’s ability to reliably select the correct one decreases, increasing the chance it picks the wrong tool or misuses it, producing unintended results.
In this setting, instead of hallucinating text, the model now hallucinates actions.
You can have too much of a good thing
So, what’s the path forward? Our experience shows progress comes from a minimal, tightly scoped set of tools tailored to the specific task. Complex workflows should also be decomposed into smaller steps, each with a clearly defined set of capabilities.
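One way to implement that scoping is to let each workflow step advertise only the tools it needs, rather than exposing the full catalog on every request. The step names and tool names below are hypothetical; the pattern is the point.

```python
# Sketch: tools scoped per workflow step, so the model choosing a tool
# for "triage" never even sees the tools meant for "resolve".
WORKFLOW_STEPS = {
    "triage":  {"search_tickets", "classify_ticket"},
    "resolve": {"lookup_customer", "send_email"},
    "report":  {"summarize", "post_metrics"},
}

def tools_for(step: str, all_tools: dict) -> dict:
    """Return only the tools allowed for this step of the workflow."""
    allowed = WORKFLOW_STEPS[step]
    return {name: fn for name, fn in all_tools.items() if name in allowed}
```

A model asked to triage a ticket now chooses among two tools instead of six, which directly addresses the selection-reliability problem described above.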
Next is context. An LLM may know how to use a tool, but not what to do with it. For example, a model might generate syntactically correct queries, but without understanding the underlying schema or data relationships, those queries may end up meaningless or incomplete.
That’s the equivalent of handing someone access to a vast filing system without an index. Without the structured knowledge needed to interpret data correctly or to decide which tool is appropriate, even well-designed MCP systems can behave unpredictably.
I am deliberately not discussing security issues, but they are far from trivial: think unauthorized data access, prompt injections that trigger unintended actions, and more. The bottom line is you need to know which tools your AI app used, why it chose them, and what actions were executed as a result.
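That audit requirement can be met with a thin wrapper around every tool, so each invocation is recorded before the result is returned. This is a minimal sketch with invented names, not a production logging system.

```python
# Sketch: wrap each tool so every call records what ran, with which
# arguments, and a preview of what came back.
import json
import time

AUDIT_LOG = []

def audited(name, fn):
    """Return a version of `fn` that appends an audit record per call."""
    def wrapper(**kwargs):
        result = fn(**kwargs)
        AUDIT_LOG.append({
            "ts": time.time(),
            "tool": name,
            "args": kwargs,
            # Truncated JSON preview keeps the log compact.
            "result_preview": json.dumps(result, default=str)[:200],
        })
        return result
    return wrapper
```

Wrapping tools at registration time means the audit trail is complete by construction, rather than depending on the model to report what it did.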
An answer to MCP’s shortcomings: GraphRAG
It’s clear that as helpful as it is, MCP is not a complete solution. It serves as an enabling layer, but does not structure knowledge, provide reliable context, or guide decision-making. Architectural choices are critical: improving the quality and structure of the context provided to the model makes a significant difference.
This is where approaches like retrieval-augmented generation (RAG) help, and increasingly where graph-based approaches are gaining attention, giving rise to GraphRAG, as first suggested by Microsoft.
Traditional RAG systems use vector search to retrieve relevant information. This approach helps reduce hallucinations but can struggle with complex relationships or implicit structures in the data. GraphRAG, however, extends this idea by introducing a knowledge graph layer, which encodes entities, relationships, and rules in a structured form.
This gives the model a clearer understanding of how data is connected and what it represents. In the context of MCP, this improves tool selection. When the LLM has access to structured knowledge, it can determine which tools are relevant to a given task, and it enables more controlled execution, guiding the model toward valid actions and away from risky or nonsensical ones.
For example, a graph can encode constraints such as permissions, dependencies, or business logic. This provides a form of guardrail that complements MCP’s action layer. The result is a more balanced system: MCP handles interaction and execution, RAG supplies relevant context, and the knowledge graph adds clarity, constraints, and reasoning support.
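The guardrail role of the graph can be sketched with a toy example: a handful of permission and dependency edges, checked before any tool call is executed. The roles, tools, and edges here are invented for illustration; a real knowledge graph would live in a graph database, not a Python dict.

```python
# Sketch: a tiny "knowledge graph" of (subject, relation, object) edges
# encoding permissions and dependencies, consulted before execution.
GRAPH = {
    ("role:support", "may_call", "refund_order"): False,
    ("role:manager", "may_call", "refund_order"): True,
    ("refund_order", "requires", "lookup_order"): True,
}

def allowed(role: str, tool: str, already_called: set) -> bool:
    """Permit a tool call only if the role may call it and all of the
    tool's declared dependencies have already been executed."""
    # Permission edge: an explicit False blocks the call outright.
    if not GRAPH.get((f"role:{role}", "may_call", tool), True):
        return False
    # Dependency edges: every required predecessor must have run.
    for (subj, rel, obj), present in GRAPH.items():
        if subj == tool and rel == "requires" and present and obj not in already_called:
            return False
    return True
```

Under these invented rules, a support agent can never trigger `refund_order`, and even a manager cannot refund an order the system has not first looked up: exactly the kind of structured constraint that steers the model away from risky or nonsensical actions.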
This combination helps reduce hallucination and misuse, two of the biggest risks in MCP-driven systems. By integrating GraphRAG with MCP-based workflows, developers can create systems in which models are not only capable of acting but are also better informed about when and how to act—bringing us closer to practical AI that is both powerful and reliable.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit