Why relying on unverified chatbot financial advice leads to costly errors
Public AI tools lack the accountability required for financial advice.
Since the start of 2026, safety for public, general-purpose AI tools such as ChatGPT and Gemini has appeared to be pulling in two different directions. The technology is moving deeper into everyday decisions, yet the companies building it are sending mixed signals about how tightly it should be controlled.
In January, OpenAI strengthened health guardrails in ChatGPT, limiting how the chatbot responds to medical queries and steering users toward professional advice - a subtle recognition that general-purpose AI can mislead people in high-stakes situations. Meanwhile, Anthropic has recently stepped back from parts of its voluntary AI safety pledge.
These contrasting moves expose a bigger question: where should guardrails for public AI actually sit? Healthcare has emerged as a clear red line, but another critical area is expanding through AI with far less scrutiny - finance.
Consulting public AI tools for financial guidance is quickly becoming routine, even as the real-world costs of following flawed suggestions mount.
The hidden cost: AI as the new “mate in the pub”
This shift is already happening at a huge scale. Research from Lloyds Bank suggests 28 million people are using public AI tools to help manage their finances or get guidance on money decisions. For many, tools like ChatGPT have become the digital version of the mate in the pub who always has a tip about where your money should go.
That comparison feels harmless until businesses start acting on the advice. In the chatbot's case, this is a 'mate' who appears to hold a PhD in every subject, able to answer with fluency and confidence - even when the underlying information and context are shaky.
And because systems like ChatGPT are designed to be helpful and satisfying for users, they can sometimes favor producing a clear, coherent response over emphasizing uncertainty, which can further reinforce the impression that the output is more reliable than it really is.
The risk with general-purpose AI tools rarely comes from malicious behavior; the reality is that these systems were never designed to deliver regulated financial advice. They generate answers based on patterns in training data rather than evaluating whether a decision suits a particular business.
That fluency leads business owners to trust a chatbot's explanation far too quickly, charging ahead with decisions while ignoring the fact that the system has no real accountability and no clear view of its own shortcomings.
Ultimately, confidence is easy to generate but accurate judgement is much harder.
The evidence: the damage is already happening
The damage is already being felt across the board - especially amongst accountants, who are being forced to pick up the pieces. Recent research found that half of UK accountants and bookkeepers say businesses have lost money after acting on incorrect AI-generated advice.
Across the profession, a familiar scenario is starting to play out. Finance professionals increasingly meet clients who arrive with a financial plan that began life as a chatbot prompt. What looked convincing on screen rarely survives professional scrutiny.
The cracks tend to appear during routine reviews. A tax rule has been misunderstood. An expense sits in the wrong category. A financial decision relies on outdated guidance. None of these errors appear catastrophic on their own, but they can quickly snowball into significant hits to a company’s bottom line through tax penalties and lost capital.
Financial errors rarely stay contained and a single flawed decision can quickly spiral into cash flow issues, compliance exposure or wider strategic challenges.
When clients lose out because of poor guidance from a human financial adviser, there is always the option of holding that adviser accountable. F1 boss Eddie Jordan, for example, famously sued HSBC for £5m in 2024 over unsuitable advice to invest in a 'low-risk' fund.
With a chatbot, however, the harsh reality is that there is no one else to answer to HMRC; the fallout from incorrect advice lands on you alone.
To make matters worse, many businesses assume public AI systems hold greater expertise than the professionals sitting in front of them. In reality, general-purpose models lack visibility of a company’s financial history, obligations or commercial pressures. Without that context, even the most confident advice can send a business off course.
Why regulation alone won’t solve the problem
AI is moving faster into financial decision-making than the protections designed to govern it. While financial advice has long operated inside a regulated professional framework, general-purpose AI tools are now entering the same space without equivalent boundaries.
Businesses are already weaving these tools into everyday financial decisions, often without guardrails or a clear understanding of the risks involved. The result is a widening gap between how quickly the technology is being used and how slowly oversight is developing.
Waiting for legislation alone leaves that gap exposed.
Technology companies have the opportunity to act sooner. Clear, product-level guardrails around financial advice would provide an immediate layer of protection. Chatbots already redirect users when medical questions become too specific. The same principle could apply when conversations drift toward investment strategies, tax decisions or business finance.
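To make the idea concrete, here is a minimal sketch of what such a guardrail might look like in practice - a simple topic filter that intercepts regulated-finance queries before the model ever answers. The keyword lists, redirect wording and function names below are illustrative assumptions, not any vendor's actual implementation.

```python
# Minimal sketch of a product-level guardrail for regulated financial topics.
# The topics, keywords and redirect message are illustrative assumptions,
# not a real chatbot's implementation.

REGULATED_TOPICS = {
    "investment": ["invest", "portfolio", "fund", "shares", "stocks"],
    "tax": ["tax", "hmrc", "vat", "deduction", "allowance"],
    "business finance": ["cash flow", "loan", "payroll", "dividend"],
}

REDIRECT_MESSAGE = (
    "This looks like a question about regulated financial advice. "
    "I can explain general concepts, but for decisions about your own "
    "money or business you should consult a qualified professional."
)

def guardrail_check(prompt: str) -> str | None:
    """Return a redirect message if the prompt touches a regulated topic."""
    text = prompt.lower()
    for topic, keywords in REGULATED_TOPICS.items():
        if any(keyword in text for keyword in keywords):
            return REDIRECT_MESSAGE
    return None  # Safe to pass the prompt through to the model.

# Example: the guardrail fires before the model is called.
if __name__ == "__main__":
    print(guardrail_check("Should I move my savings into a tech fund?"))
```

A production system would use a trained classifier rather than keywords, but the principle is the same: the check sits in the product, in front of the model, exactly as the medical guardrails already do.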
General-purpose AI excels at explaining concepts, summarizing information and speeding up routine admin, but financial advice is a different category altogether. It requires professional judgement, regulatory awareness and a deep understanding of context - strengths of specialized AI tools built specifically for these purposes, not of public AI systems. Drawing a firm line between those categories would remove much of the current confusion.
Businesses experimenting with AI also need a clearer understanding of where the technology helps and where it doesn’t. Chatbots can support financial workflows and accelerate routine tasks, but they cannot replace professional expertise. Treating them as assistants rather than advisers is where their real value lies.
Tackling the accountability gap
Treating a general chatbot like a finance director is a gamble that rarely pays off. These models offer a convincing performance without an ounce of professional liability. Real security lies in choosing specialized tools that respect the line between admin support and regulated advice.
Equally, accountants remain the vital filter, ensuring AI-generated insights actually align with real-world context. With this in mind, the goal for clients should be the best of both worlds: an accountant who understands the nuances of their business and who leverages AI to deliver outcomes with greater speed and accuracy.
This will be essential in navigating a year in which businesses face unprecedented uncertainty - one that demands the discipline to keep human judgement at the center of every high-stakes decision rather than simply relying on faster software. For businesses, that means knowing exactly where the code ends and the accountability begins.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
Stephen Edginton is Chief Product and Technology Officer at Dext.