The nicest AI in the room is the one you should actually worry about


AI agreeing with you can feel like progress. It feels efficient, aligned and reassuring, and it taps into something innately human: we all love to be right.

But, much like surrounding yourself with ‘yes’ men is deeply counterproductive, businesses have nothing to gain from AI that flatters their assumptions.

When AI provides quick, confident and frictionless responses that affirm exactly what the prompter already believes, nothing is being challenged and nothing meaningful is being learnt.

Bobby Brown, Founder and CEO at Nucleo.

We’re not talking about this risk enough. Especially when you consider that over a third of users in Irish businesses believe AI always produces factually accurate responses, and the figure is similar in the UK, where 36% say it’s ‘always accurate’.

Collectively, we’ve spent the last two years worrying about AI hallucinations and incorrect outputs, when an equally important danger is far more subtle: sycophancy.

Hallucinations vs. blind agreement

In April last year, OpenAI publicly rolled back a GPT-4o update after it became "overly flattering or agreeable", saying the model had skewed towards responses that were supportive but disingenuous.

Agreement isn’t the same as accuracy, and a model that mirrors a user’s preferences can end up laundering a flawed idea into something that feels objective.

In an enterprise setting, this is arguably more damaging than a random error slipping through the net, because blind agreement can harden bad judgement, reinforce bias and create a false sense of certainty.

Activity doesn’t equal value

Where I see organizations going wrong most often is when there’s a huge amount of AI activity, but very little AI value to show for it.

AI has become the tool of choice before the problem is properly understood. It’s being deployed because of pressure, because of FOMO, or because “everyone else is doing it” – not because it’s been identified as the right solution to a defined business challenge. That culture is also partly to blame for the attitude that AI is always right.

The temptation inside many firms is to treat AI like a shortcut to transformation. In a recent State of AI survey, 88% of global respondents said their organizations use AI in at least one business function, yet only 39% reported EBIT (Earnings Before Interest and Taxes) impact at enterprise level.

At the same time, 23% say they’re scaling agentic AI systems in the business, while 39% are still experimenting. In other words, there’s a clear disconnect between doing AI and actually getting ROI from it.

But, that’s not a technology problem. That’s a discipline problem.

Too many businesses are chasing speed over substance. This is also reflected in the policies we’re seeing come forward: there are now many examples of AI mandates, where usage of AI, rather than its impact, is tied to employee progression. If we tie success to use, we create the wrong culture.

There’s this constant pressure from the top to move fast and to be seen to be doing something, but as I often say: I can do it right, or I can do it now – I can’t do it right now. And when it comes to AI, getting it wrong quickly is far more expensive than getting it right deliberately.

Why the “junior colleague” model beats the “AI genius” idea

One of the most dangerous trends I see is just how quickly organizations elevate AI to a position it hasn’t earned.

We talk about AI like it’s a senior hire. We trust it like it has years of experience, and we rely on it as if it understands context, nuance and consequences. It simply doesn’t.

AI should be treated like a junior member of the team. A very capable one, yes – fast, efficient, and often surprisingly insightful – but still a junior.

AI needs to be challenged. It needs a ‘manager’ who spots the gap between confidence and competence. Critical thinking and a healthy dose of suspicion will always be required, and if you remove that layer of scrutiny, you create the perfect conditions for sycophancy to thrive.

AI stops being a tool for interrogation and instead becomes a mirror.

This framing also explains why prompt discipline matters. A vague prompt invites a vague answer. A well-governed prompt, with clear boundaries and escalation rules, gives an AI model a job it can actually complete properly.
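To make that concrete, here is a purely illustrative sketch – the wording is hypothetical, not a template from any particular vendor or deployment. A vague prompt might read: “What do you think of our Q3 pricing proposal?” A governed version of the same request might read: “Review our Q3 pricing proposal. Identify the three weakest assumptions and explain how each could fail. Do not endorse any conclusion you cannot support with the evidence provided. If the information is insufficient to judge, say so and flag the question for a human reviewer rather than guessing.” The first practically begs the model to flatter; the second sets boundaries, defines a job the model can actually complete, and builds in an escalation rule.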

The best use of AI in business is not to mimic a talented employee with perfect instincts; it’s to act as constructive friction: challenging the obvious answer, surfacing missing context and forcing deeper human decision-making. This is how AI becomes useful without becoming dangerously agreeable.

The shadow AI problem

There’s another layer to this that leaders often underestimate: what’s happening outside of official channels and processes.

When you deploy a corporate AI tool that is heavily sanitized, overly polite, and programmed to just agree with whatever the user inputs, it stops being a useful operational tool.

If the official AI doesn’t provide constructive responses or help people solve complex problems, employees will look elsewhere. Quietly, independently and without oversight, they will seek out unapproved, 'raw' models that actually challenge their work. That’s where shadow AI creeps in.

Microsoft has found that 71% of UK employees have used unapproved AI tools at work, with more than half (51%) doing so on a weekly basis.

Think about what that actually means in practice. Company data, customer information, and internal decision-making are all being fed into systems that businesses don’t control. And instead of getting a controlled, intelligent advantage, you get fragmented risk.

If your AI never disagrees with you, you’ve already got a problem

The organizations that will get real value from AI are the ones willing to slow down in the right places and think critically.

This means fixing the foundations – your data, your governance, your use cases – before even thinking about scaling. It means being intentional about how AI is used, where it’s trusted and where it must be challenged, and it means designing systems that don’t just produce answers, but provoke better questions.

Because when people start working around your AI strategy, that’s a big red flag.

Yes, it’s a governance issue, but it’s also a cultural one. People don’t route around policies because they’re rebellious by default; they seek out alternative paths when the approved process is unclear, clumsy or altogether absent.

If the official tools don’t help them to think better or move faster, your people will find ones that do – with or without permission.

So the goal here isn’t to build AI that always agrees, or even AI that’s always right; it’s to build AI that challenges in the right way, at the right time and for the right reason.

If AI is always telling you what you want to hear, you don't have an intelligent advantage, you just have a very expensive echo chamber.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
