Why more than half of AI projects could fail in 2026

In 2025, to borrow a phrase: the AI revolution is already here; it's just not evenly distributed. While individuals are seeing productivity gains from LLMs or newer agentic systems, larger projects struggle.
Look at the landscape, and for every success story of an engineer single-handedly "vibe-coding" a complex app, we see plenty of enterprise pilots stalling.
Research and industry forecasts consistently warn that between 60% and 90% of AI projects are at risk of failure by 2026, with failure defined as abandonment before deployment, failure to deliver measurable business value, or outright cancellation.
AI project failure isn't a model problem; it's a data and governance problem. However, it is solvable, and by solving it, organizations can not only make their AI efforts more viable but also reduce their organizational risk.
Why are organizations struggling with AI?
It's tempting to blame things like model choice, parameter tuning, or vendor selection for stalled proofs of concept. This is a new technology, so the instinctive response to a failed pilot is "you must be doing it wrong." In reality, the most common problem is more fundamental: messy data resulting from a lack of governance.
Gartner's guidance is stark: by 2027, 60% of organizations will fail to realize the value they expected from AI use cases because their governance is incohesive. Even if you ship features, you may still fail to achieve outcomes without a coherent governance framework and data that is "AI-ready".
Underlying data governance issues are also the root cause of problems like cost overruns and shadow AI: without usage guardrails, permissioning, and retention hygiene, compute costs can climb, and risk expands.
Data governance vs AI governance
Before we explore how each relates to a successful AI rollout, let's define both forms of governance.
Data governance is the work of finding, classifying, securing, retaining, and monitoring data across its lifecycle. It creates a framework for who can access data, how it's collected, stored, and used, and assigns responsibility to ensure consistency, prevent issues, and support better decision-making across the entire business.
AI governance, a relatively new discipline, complements data governance by outlining an organization's use of AI, ensuring it operates within legal and ethical boundaries and aligns with the organization's values and societal norms.
Data governance: from afterthought to AI enabler
Data governance has traditionally been considered mainly in terms of how it can help organizations avoid adverse outcomes. Many organizations have tackled data governance only as an afterthought, in the wake of a compliance failure or a data breach, or when their data is so unreliable that they're making obviously bad decisions.
With strong data governance, the thinking went, you can ensure audit trails are preserved and data is retained in line with regulations, avoiding a failed audit and a damaging regulatory penalty. You can also better protect your data by managing access to it and removing it when you are obligated to, reducing the likelihood and impact of a data breach.
With the advent of AI, data governance now has another selling point: it can enable enhanced innovation. AI needs data the same way an engine needs oil. From an afterthought for many organizations, governance has become an enabler.
Organizations that prioritize strong data governance can provide their AI platforms with data that is authentic, reliable, free from bias and error, and respectful of individuals' privacy.
When governance gaps go public
In last year's high-profile case involving Air Canada, the British Columbia Civil Resolution Tribunal found the airline liable after a site chatbot gave misleading bereavement‑fare guidance.
The underlying issue was the model confusing two similar (real) policies and hallucinating a link between them. The lesson isn't that "AI is dangerous"; it's that policies must be treated as authoritative, versioned content, and AI bots should retrieve only from approved sources, with human verification for sensitive claims.
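As a rough sketch of that guardrail (the policy store, version field, and sensitive-topic list below are hypothetical placeholders, not Air Canada's or any vendor's actual implementation), the pattern fits in a few lines of Python:

```python
from dataclasses import dataclass

@dataclass
class PolicyDoc:
    doc_id: str
    version: str     # policies are versioned, authoritative content
    approved: bool   # only governance-approved documents may be retrieved
    text: str

# Hypothetical approved-source store.
APPROVED_POLICIES = {
    "bereavement-fares": PolicyDoc(
        "bereavement-fares", "v2025-03", True,
        "Bereavement fares must be requested before travel is completed."),
}

# Claims in these areas require human verification before a customer sees them.
SENSITIVE_TOPICS = {"bereavement-fares", "refunds"}

def retrieve_context(topic: str):
    """Return (approved source text or None, needs_human_review)."""
    doc = APPROVED_POLICIES.get(topic)
    if doc is None or not doc.approved:
        return None, False  # no approved source: the bot must not improvise
    return doc.text, topic in SENSITIVE_TOPICS

context, needs_review = retrieve_context("bereavement-fares")
if context is None:
    print("No approved policy found; escalate to a human agent.")
else:
    # In a real system, `context` would be the only text the model may
    # ground its answer on, and `needs_review` gates delivery to the user.
    print(f"Grounding text: {context!r} (human review: {needs_review})")
```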
What does good look like?
For organizations that want to be in the half of AI projects that do succeed next year, the path begins with establishing strong data governance, ensuring your data is AI-ready, and focusing on compliance.
Establish AI-ready data: provenance, context, and trust
The starting point for good data governance is to develop an understanding of your data, both structured and unstructured, so you can trust it, ensure its provenance, and guarantee that it's "AI-ready".
AI-ready data is governed, observable, and permissioned. This can be easier said than done, as different systems have different ontologies or metadata models, and you need to ensure that enough context is provided for an LLM or agentic system to provide valuable responses to queries.
You need to do this continuously, at scale. Clear ownership, repeatable pipelines, and continuous testing ensure that data flows securely to the right place. Readiness isn't a tool; it's a process.
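To make that concrete, here is a minimal sketch of a readiness gate that could run as a repeatable pipeline step; the metadata field names are illustrative assumptions, not a standard schema.

```python
# Hypothetical readiness gate: a record enters the AI retrieval index only
# if it carries the governance metadata needed to trust and trace it.
REQUIRED_METADATA = {"owner", "source_system", "classification", "retention_class"}

def is_ai_ready(record: dict):
    """Check one record's metadata; return (ready, missing fields)."""
    missing = sorted(REQUIRED_METADATA - record.get("metadata", {}).keys())
    return not missing, missing

records = [
    {"id": "doc-1", "metadata": {"owner": "finance", "source_system": "erp",
                                 "classification": "internal",
                                 "retention_class": "7y"}},
    {"id": "doc-2", "metadata": {"owner": "hr"}},  # provenance incomplete
]

for rec in records:
    ready, missing = is_ai_ready(rec)
    action = "index" if ready else f"quarantine (missing: {', '.join(missing)})"
    print(rec["id"], "->", action)
```

In practice, a check like this runs continuously, for example as a test stage in the pipeline that feeds your retrieval index.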
Focus on compliance
Once you know what you have, you can take action to make it compliant, secure, and error-free. Start by removing the ROT: the redundant, obsolete, and trivial data clogging up your systems.
ROT makes it harder to comply with privacy or records regulations, makes a data breach more damaging, and yes, means your AI models may provide substandard or noncompliant output. For the data that is left, apply retention schedules and minimize sensitive data, removing it in line with relevant regulations.
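As an illustration only, ROT triage plus retention can be reduced to a rule like the sketch below; the record classes and retention periods are hypothetical, and real schedules come from your applicable regulations.

```python
from datetime import date, timedelta

# Hypothetical retention schedule: how long each record class must be kept.
RETENTION = {"contract": timedelta(days=7 * 365),
             "chat_log": timedelta(days=365)}

def triage(record: dict, today: date) -> str:
    """Classify a record as ROT, past retention, or retained."""
    if record.get("duplicate_of"):             # Redundant
        return "delete (redundant)"
    if record["record_class"] == "trivial":    # Trivial
        return "delete (trivial)"
    keep_for = RETENTION.get(record["record_class"])
    if keep_for and today - record["created"] > keep_for:  # Obsolete
        return "dispose (past retention)"
    return "retain"

print(triage({"record_class": "chat_log", "created": date(2022, 1, 5),
              "duplicate_of": None}, date(2025, 11, 1)))
# -> dispose (past retention)
```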
Audit data access and sharing
There is no more obvious way to demonstrate the connection between data and AI governance than a holistic review of a company's access management. Have you audited your employees' access to data recently? You need to, before you introduce an AI assistant like Microsoft Copilot, which can act as an accelerant for any existing issues with over-permissioned users or overshared data.
A study by Concentric found 15% of business-critical resources were at risk of oversharing. AI platforms like Copilot and ChatGPT Team inherit data access configurations, so bringing them into an organization without adequate preparation can lead to unintended consequences, otherwise known as "own goals."
If an employee can access specific files, their Copilot can too, so an over-permissioned user can ask Copilot for the CEO's salary or request sensitive employee performance records, breaching privacy policies and leading to internal chaos. And if an over-permissioned user were hacked, a threat actor could do much worse.
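A toy version of such an audit is essentially a diff between granted access and what each role should need; the role-to-resource baseline below is invented for illustration, not Microsoft's permission model.

```python
# Hypothetical least-privilege baseline: what each role should need.
ROLE_BASELINE = {
    "engineer": {"code_repos", "design_docs"},
    "hr_partner": {"employee_records"},
}

# Effective access as actually granted (e.g. exported from file shares or an IdP).
EFFECTIVE_ACCESS = {
    "alice": ("engineer", {"code_repos", "design_docs", "payroll"}),
    "bob": ("hr_partner", {"employee_records"}),
}

def find_overpermissioned(access: dict) -> dict:
    """Return, per user, any resources granted beyond their role's baseline."""
    findings = {}
    for user, (role, granted) in access.items():
        excess = granted - ROLE_BASELINE.get(role, set())
        if excess:
            findings[user] = excess  # an assistant like Copilot would inherit this
    return findings

print(find_overpermissioned(EFFECTIVE_ACCESS))
# -> {'alice': {'payroll'}}
```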
Establish a centralized AI governance hub
Establish a thin control plane that sits above your data sources, AI services, and user interfaces to declare policy once and enforce it everywhere: consistently, measurably, and with an audit trail.
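In code terms, that control plane can start as nothing more than a policy check and an audit record wrapped around every model call. A minimal sketch, with hypothetical policy fields (real enforcement would live in shared middleware, not application code):

```python
import time

# Hypothetical declarative policy: written once, enforced on every AI call.
POLICY = {
    "allowed_classifications": {"public", "internal"},
    "blocked_purposes": {"individual_performance_rating"},
}

AUDIT_LOG = []  # in practice, an append-only audit store

def governed_ai_call(user: str, purpose: str, doc: dict, prompt: str) -> str:
    """Check policy and record an audit event before any model is invoked."""
    allowed = (doc["classification"] in POLICY["allowed_classifications"]
               and purpose not in POLICY["blocked_purposes"])
    AUDIT_LOG.append({"ts": time.time(), "user": user, "purpose": purpose,
                      "doc": doc["id"], "allowed": allowed})
    if not allowed:
        return "Request blocked by AI governance policy."
    # Stand-in for the real model call, grounded only on the approved doc.
    return f"[model answer about {doc['id']}: {prompt!r}]"

print(governed_ai_call("alice", "summarize",
                       {"id": "doc-9", "classification": "internal"},
                       "Summarize the Q3 plan"))
print(AUDIT_LOG[-1])
```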
The companies that will scale AI in 2026 are not the ones with the flashiest demos; they're the ones that govern their data and their AI with the same discipline they apply to finance or safety.
Continuously manage your metadata to ensure your inputs are trustworthy and compliant. Stand up a governance control plane so your models behave predictably and responsibly. Do those two things well, and you don't just ship more AI; you ship AI that works, at a lower risk profile.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Anthony Woodward is Co-Founder and CEO at RecordPoint.