What fighter pilots can teach us about enterprise AI decisions
Decision traceability and human judgment in enterprise AI
In the 1950s, a U.S. Air Force pilot named John Boyd made an unusual claim: starting from a disadvantage, he could defeat any opponent in air combat in under 40 seconds.
He rarely lost the bet. Boyd’s insight centered on decision speed: the ability to interpret signals, adapt quickly, and act before the opponent could respond.
Over time, Boyd expanded this insight into a broader theory of decision-making known as the OODA loop – Observe, Orient, Decide, Act – which describes how individuals and organizations process information and translate it into action.
Today, that same decision dynamic is beginning to emerge inside enterprise AI systems.
As artificial intelligence moves from analysis into operational workflows, it increasingly participates in the decision cycle itself, analyzing signals, generating interpretations, and proposing actions.
The challenge for organizations is to ensure humans remain inside the decision loop as AI systems begin to influence operational decisions.
For consequential decisions, people must still evaluate evidence, apply judgment, and ultimately take responsibility for the outcome.
The Missing Context Behind AI Decisions
Every time an employee uses AI tools at work – asking a question, refining a prompt, reviewing a recommendation, or exploring a dataset – more is created than a simple output; a decision trail forms around the interaction.
What triggered the inquiry?
Which data sources were consulted?
How were conflicting signals interpreted?
Why was one course of action chosen over another?
Taken together, these steps form the context behind a decision.
Historically, most enterprise systems have focused on capturing outcomes. A report is generated. A transaction is approved. A recommendation is accepted or rejected. The reasoning that led to those outcomes is often scattered across emails, dashboards, and conversations.
AI interactions accelerate this dynamic. Much of the reasoning now unfolds inside conversational interfaces or automated workflows that were never designed to serve as systems of record. The result is that the logic behind important decisions can become ephemeral: visible in the moment but difficult to reconstruct later.
When Decisions Leave No Institutional Memory
As AI becomes embedded in critical workflows, the reasoning behind decisions increasingly involves both humans and machines.
Analysts may rely on AI to surface relevant signals. Managers may use AI-generated summaries to interpret trends. Automated systems may propose recommendations based on large volumes of data.
Yet once a decision is reached, the chain of reasoning that produced it often disappears. Without a record of how the decision was formed, organizations lose the ability to revisit it later.
That matters for several reasons.
If an outcome proves problematic, it may be difficult to determine what evidence influenced the original decision. As regulatory expectations evolve around AI-assisted decision-making, organizations may need to demonstrate how automated insights shaped particular outcomes.
And without access to the reasoning behind past decisions, teams lose the ability to learn from experience and improve future decision processes.
In effect, decisions become transient events rather than valuable organizational knowledge.
A Lesson From Boyd’s Decision Framework
John Boyd’s work offers an instructive lens for thinking about this challenge. His OODA framework describes how individuals and organizations interpret information and translate it into action. While the model is often associated with speed, Boyd emphasized that the most important phase is orientation.
Orientation is the moment when incoming information is interpreted through experience, context, and mental models. It determines what signals are noticed, which explanations seem plausible, and what options appear viable.
In complex environments, orientation is rarely straightforward. Information is incomplete. Signals arrive from multiple sources. Different teams may see different parts of the problem.
Modern enterprises face a similar dynamic. Data lives across operational systems, financial platforms, collaboration tools, and external feeds. AI systems help surface patterns across this landscape, but they also introduce a new layer of reasoning into the decision process.
Without a way to capture how information was interpreted and used, organizations lose visibility into the orientation phase of decision-making – the very stage where judgment is formed.
The Problem With Ephemeral AI Workflows
Many AI-driven workflows today function as closed loops. A system retrieves information, generates a response, and moves on. The reasoning that connects evidence to conclusions often remains invisible.
In practice, effective decision systems must remain open loops, where AI surfaces evidence and proposes conclusions, but a human remains responsible for interpreting context, validating the evidence, and making the final judgment.
The distinction matters because it determines where accountability sits. In a closed loop, accountability is diffused; no single person owns the reasoning. In an open loop, a human evaluates the evidence, applies judgment, and takes responsibility for the outcome.
This reflects a fundamental design choice about how organizations use intelligence. Closed loops optimize for speed. Open loops optimize for judgment. In environments where decisions carry real consequences, judgment must prevail.
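To make the distinction concrete, here is a minimal Python sketch of the two patterns. Every name in it is hypothetical rather than any product's actual API; the point is simply who authorizes the call to act.

```python
# Illustrative sketch: the difference between a closed and an open loop
# is where the final call to act() comes from. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Proposal:
    """An AI-generated recommendation, carrying its evidence with it."""
    action: str
    evidence: list[str]
    rationale: str

def act(action: str) -> None:
    print(f"Executing: {action}")

# Closed loop: the system acts on its own output. Fast, but no one
# owns the reasoning, and accountability is diffused.
def closed_loop(proposal: Proposal) -> None:
    act(proposal.action)

# Open loop: the system stops at the proposal. A named human reviews
# the evidence and either authorizes or rejects the action, so the
# judgment is attributable to a person.
def open_loop(proposal: Proposal, reviewer: str, approved: bool) -> None:
    if approved:
        act(proposal.action)
    else:
        print(f"{reviewer} rejected proposal: {proposal.rationale}")
```

What matters in the open variant is not the extra branch but the named reviewer: the judgment belongs to someone.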
Ephemeral, closed-loop reasoning becomes especially problematic as AI begins to influence operational decisions rather than simply supporting analysis.
If the reasoning behind those decisions cannot be revisited, several consequences follow. Governance becomes harder because organizations cannot easily demonstrate how conclusions were reached. Institutional learning slows because teams cannot examine past reasoning and refine their approaches.
And decision processes become dependent on tools that were never designed to preserve the context behind important judgments. The deeper risk is that the decision process itself disappears once the output is delivered.
Treating Decisions as Durable Artifacts
One way to address this challenge is to rethink what enterprises record when these decisions are made. Instead of capturing only outcomes, organizations can treat decisions as structured artifacts that preserve the reasoning behind them. A decision record might include the initial signal that triggered the investigation, the data sources consulted, the analysis performed, and the final judgment reached.
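As a rough illustration, such a record can be a small structured object appended to a durable log. The sketch below assumes hypothetical field names and a JSON-lines file; it is not a standard schema, just the shape of the idea.

```python
# A minimal sketch of a decision record as a durable artifact, following
# the fields described above. Field names and the JSON-lines storage are
# illustrative assumptions, not a standard schema.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    signal: str            # what triggered the investigation
    sources: list[str]     # data sources consulted
    analysis: str          # how conflicting signals were interpreted
    judgment: str          # the final decision and why it was chosen
    decided_by: str        # the accountable human
    decided_at: str        # ISO-8601 timestamp

def save_record(record: DecisionRecord, path: str) -> None:
    """Append the record to a JSON-lines log so it can be revisited later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical example of a recorded decision:
record = DecisionRecord(
    signal="Anomalous spike in failed payment transactions",
    sources=["payments-db", "fraud-model output", "on-call discussion"],
    analysis="Spike correlated with a gateway deploy, not with fraud",
    judgment="Roll back the gateway release; no fraud escalation",
    decided_by="j.doe",
    decided_at=datetime.now(timezone.utc).isoformat(),
)
save_record(record, "decisions.jsonl")
```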
Capturing this context transforms decisions from transient events into durable knowledge. Teams can revisit earlier conclusions, understand the evidence that shaped them, and refine decision processes over time.
This approach also reflects a broader shift in how value is created in AI-enabled organizations. The most important asset may not simply be the data being analyzed or the models performing the analysis, but the reasoning that emerges when humans and machines interpret that information together.
The Strategic Layer Above the Model
Another reality of enterprise AI is that models will change. New systems will emerge, costs will shift, and different teams will adopt different tools depending on their needs.
If the reasoning behind decisions remains embedded inside specific tools, organizations risk losing continuity each time technology evolves.
Capturing decision context at the organizational level creates a strategic layer above the model itself. It allows enterprises to change tools while preserving the most valuable part of the process: how decisions are made.
In Boyd’s terms, it strengthens the organization’s ability to orient: to interpret signals and act with confidence even as conditions change.
Decision Traceability as Infrastructure
As artificial intelligence becomes part of everyday decision-making, the ability to trace decisions will likely become foundational rather than optional. Enterprises already invest heavily in data governance, auditability, and access control. Decision traceability represents the next step in that evolution, enabling organizations to see how data is actually used to guide actions and decisions.
Organizations that capture and analyze decision context gain a powerful advantage. They can observe how decisions unfold across teams, identify where assumptions break down, and continuously improve how judgment is applied in complex environments.
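To suggest what that analysis might look like in practice, the sketch below extends the earlier hypothetical decision log with an assumed outcome field, set when a decision is later reviewed, and counts which sources most often fed decisions that were flagged as problematic.

```python
# A sketch of the kind of analysis durable decision records make possible:
# finding which data sources most often sat behind decisions later flagged
# as problematic. The "outcome" field and decisions.jsonl log are
# assumptions carried over from the earlier sketch, not a real system.

import json
from collections import Counter

def sources_behind_flagged_decisions(path: str) -> Counter:
    """Count sources cited by decisions whose outcome was later flagged."""
    counts: Counter = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("outcome") == "flagged":
                counts.update(record.get("sources", []))
    return counts

# Usage: print(sources_behind_flagged_decisions("decisions.jsonl").most_common(5))
```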
Artificial intelligence will undoubtedly continue to advance. Models will become faster, more capable, and more widely adopted. The long-term advantage will belong to organizations that can understand and continually refine how decisions are made.
More than half a century ago, John Boyd showed that success often comes down to who can interpret signals and act effectively in uncertain environments.
Boyd also believed decisions should be revisited after the fact, examining how signals were interpreted, what assumptions proved wrong, and how actions shaped the outcome so the next decision could be made with greater clarity.
In the age of AI, that lesson may be more relevant than ever.
This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.
The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/pro/perspectives-how-to-submit
Chief Product Officer, Axonis.