AI agents are flashy, but machine learning still pays the bills


You’d be forgiven for thinking it’s impossible for something as frequently discussed as machine learning to be actually, functionally overlooked.

And yet, here we are. Machine learning has seemingly slipped from its rightfully earned pedestal, and its current state is a baffling one.

Over the last two years, agentic systems have snagged and kept the AI spotlight. These agents promised autonomous workflows and natural language orchestration—and, in some cases, delivered.


But that’s beside the point. Whatever these AI tools did or failed to do, the spotlight stayed fixed on them, and machine learning has suffered for it.

Michele Tucci

Co-Founder and Chief Strategy Officer at Credolab.

At no point did machine learning actually lose value. The industry, however, forgot what it actually takes to operationalize AI inside real businesses, and LLMs can’t do that alone.

It’s worth remembering that machine learning didn’t go anywhere. In fact, it is the core technology used to create and power LLMs. In the hierarchy of AI, large language models are a specialized subset of deep learning, which is itself a subset of machine learning.

Strip away the layers and what remains is ML, doing the heavy lifting.

Distracted executives in strange new worlds

Executive perception of AI can be divided into three distinct phases. Up until about 2023, AI had a certain novelty factor to starry-eyed C-suites. Those first GenAI demos indeed felt like magic.

Executives saw models writing code, summarizing reports, and generating images. Although impressive, they remained somewhat unclear on the exact operational use. This didn’t stop them from adopting it.

At some point around the middle of 2023, executives stopped asking whether it should be used, and started asking how many different areas they could push it into, necessary or not.

These decision-makers catapulted their AI strategy from ‘useful tool in certain contexts’ into ‘competitive checkbox for everything that pleases shareholders.’ Extensive pilots with vague KPIs (and a noticeable lack of measurable ROI) abounded.

That lasted until early 2024, when reality set in and we found ourselves in the current phase. Pragmatism has mostly returned. Executives have begrudgingly realized that while AI agents are exciting, they are also expensive to run, difficult to control, and poorly suited to high-precision decisions.

As is typical of such sobering moments, many organizations were forcibly reminded of the value of the fundamentals. Data quality, feature engineering, and the traditional ML models that quietly power high-velocity digital operations have never looked better.

Was it ML all along?

The LLM-centric hype cycle that dominated the last several years of AI did not unseat ML’s quiet supremacy in a few select domains.

It’s not a competition, but classical ML models still massively outperform agentic systems in high-velocity, low-latency, high-precision decision environments. Gradient boosting machines, random forests, and logistic regression do what agentic systems can’t.

An agent cannot ‘ponder’ whether a payment transaction processed through a fintech is fraudulent without running up a ridiculous bill. To do that cheaply, you need a random forest that can generate a probability score in milliseconds at only fractions of a cent per million inferences.
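To make that concrete, here is a minimal sketch of the kind of random-forest scoring described above, using scikit-learn. The feature names and data are synthetic placeholders, not a real fraud schema; a production system would use engineered features and a properly validated model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for transaction features:
# amount (scaled), hour-of-day (scaled), merchant risk score
X = rng.random((1000, 3))
# Toy fraud label: high amount + risky merchant
y = (X[:, 0] + X[:, 2] > 1.2).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Scoring a new transaction is a single cheap inference,
# returning a probability rather than a generated narrative.
new_txn = np.array([[0.9, 0.5, 0.8]])
fraud_prob = model.predict_proba(new_txn)[0, 1]
print(f"fraud probability: {fraud_prob:.3f}")
```

The point of the example is the shape of the operation: one forward pass through pre-trained trees yields a calibratable score in milliseconds, with no per-call token costs.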

Behind the scenes, ML makes sense all over. In ecommerce, ML determines recommendations, pricing, and inventory levels at scale. In cybersecurity, anomaly detection models scan millions of data points per second, well exceeding the highest capability of LLM-based agents.

In mobility and logistics, ML forecasts demand, optimizes routing, and adjusts allocations with tight latency constraints. If you want AI to give your operational decision-making an actual backbone, you’re going to turn to traditional ML, not agents.

Neglecting ML foundations at your own peril

Many organizations today are suffering from what I call Sistine Chapel Syndrome. Everyone is devoting all their attention to the dazzling ceiling, and no one is looking at the foundation keeping the whole structure standing. That’s a problem for organizations where the marble underfoot is starting to crack.

The “garbage in, garbage out” rule didn’t disappear because the interface now looks like a chatbot. If anything, it got worse. An agent fed poor data will not only return incorrect numbers but confidently deliver plausible-sounding narratives that lead teams into poor decisions.

When the foundational ML processes are weak, agents amplify those weaknesses rather than compensate for them. When companies skip data quality checks, let feature engineering slide, and fail to adequately monitor and retrain pipelines, that ceiling comes crashing down. The rise of agents should motivate organizations to double down on ML hygiene.

Two is better than one

AI isn’t a zero-sum game between ML and agents. The best teams make them work together. To borrow from Daniel Kahneman, the ML should be fast, automatic, and precise, charged with handling predictions and classification. The LLMs ought to be slow but smart, sorting through reasoning, orchestration, and interpretation.

That split shouldn’t be 50/50. Numbers aren’t the point here at all, since the right balance depends on outcomes. Success should be measured by tangible business impact (like an X% gain in operational efficiency), not just by how sophisticated the underlying models appear.
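One way the split above can be wired up is a thresholded pipeline: the fast ML model scores everything, and only the ambiguous gray zone pays the cost of the slower LLM path. This is a hypothetical sketch; `ml_score`, `llm_review`, and the thresholds are all illustrative stand-ins, not a real API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    score: float
    action: str
    explanation: Optional[str] = None

def ml_score(features: dict) -> float:
    # Stand-in for a trained classifier's probability output.
    return min(1.0, 0.002 * features.get("amount", 0))

def llm_review(features: dict) -> str:
    # Stand-in for an LLM call; in practice this is the slow, costly path.
    return f"Flagged for review: amount={features['amount']}"

def decide(features: dict, low: float = 0.2, high: float = 0.8) -> Decision:
    score = ml_score(features)
    if score < low:
        return Decision(score, "approve")
    if score > high:
        return Decision(score, "block")
    # Only the gray zone is routed to the expensive reasoning layer.
    return Decision(score, "review", llm_review(features))

print(decide({"amount": 50}).action)   # clearly low risk: ML alone decides
print(decide({"amount": 250}).action)  # gray zone: escalated for review
```

The design choice to measure is exactly the one the paragraph names: how often the cheap path suffices, and whether escalations to the slow path produce measurable business impact.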

ML is cheaper, more reliable, and easier to audit. It’s already powering most mission-critical decisions. Any given agent will only be as good as the ML layer beneath it. This split might even keep you out of trouble, since regulators are keen on the kind of explainability that can’t be gleaned from probability-driven token generators.

ML, all over again

The flashiest agents will not send you sailing past your competitors. Clean, well-managed data pipelines built on strong ML baselines just might. Add robust monitoring and retraining cycles, clear interfaces between agents and predictive models, and top it all off with strict cultural discipline around model governance, and you’re in it for the long haul.

Investing in invisible layers will never be glamorous, but the lesson of the past few years is that machine learning was essential the whole time. Perhaps it was only overlooked because it works so well that it faded into the background. Wherever the industry’s attention goes next, value creation will remain overwhelmingly in ML’s ability to make reliable predictions at scale.

ML keeps everything standing. More leaders ought to realise that the path to powerful agentic AI runs through—and only through—a strong ML foundation. If you want to keep the ceiling gorgeous, you’d better start repairing those cracks in the floor.


