"The height of nonsense": Oracle co-founder Larry Ellison’s 1987 argument that not everything should be AI makes perfect sense in 2026

Larry Ellison, CTO and Chairman of Oracle
(Image credit: Oracle)

In 1987, long before artificial intelligence became the mass-market obsession it is today, Computerworld convened a roundtable to discuss what was then a new and unsettled question: how AI might intersect with database systems.

What makes the discussion notable in hindsight is not the optimism around AI, which was common at the time, but Ellison’s repeated insistence on limits.

While others described AI as a new architectural layer or even a "new species" of software, Ellison argued that intelligence should be applied sparingly, embedded deeply, and never treated as a universal solution.

AI merely a tool

“Our primary interest at Oracle is applying expert system technology to the needs of our own customer base,” Ellison said. “We are a data base management system company, and our users are primarily systems developers, programmers, systems analysts, and MIS directors.”

That framing set the tone for everything that followed. Ellison was not interested in AI as an end-user novelty or as a standalone category. He saw it as an internal tool, one that should improve how systems are built rather than redefine what systems are.

Many vendors treated expert systems as a way to replicate human decision-making wholesale. Fellow panelist Tom Kehler described systems that encoded experience and judgment to handle complex tasks such as underwriting or custom order processing.

Another panelist, Landry, went further, arguing that AI could form the architecture for an entirely new generation of applications, built as collections of cooperating expert systems.

Ellison pushed back on this notion, prompting moderator Esther Dyson to ask: "Your vision of AI doesn’t seem to be quite the same as Tom Kehler’s, even though you have this supposed complementary relationship. He differentiates between the AI application and the data base application, whereas you see AI merely as a tool for building data bases and applications."

“Many expert systems are used to automate decision making,” Ellison replied. “But a systems analyst is an expert, too. If you partially automate his function, that’s another form of expert system.”

Ellison drew a clear line between processes that genuinely require judgment and those that don't. In doing so, he rejected what might now be called AI maximalism.

“In fact, not all application users are experts or even specialists,” he said. “For example, an order processing application may have dozens of clerks who process simple orders. Instead of the order processing example, think about checking account processing. Now, there are no Christmas specials on that. There are no special prices. Instead, performance is all-critical, and recovery is all-critical.”


"The height of nonsense"

When Dyson suggested a rule such as automatically transferring funds if an account balance dropped below a threshold, Ellison was blunt.

“That can be performed algorithmically because it’s unchanging,” he said. “The application won’t change, and to build it as an expert system, I think, is the height of nonsense.”
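Ellison’s point translates almost directly into code: a fixed rule like the one Dyson described is a few lines of deterministic logic, with no inference engine or knowledge base involved. The sketch below is purely illustrative; the function name, threshold, and amounts are hypothetical, not anything Oracle shipped.

# A minimal, hypothetical sketch of Dyson's example: transfer funds
# whenever a balance drops below a fixed threshold. The rule never
# changes, so plain deterministic code is enough.

def auto_transfer(balance: float, threshold: float = 500.0,
                  top_up: float = 1000.0) -> float:
    """Return the amount to move into the account, or 0.0 if none is needed."""
    if balance < threshold:
        return top_up - balance  # bring the account back up to the top-up level
    return 0.0

print(auto_transfer(420.0))  # 580.0 -- transfer triggered
print(auto_transfer(750.0))  # 0.0   -- no transfer needed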

This was a striking statement in 1987, when expert systems were widely promoted as the future of enterprise software. Ellison went further, issuing a warning that sounds surprisingly modern.

“And so to say that a whole generation is going to be built on nothing but expert systems technology is a misuse of expert systems. I think expert systems should be selectively employed. It is human expertise done artificially by computers, and not everything we do requires expertise.”

Rather than applying AI everywhere, Ellison wanted to focus it where it changed the economics or usability of system development itself. That led him to what he called fifth-generation tools, not as programming languages, but as higher-level systems that eliminated procedural complexity.

“We see enormous benefits in providing fifth-generation tools,” he said. “I don’t want to use the word ‘languages,’ because they really aren’t programming languages anymore. They are more.”

He described an interactive, declarative approach to building applications, one where intent replaced instruction.

“I can sit down next to you, and you can tell me what your requirements are, and rather than me documenting your requirements, I’ll sit and build a system while we’re talking together, and you can look over my shoulder and say, ‘No, that’s not what I meant,’ and change things.”

The promise was not just speed, but a change in who controlled software.

“So not only is it a productivity change, a quantitative change, it’s also a qualitative change in the way you approach the problem.”


Not anti-AI

That philosophy carried through Oracle’s later product strategy, from early CASE tools to its eventual embrace of web-based architectures. A decade later, Ellison would argue just as forcefully that application logic belonged on servers, not on PCs.

“We’re so convinced that having the application and data on the server is better, even if you’ve got a PC,” he told Computerworld in 1997. “We believe there will be almost no demand for client/server as soon as this comes out.”

By 2000, he was even more forthright.

“People are taking their apps off PCs and putting them on servers,” ZDNET reported Ellison as saying. “The only things left on PCs are Office and games.”

In retrospect, Ellison’s predictions were often early and sometimes overstated. Thin clients did not replace PCs, and expert systems did not transform enterprise software overnight. Yet the direction he described proved durable.

Application logic moved to servers, browsers became the dominant interface, and declarative tooling became a core design goal across the industry.

What the 1987 roundtable captures is the philosophical foundation of that shift. While others debated how much intelligence to add to applications, Ellison questioned where intelligence belonged at all.

He treated AI not as a destination, but as an implementation detail, valuable only when it reduced complexity or improved leverage.

As AI once again dominates enterprise strategy discussions, the caution embedded in Ellison’s early comments feels newly relevant.

His core argument was not anti-AI, but anti-abstraction for its own sake. Intelligence mattered, but only when it served a larger architectural goal.

In 1987, that goal was making databases the center of application development. Decades later, the same instinct underpins modern cloud platforms. The technology has changed, but the tension Ellison identified remains unresolved: how much intelligence systems need, and how much complexity users are willing to tolerate to get it.



Wayne Williams
Editor

Wayne Williams is a freelancer writing news for TechRadar Pro. He has been writing about computers, technology, and the web for 30 years. In that time he wrote for most of the UK’s PC magazines, and launched, edited and published a number of them too.
