The AI conversation is a mess — and that’s stopping us from making good decisions


The term AI has turned into a catch-all. It’s used to describe everything from ChatGPT to cancer detection tools to toothbrushes. No wonder everyone is confused.

Interestingly, even the experts don't always agree. There are plenty of broadly accepted terms, but start asking people how to classify AI and the definitions diverge fast.

So what actually counts as AI? Let’s get into the basics. And look, I know a conversation about terminology might sound dull. But stick with me. Because I really do believe that if we want to feel confident in the age of AI (whether that means using these tools or choosing not to), we need a shared understanding of what we’re talking about that’s separate from the hype.

Why AI definitions feel messy

“The term AI has become meaningless in some ways because people often use it synonymously with technology in general,” Vasant Dhar, a Professor at NYU Stern and author of Thinking With Machines: The Brave New World of AI, explains.

But the problem is that AI isn’t one thing. “AI is many different technologies,” Rupert Shute, Professor of Practice in Emerging Technology Governance and Regulation at Imperial College London, tells me. But the popularity of tools like ChatGPT, which we call generative AI, has muddied the waters.

“Generative AI is drowning everything else out, which is a shame,” Shute says. “Because most actual value comes from other classes of AI that we’ve been using for decades.”

You're already using AI – you just don't know it

It’s sort of no surprise we use AI to describe tech more generally, because a lot of our tech does have AI elements built into it. Most of it just doesn’t feel that new or futuristic because it’s been around long enough that we no longer notice it.

“There are many types of AI quietly working behind the scenes,” Thiago Ferreira, CEO and Founder at Elevate AI Consulting, an AI training and consultancy company, tells me. “Things like spam filters, fraud detection, medical imaging tools, recommendation systems, or even the way your phone sorts your photos are all forms of AI.”

When you list these examples, he says, people often respond with: “Oh, I didn’t realize that counted as AI.”

What is generative AI?

I’ve mentioned generative AI a few times, but it’s worth getting clear on this definition.

Generative AI is a type of AI that creates new content in response to a prompt. It can create text, images, audio, code, explanations, and more. It doesn’t go and find information; it generates something new based on patterns it has learned from lots of data.

Chatbots that most people use (like ChatGPT and Gemini) are a form of generative AI. They run on large language models (LLMs). The LLM is the part that has learned from vast amounts of data; think of this as the engine. The chatbot is the conversational interface that lets you interact with it; think of this as the steering wheel.

It’s easy to see why this subset of AI now dominates public understanding. “Generative AI feels tangible,” Ferreira says. “It generates unique content based on your request.” That makes it easier to grasp and name than the quieter systems that might be running in your bank or your phone.

“Generative AI just made the technology more visible and relatable,” Ferreira says. “Which is why for many people it feels like ‘AI equals chatbot’ today.”

The different views of AI

Once you dig into the types of AI, you’ll find that even the companies making it and the experts working alongside it organize definitions differently. Like looking at the same landscape, just from different vantage points.

Ferreira thinks it’s most helpful to describe AI by what it does, which is the practical and everyday lens that makes the most sense to people.

“While generative AI gets most of the attention because of tools like ChatGPT, there are many other types of AI quietly working around us every day,” he says.

Recognition tools help doctors spot tumors or let your phone identify a friend in a photo. Prediction AI powers weather forecasts or fraud alerts from your bank. Autonomous systems let cars, robots, or delivery drones operate with minimal human input.

Dhar’s approach is more conceptual, like a map of the field’s evolution. He starts with expert systems, where human expertise is encoded into rules. Then comes traditional machine learning, where data is transformed into features and used to learn patterns. Then deep learning, where models learn directly from raw sensory input.

And finally, general intelligence, the way he describes today’s large systems trained across many forms of data, capable of talking about almost anything. He says ChatGPT would fall into this last category.

Shute slices the field into waves, defined by how each generation of AI thinks. The first is symbolic logic, built on hand-crafted rules, which is transparent but limited.

The next is statistical learning, which includes deep learning and the transformer models behind ChatGPT. These are powerful pattern-recognizers that don’t give us much of an explanation of how an output was arrived at.

The third wave is neuro-symbolic AI, which tries to fuse the strengths of both. Systems that can learn yet also reason in ways humans can audit. Shute points to emerging startups like Umnai exploring this space.

The hype gap

So where does that leave the big, bold ideas like AGI (artificial general intelligence) and ASI (artificial super intelligence)?

AGI refers to the idea of AI that can think or learn as broadly as a human. ASI (sometimes written as artificial superintelligence) takes that idea further, imagining systems that surpass human intelligence altogether. These are the concepts that often fuel the more breathless debate about the future.

But Shute tells me: “Any sentence proclaiming AGI or ASI will also work if you replace the term with ‘space aliens’, and needs to be treated with similar scepticism.”

Bringing AI back down to Earth, away from fantasy and closer to what systems can do right now, is essential if we want to really understand these tools and use them responsibly.

Understanding the power dynamic

One thing the experts all stress is that these systems depend on us. The key is knowing when a simulation of thinking is enough, and when you need genuine understanding.

“The ‘intelligence’ in artificial intelligence starts with us,” Ferreira says. “They don’t know what to do until a human gives direction. When people understand that, the power dynamic flips.”

Ask a thoughtful prompt, and you get a thoughtful answer. Ask a vague one, and you’ll get something similarly vague. “AI isn’t replacing our thinking,” he says. “It’s extending it.”

I believe a big step in understanding the power we have is in learning that AI isn’t one thing. Instead, it’s dozens of techniques, many of which have been around for decades and quietly power the systems we use every day. Generative AI might be loud, visible, and impressive, but it’s only a fraction of the field.

The next time a company claims its product “uses AI”, it’s worth asking: Which type? Doing what? Does it actually matter? Some applications deserve caution, others deserve enthusiasm. But we can only make those calls if we better understand what we’re talking about.


Becca Caddy

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality. 
