I asked ChatGPT if it thinks and it answered my biggest question about AI


There's a problem with ChatGPT and other modern generative AI platforms: they appear to be engaging in "human conversation," so much so that people mistake them for friends, confidants, even therapists. But as I recently told someone, AI chatbots like Gemini and ChatGPT aren't human and lack, for instance, basic empathy: the ability to understand your feelings because you have, at some point, felt them yourself.

What it comes down to, though, is a fundamental question of thought. Humans think. Some might argue that it's all we do: think and think and think. There are external representations of those thoughts, like movement, speech, actions, laughter, tears, anger, creativity, and disagreement. Human thought is a complex thing that is, in some ways, a physical act; it happens in the squishy brain, which draws on a massive and highly complex maze of information (perceptions, memories, learnings, and the emotions connected to them) to produce those thoughts.

Increasingly, ChatGPT, Gemini, Claude AI, and other generative platforms appear to be engaging in similar 'thought' processes. This has led people to assume that when they chat with an AI, via prompts or speech, it's understanding their intentions and all the various signals humans convey during a typical conversation. ChatGPT appears to be thinking about your prompt and then, in its own considered way, offering a thoughtful response.

Is that thought? Is ChatGPT 'thinking'? Intuitively, I know the answer must be no. But I'm also deeply embedded in the industry that birthed these generative chatbots: I've been covering technology for 40 years and AI for almost 20. The question of whether an AI passes the Turing test is not new, and it relates not so much to thinking as to how well an AI can fool someone into believing they're talking to another human.

In a way, though, that is, for some, proof of 'thinking,' or at least of faking it.

Instead of endlessly pondering the question, I put the prompt to ChatGPT and Gemini:

"Would you describe what you do as 'thinking'? If not, how do you describe what you do?"

Here's ChatGPT's response:

"I wouldn’t call what I do 'thinking' in the human sense.

When you think, you combine memory, emotion, intuition, and reasoning into a lived, conscious experience. I don’t have consciousness, emotions, or intuition.

What I do is closer to patterned computation:

- I take the text you give me as input.
- I analyze it against patterns in the data I was trained on.
- I generate the most contextually useful continuation or response.

You could describe it as:

- Prediction: I’m always predicting what’s most likely to come next in a conversation.
- Simulation: I simulate reasoning by chaining together these predictions in ways that look like human thought.
- Conversation modeling: My goal is to produce coherent, relevant, and helpful replies that match your intent and style.

So while it looks like thinking from the outside, under the hood it’s more like structured pattern-matching and probability—without the awareness you have when you think."

Thinking it through

Naturally, this is a perfect answer and really puts generative AI chatbot capability in perspective. In the early days of ChatGPT (a whole three years ago!), we lauded its conversational capabilities, understanding that it was a demonstration not of consciousness but of the ability to accurately predict what the next word in a conversation should be.

As you can see, 'prediction' is still a big part of what ChatGPT does. The complexity comes into play with ChatGPT's ability to chain together myriad predictions to produce a simulation of conversation.
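To make that prediction idea concrete, here's a deliberately tiny sketch in Python. It's my own toy illustration, not ChatGPT's actual method; real models use neural networks over subword tokens, while this simply counts which word follows which in a sample sentence and then chains those guesses together.

```python
# A toy illustration of next-word prediction, NOT how ChatGPT works
# internally. Real models use neural networks over subword tokens;
# this just counts word-to-word transitions in a tiny sample text.
import random
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
)

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    choices, weights = zip(*follows[word].items())
    return random.choices(choices, weights=weights, k=1)[0]

# Chain predictions: each output becomes the next input. This chaining
# is the 'simulation' idea from ChatGPT's answer, in miniature.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the dog chased the cat sat on the mat"
```

Scale that up by billions of parameters and trillions of words of training text, and you get replies that read like thought, even though the underlying operation is still "given what came before, what probably comes next?"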

The conversation modeling bit is interesting because it, in a way, tries to do what people often do: model behavior based on what's in front of them. When you're talking to someone face-to-face, you may unconsciously mirror some of their actions, like their stance, facial expressions, or even arm crossing.

ChatGPT does it with the style of conversation. The more it's like your own, the more comfortable you may feel chatting with it, and the more human (and thoughtful) it may seem.

Google Gemini's response was similar:

"I wouldn't describe what I do as 'thinking' in the human sense. Human thinking involves consciousness, emotions, personal experiences, and a subjective understanding of the world. My process is fundamentally different."

Like ChatGPT, Gemini notes that pattern recognition and predictive text generation are core aspects of its process.

So, the next time you're engaged in a deep and lengthy conversation with ChatGPT or any other generative AI chatbot, it's worth remembering what they themselves will tell you: They're not thinking. They're not human. They don't truly understand you. They're just very good at coming up with answers.

