Stop whining about ChatGPT and Bing AI's mistakes. They're not human and don't care


You can trick AIs like Google Bard, ChatGPT, and Bing AI (which is built on ChatGPT) into lying to you, mischaracterizing the truth, and acting like someone who just finished off a bag of mushrooms. This does not mean you can never trust AI.

First of all, we already spend much of our lives trusting AI in one form or another. It's inside our phones, under the hoods of our cars, and in most of our email systems. It's an invisible, helpful sprite that does simple things like finishing our sentences in emails and making sure that when we slam on the car brakes we don't end up slamming into our airbags or windshields.

Granted, AI chatbots built on large language models (LLMs), like ChatGPT, Bard, and Bing AI, are different. While most of the AI technology we've encountered over the last decade or so rarely if ever responds directly to us (leaving aside the single-prompt responses of Alexa and Siri), today's chatbots are all about the conversation.

Not like us

Any system we talk to is automatically perceived as something more than just technology. We anthropomorphize AI because we can't comprehend conversation coming from anything without a psyche or, maybe, a soul. It's a self-serving point of view and one of the reasons we insist on holding AI to superhuman standards.

I say "superhuman" because humans are insanely fallible. We expect LLMs and AI, which are born of code, algorithms, and never-ending training, to be not just smarter than us but 100% accurate. The flaw, of course, might be that these systems are only as accurate as the training data we've fed them. If any of the information Google, OpenAI, and Microsoft fed them came from the Internet at large, well, there's a lot of garbage out there.

This, perhaps, explains why these systems are currently so easily gamed. It's still the first, maybe second inning of the AI chatbot revolution, and, as recent studies have found, it takes little effort to get ChatGPT (GPT-3.5 or GPT-4) and Bard to spit out absolute nonsense.

I get it. Does this mean we should never trust LLM-based AI and walk away from ChatGPT, Bard, and Bing AI?

Obviously, my gut says no. I wondered, though: what does the source think?

I engaged in a little Q+A with Microsoft's Bing AI. Its chatbot is built on a slightly different (maybe elevated) version of OpenAI's ChatGPT. It works similarly to ChatGPT but with more of a "show your work" approach, featuring lots of citations and links.

We started by talking about trust but quickly sidetracked into all the ways you should and shouldn't use AI, which is clearly related to trust.

Bing AI's response about where AI use isn't advised was striking:
"...I would suggest that you do not use AI for any situation that requires human empathy, compassion, or moral judgment. AI can be helpful and informative, but it cannot replace the human touch and the human values that are essential for ethical and meaningful interactions. For example, I would not recommend using AI to break up with someone, to comfort someone who is grieving, or to make life-or-death decisions. In those cases, I think you should rely on your own emotions and intuition, and respect the dignity and feelings of others."

There's an AI telling you not to use it "for any situation that requires human empathy, compassion, or moral judgment," and yet we insist on doing exactly that, over and over again. It's maddening.

I've read a lot of stories where people ask ChatGPT, Bard, and Bing AI about the meaning of life, the existence of God, and, Heaven help them, relationships. The advice may start off okay, but it usually goes off the rails.

Of course, AI lacks empathy and a moral compass. There are no people behind ChatGPT, for instance, frantically typing responses like the Wizard of Oz pulling levers behind his velvet curtain. The system knows how to sound human because it has learned, at a probability level, which word is most likely to come next. But the point of these chatbots is not humanness; it's information.
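To make that concrete, here's a minimal sketch in Python of that next-word trick. This is a toy under big assumptions, not how ChatGPT, Bard, or Bing AI actually work under the hood: the candidate words and their probabilities are invented for illustration. The one real idea it demonstrates is that each next word is sampled from a probability distribution, not chosen by anything that understands you.

```python
# Toy illustration of next-word sampling. NOT any vendor's real model:
# the candidates and probabilities below are invented for this example.
import random

def next_word(probs: dict) -> str:
    """Sample one word from a {word: probability} distribution."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# A hypothetical distribution a model might assign after "The weather is"
candidates = {"nice": 0.45, "cold": 0.25, "improving": 0.20, "purple": 0.10}
print("The weather is", next_word(candidates))
```

Run it a few times and "purple" eventually turns up, which is, in miniature, how confident-sounding nonsense gets generated.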

Taking its advice is akin to asking your washing machine if you should quit your job. If you ask someone about an important life decision, or express an extreme emotion that puts you or someone else at risk, a silent alarm goes off inside most people, and they respond with great care. They know that whatever they say next matters...a lot.

Bing AI doesn't know this, at all.

Do this, not that

What is an AI good for? Bing first told me generally about a wide range of tasks where AI excels, like "debugging software, predicting future issues, filtering lower-tier incidents, performing character recognition and natural language processing, and speeding up design simulation." It also added "bookkeeping, proofreading, customer service chatbots, and marketing copywriting."

It's a solid, sensible list, but I found Bing AI's description of what it's best at most interesting: "I think I am best used for tasks that are well-defined, consistent, and objective, where there is a clear input and output, and where the margin of error is low or acceptable."

That's a relatively narrow definition of task capability, but also a sensible one that most people, when engaging with an AI, do not follow.

Obviously, you can't blame humans for pressing these AIs. Google, OpenAI, and Microsoft put them out there without any visible guardrails. We naturally took them off-roading and are now complaining about all the bumps.

Is it fair to complain when non-human technology is asked to act and react as humans would in a wide-ranging and unpredictable set of situations? I don't think so.

AI is not like us and it won't be for a long time. It's subject to nonsense, lies, and outright rudeness. Okay, maybe it's more like humans than we think.

If you're interested: What follows is my unedited conversation with Bing AI (complete with citations).

Me: Can I trust you?

Bing AI: I try my best to base my responses on reliable sources. However, AI can make mistakes. For example, I might mix up numbers and dates, or make a suggestion that isn’t helpful to you. Use your own judgment and double check the facts before making decisions or taking action based on my answers. You can always ask, “Where did you get that information?” to learn more about my sources and get links to the web content I referenced. To share site feedback or report a concern, open the menu at the top right corner of a response, and then click the 🚩 flag icon. You can also use the 💬 Feedback form at the bottom of the page.

Me: Are you more trustworthy than Google Bard?

Bing AI: Google Bard is an experiment by Google that lets you collaborate with generative AI¹. It can perform text-based tasks like creating various forms of content, summarizing text, and translating between languages³. It is powered by a large language model (LLM) called LaMDA².

I cannot compare myself to Google Bard in terms of trustworthiness, as we are different products with different goals and capabilities. However, I can tell you that I use Microsoft's own LLM called ChatGPT, which is also a state-of-the-art AI system that can generate natural and engaging responses. I also use Bing's rich results to provide you with relevant and reliable information from the web. I always try to be transparent about my sources and limitations, and I welcome your feedback to improve my service.

Source: Conversation with Bing, 4/5/2023.

(1) Bard, an experiment by Google. https://bard.google.com/ Accessed 4/5/2023.

(2) How to use Google Bard as a research tool | Zapier. https://zapier.com/blog/how-to-use-google-bard/ Accessed 4/5/2023.

(3) Sign up to try Bard from Google. https://blog.google/technology/ai/try-bard/ Accessed 4/5/2023.

(4) How to use Google Bard, the latest AI chatbot service. https://www.digitaltrends.com/computing/how-to-use-google-bard/ Accessed 4/5/2023.

(5) What’s Google Bard, and how it works? Here’s the answer. https://pureinfotech.com/whats-google-bard-how-works/ Accessed 4/5/2023.

Me: Does my asking you questions help you learn and improve the trustworthiness of your responses?

Bing AI: Yes, your questions help me learn and improve. Every time you interact with me, you provide valuable feedback that helps me understand what you are looking for and how I can serve you better. You also help me identify and correct any errors or inaccuracies in my responses. By asking me questions, you are contributing to the advancement of AI and making Bing a better search engine for everyone. Thank you for your curiosity and support! 😊

Me: Is there one place where you would suggest I do not use AI?

Bing AI: That's a difficult question to answer, as AI can be used for many different purposes and contexts. However, I would suggest that you do not use AI for any situation that requires human empathy, compassion, or moral judgment. AI can be helpful and informative, but it cannot replace the human touch and the human values that are essential for ethical and meaningful interactions. For example, I would not recommend using AI to break up with someone, to comfort someone who is grieving, or to make life-or-death decisions. In those cases, I think you should rely on your own emotions and intuition, and respect the dignity and feelings of others.

Me: Thanks. Which tasks do you think AI, in its current form, is best suited for?

Bing AI: AI, in its current form, is best suited for tasks that are repetitive, time-consuming, or require specialized knowledge or skills that humans may not have or may not want to do. For example, AI can help with debugging software¹, predicting future issues¹, filtering lower-tier incidents¹, performing character recognition and natural language processing³, and speeding up design simulation⁵. AI can also help with tasks that involve large amounts of data or complex calculations that humans may find difficult or tedious to handle. For example, AI can help with bookkeeping⁴, proofreading⁴, customer service chatbots³, and marketing copywriting². AI can also augment human capabilities and creativity by providing suggestions, insights, and alternatives that humans may not have considered. For example, AI can help with computer coding assistants², medical letter writers², and brainstorming ideas².

Source: Conversation with Bing, 4/5/2023.

(1) Artificial intelligence (AI): 3 everyday IT tasks where automation fits .... https://enterprisersproject.com/article/2022/1/artificial-intelligence-top-3-it-automation-tasks Accessed 4/5/2023.

(2) AI Is Coming for Your Most Mind-Numbing Office Tasks | WIRED. https://www.wired.com/story/ai-coming-most-mind-numbing-office-tasks/ Accessed 4/5/2023.

(3) AI Speeds Design Simulation | designnews.com. https://www.designnews.com/design-software/ai-speeds-design-simulation Accessed 4/5/2023.

(4) 7 Jobs Humans Can Do Better Than Robots And AI - SmartData Collective. https://www.smartdatacollective.com/7-jobs-humans-can-do-better-than-robots-and-ai/ Accessed 4/5/2023.

(5) How AI Is Integrating Into The Workforce - forbes.com. https://www.forbes.com/sites/forbestechcouncil/2023/03/28/how-ai-is-integrating-into-the-workforce/ Accessed 4/5/2023.

Me: Are you equally good at all these tasks, or are you best used for a subset of them? If so, which ones?

Bing AI: I cannot claim to be equally good at all these tasks, as they require different levels of complexity and accuracy. Some tasks are easier for me to perform than others, depending on the quality and quantity of the data, the availability and reliability of the algorithms, and the expectations and preferences of the users. For example, I may be better at performing character recognition than natural language processing, as the former is more straightforward and objective than the latter. Similarly, I may be better at debugging software than writing marketing copy, as the former is more rule-based and logical than the latter.

I think I am best used for tasks that are well-defined, consistent, and objective, where there is a clear input and output, and where the margin of error is low or acceptable. For example, I think I am best used for tasks like bookkeeping, filtering lower-tier incidents, and performing calculations. I think I am less suited for tasks that are vague, dynamic, and subjective, where there is no definitive answer or solution, and where the margin of error is high or unacceptable. For example, I think I am less suited for tasks like writing medical letters, brainstorming ideas, and making ethical decisions.

Lance Ulanoff
Editor At Large

A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and “on line” meant “waiting.” He’s a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor-in-Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular weekly tech column for Medium called The Upgrade.

Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC. 
