A customer managed to get the DPD AI chatbot to swear at them, and it wasn’t even that hard

The DPD customer support chatbot, which is unsurprisingly powered by AI, swore at a customer and wrote a poem about how bad the company is.

DPD said the malfunction was caused by an update rolled out the day before the error was discovered, which left the chatbot free to explore its newfound use of profanity.

Word of the malfunction spread across X (formerly Twitter) after details emerged of how to exploit this particular error.

The customer is always right, right?

Many businesses have adopted AI-powered chatbots to help route queries and requests to the relevant departments, or to provide responses to frequently asked questions (FAQs).

Usually, rules are built into the AI that prevent it from providing unhelpful, malicious, or profane responses, but in this case an update somehow released the chatbot from those rules.
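
To illustrate the kind of safeguard involved, here is a minimal, hypothetical sketch in Python (the function names, prompt, and word list are invented for illustration; DPD has not published how its bot works): a system prompt states the rules, and a post-generation filter catches replies that break them.

```python
# Hypothetical guardrail sketch; call_llm stands in for whatever model
# API the chatbot actually uses.

BLOCKLIST = {"damn", "hell"}  # illustrative; production filters are far larger

SYSTEM_PROMPT = (
    "You are a customer support assistant. Be helpful and polite. "
    "Never use profanity and never criticise the company."
)

def call_llm(system: str, user: str) -> str:
    # Placeholder: imagine this sends both prompts to an LLM endpoint.
    return "Your parcel is on its way."

def safe_reply(user_message: str) -> str:
    reply = call_llm(SYSTEM_PROMPT, user_message)
    # The system prompt alone is not enough, since model output is
    # probabilistic, so the reply is checked again before it is sent.
    if not BLOCKLIST.isdisjoint(reply.lower().split()):
        return "Sorry, I can't help with that. Let me connect you to an agent."
    return reply

print(safe_reply("Can you swear at me?"))
```

The point of the second check is defence in depth: if an update breaks the prompt layer, as apparently happened here, an output filter still stands between the model and the customer.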

In a series of posts on X (formerly Twitter), DPD customer Ashley Beauchamp shared his interaction with the chatbot, including the prompts used and the bot’s responses, stating, “It's utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company. It also swore at me.”

DPD Chatbot swearing

(Image credit: Ashley Beauchamp - X)

This is just one example of how an AI chatbot can go rogue if not properly tested before release and after updates. For smaller businesses, an AI chatbot mix-up like this could cause reputational and financial harm; Mr Beauchamp managed to get the chatbot to “recommend some better delivery firms” as well as criticize the company in a range of formats, including a haiku.

DPD Chatbot Haiku

(Image credit: Ashley Beauchamp - X)

DPD also offers customer support with human operators via a WhatsApp messaging service or over the phone.

Many chatbots use large language models (LLMs) to understand questions and generate responses, and these models are trained on large quantities of human conversation.

Because of the sheer size of the datasets LLMs are trained on, it can be difficult to filter out profanity and hateful language completely. Sometimes this results in a chatbot responding to a question or prompt with words it otherwise would not use.
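
As a rough illustration of why that filtering is leaky (the word list and documents below are invented for this example, not real training data), a naive blocklist scan over a corpus misses obfuscated or spaced-out profanity, and at billions of documents even a tiny miss rate lets plenty through:

```python
# Hypothetical sketch: filtering profanity out of training text with a
# simple blocklist, and why it leaks. Not real DPD or vendor code.

PROFANITY = {"damn", "hell"}  # illustrative; real lists run to thousands of entries

def is_clean(document: str) -> bool:
    # Exact word matching only: anything obfuscated slips through.
    return PROFANITY.isdisjoint(document.lower().split())

corpus = [
    "Your parcel will arrive tomorrow.",
    "What the hell is taking so long?",   # caught by the blocklist
    "d a m n this courier",               # spaced-out letters evade the match
]

print([doc for doc in corpus if is_clean(doc)])
# -> ['Your parcel will arrive tomorrow.', 'd a m n this courier']
```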

Via BBC
