A customer managed to get the DPD AI chatbot to swear at them, and it wasn’t even that hard

An AI-powered phone mockup
(Image credit: Shutterstock / ZinetroN)

The DPD customer support chatbot, which is unsurprisingly powered by AI, swore at a customer and wrote a poem about how bad the company is.

DPD said that an update the day before the error was discovered was responsible for the malfunction, which resulted in the chatbot exploring its newfound use of profanity.

Word of the malfunction spread across X (formerly Twitter) after details emerged of how to abuse this particular error.

The customer is always right, right?

Many businesses have adopted AI powered chatbots to help filter queries and requests to their relevant departments, or to provide responses to frequently asked questions (FAQ). 

Usually there are rules implemented to the AI that prevent it from providing unhelpful, malicious or profane responses, but in this case, an update somehow released the chatbot from its rules.

In a series of posts on X (formerly Twitter), DPD customer Ashley Beauchamp shared their interaction with the Chatbot, including the prompts used and the bot’s responses, stating, “It's utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company. It also swore at me.”

DPD Chatbot swearing

(Image credit: Ashley Beauchamp - X)

This is just one example of how an AI chatbot can go rogue if not properly tested before release and after updates. For smaller businesses an AI chatbot mixup like this could potentially cause reputational and financial harm, as Mr Beauchamp managed to get the chatbot to “recommend some better delivery firms” as well as criticizing the company in a range of formats including a haiku.

DPD Chatbot Haiku

(Image credit: Ashley Beauchamp - X)

DPD also offers customer support with human operators via a WhatsApp messaging service or through the phone. Many Chatbots use large language models (LLM) to understand questions and generate responses, with the data that LLM AI is trained on coming from large quantities of human conversations.

Due to the size of the data sets that LLM’s use, it can be difficult to filter out profanity and hateful language completely. Sometimes this results in a chatbot responding to a question or prompt with words it otherwise would not use.

Via BBC

More from TechRadar Pro

Benedict Collins
Staff Writer (Security)

Benedict Collins is a Staff Writer at TechRadar Pro covering privacy and security. Benedict is mainly focused on security issues such as phishing, malware, and cyber criminal activity, but also likes to draw on his knowledge of geopolitics and international relations to understand the motivations and consequences of state-sponsored cyber attacks. Benedict has a MA in Security, Intelligence and Diplomacy, alongside a BA in Politics with Journalism, both from the University of Buckingham.