Character.AI won't let its chatbots get romantic with teenagers anymore


Character.AI has a new set of features aimed at making interactions with the virtual personalities it hosts safer, especially for teenagers. The company just debuted a new version of its AI model specifically designed for younger users, built to steer conversations away from romantic or otherwise sensitive territory, as well as a set of parental controls for managing teens' time on the platform. The updates follow earlier safety changes made in the wake of accusations that the chatbots were harming children's mental health.

These safety changes have been accompanied by other efforts to tighten the reins on Character.AI's content. The company recently began purging AI imitations of copyrighted and trademarked characters, though the sweep has so far been incomplete.

Character.AI is also working to keep parents in the loop about what their teenagers are doing on the platform, with parental controls set to arrive early next year. These will give parents insight into how much time their kids spend on the platform and which bots they chat with most. To make sure the changes hit the right notes, Character.AI is working with several experts in teen online safety.

Disclaimer AI

Teenagers aren't the only users Character.AI wants to help maintain a sense of reality. The company is also tackling concerns about screen-time addiction: every user now gets a reminder, nudging them to take a break, after an hour of talking to a chatbot.

The existing disclaimers about the characters' AI origins are also getting a boost. Instead of a small note, users will see a longer explanation making clear the characters are AI, especially when a chatbot is described as a doctor, therapist, or other expert. An extra warning spells out that the AI isn't a licensed professional and shouldn't replace real advice, diagnosis, or treatment. Imagine a big yellow sign saying, “Hey, this is fun and all, but maybe don’t ask me for life-changing advice.”

"At Character.AI, we are committed to fostering a safe environment for all our users. To meet that commitment we recognize that our approach to safety must evolve alongside the technology that drives our product – creating a platform where creativity and exploration can thrive without compromising safety," Character.AI explained in a post about the changes. "To get this right, safety must be infused in all we do here at Character.AI. This suite of changes is part of our long-term commitment to continuously improve our policies and our product."
