Roblox is sharing its AI tool to fight toxic game chats – here’s why that matters for kids

(Image: a cast of Roblox characters dressed in different outfits. Credit: Roblox Corporation)

Online game chats are notorious for vulgar, offensive, and even criminal behavior. Even if only a tiny fraction of messages are toxic, across many millions of hours of chat those interactions add up, and that's a problem for players and for video game companies alike, especially when kids are involved. Roblox has a lot of experience with that side of gaming, and it has used AI to build Sentinel, a system that enforces safety rules among its more than 100 million mostly young daily users. Now, it's open-sourcing Sentinel, offering its ability to identify grooming and other dangerous behavior in chat before it escalates, free to any platform.

This isn’t just a profanity filter that gets triggered when someone types a curse word. Roblox has always had that. Sentinel is built to watch patterns over time. It can track how conversations evolve, looking for subtle signs that someone is trying to build trust with a kid in potentially problematic ways. For instance, it might flag a long conversation where an adult-sounding player is just a little too interested in a kid’s personal life.
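Roblox hasn't published a technical breakdown in this piece, but the difference from a simple word filter is easy to sketch. The toy Python below is purely illustrative (the cue names, weights, window size, and threshold are all invented, not Sentinel's actual design): it accumulates risk across a conversation's recent history instead of judging each message in isolation.

```python
from collections import deque
from dataclasses import dataclass, field

# Hypothetical illustration only. A keyword filter scores one message at a
# time; a pattern-based monitor scores the conversation as it evolves.

RISK_CUES = {  # toy stand-ins for signals a trained model would learn
    "asks_personal_info": 0.4,
    "offers_gifts": 0.3,
    "requests_off_platform": 0.6,
}

@dataclass
class ConversationMonitor:
    window_size: int = 50          # how many recent messages to remember
    flag_threshold: float = 1.0    # cumulative risk before escalation
    _events: deque = field(default_factory=deque)

    def observe(self, cues: list[str]) -> bool:
        """Record the cues detected in one message; return True when the
        conversation's accumulated risk crosses the threshold."""
        self._events.append(sum(RISK_CUES.get(c, 0.0) for c in cues))
        if len(self._events) > self.window_size:
            self._events.popleft()  # old messages age out of the window
        return sum(self._events) >= self.flag_threshold

monitor = ConversationMonitor()
for cues in [["asks_personal_info"], [], ["offers_gifts"], ["requests_off_platform"]]:
    if monitor.observe(cues):
        print("Escalate conversation for human review")
```

The point of the window is memory: no single message trips the alarm, but a pattern of them does.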

Sentinel helped Roblox moderators file about 1,200 reports to the National Center for Missing and Exploited Children in just the first half of this year. As someone who grew up in the Wild West of early internet chatrooms, where “moderation” usually meant suspecting that anyone with correct spelling and grammar was an adult, I can’t overstate how big a leap forward that feels.

Open-sourcing Sentinel means any game or online platform, whether as big as Minecraft or as small as an underground indie hit, can adapt Sentinel to make its own community safer. It’s an unusually generous move, albeit one with obvious public relations benefits and potential long-term commercial upside for the company.

For kids (and their adult guardians), the benefits are obvious. If more games start running Sentinel-style checks, the odds of predators slipping through the cracks go down. Parents get another invisible safety net they didn’t have to set up themselves. And the kids get to focus on playing rather than navigating the online equivalent of a dark alley.

For video games as a whole, it’s a chance to raise the baseline of safety. Imagine if every major game, from the biggest esports titles to the smallest cozy simulators, had access to the same kind of early-warning system. It wouldn’t eliminate the problem, but it could make bad behavior a lot harder to hide.

AI for online safety

Of course, nothing with “AI” in the description is without its complications. The most obvious one is privacy. This kind of tool works by scanning what people are saying to each other, in real time, looking for red flags. Roblox says it uses one-minute snapshots of chat and keeps a human review process for anything flagged. But you can’t really get around the fact that this is surveillance, even if it’s well-intentioned. And when you open-source a tool like this, you’re not just giving the good guys a copy; you’re also making it easier for bad actors to study how they’re being caught and engineer ways around the system.
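To picture the pipeline Roblox describes, one-minute snapshots with a human reviewing anything flagged, here's a minimal sketch. The function names, threshold, and queue are my assumptions for illustration, not Sentinel's actual code.

```python
import queue
import time

review_queue: queue.Queue = queue.Queue()  # flagged snapshots await human review
SNAPSHOT_SECONDS = 60                      # the one-minute window Roblox describes
RISK_THRESHOLD = 0.8                       # assumed cutoff for escalation

def classify(snapshot: list[str]) -> float:
    """Stand-in for the detection model; returns a risk score in [0, 1]."""
    return 0.0  # a real system would run the trained classifier here

def monitor_chat(stream) -> None:
    snapshot, started = [], time.monotonic()
    for message in stream:
        snapshot.append(message)
        if time.monotonic() - started >= SNAPSHOT_SECONDS:
            # Flagged snapshots go to a person; nothing is auto-punished here.
            if classify(snapshot) >= RISK_THRESHOLD:
                review_queue.put({"messages": snapshot, "started_at": started})
            snapshot, started = [], time.monotonic()
```

The key design point is the hand-off: the model narrows millions of messages down to a short queue, and humans make the final call.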

Then there’s the problem of language itself. People change how they talk all the time, especially online. Slang shifts, in-jokes mutate, and new apps create new shorthand. A system trained to catch grooming attempts in 2024 might miss the ones happening in 2026. Roblox updates Sentinel regularly, both with AI training and human review, but smaller platforms might not have the resources to keep up with what's happening in their chats.

And while no sane person is against stopping child predators or jerks deliberately trying to upset children, AI tools like this can be abused. If certain political talk, controversial opinions, or simply complaints about the game are added to the filter list, there's little players can do about it. Roblox and any companies using Sentinel will need to be transparent, not just with the code, but also with how it's being deployed and what the data it collects will be used for.

It's also important to consider the context of Roblox's decision. The company is facing lawsuits over harm to children on its platform, including one alleging that a 13‑year‑old was trafficked after meeting a predator there. Sentinel isn't perfect, and companies using it could still face legal problems. Ideally, it would be one component of a broader online safety setup that includes better user education and parental controls; AI can't replace those programs.

Despite the very real problems of deploying AI for online safety, I think open-sourcing Sentinel is one of the rare cases where the upside of using AI is both immediate and tangible. I’ve written enough about algorithms making people angry, confused, or broke to appreciate one that’s actually pointed toward making people safer. And opening up the code extends that protection beyond Roblox’s own walls.

I don’t think Sentinel will stop every predator, and I don’t think it should be a replacement for good parenting, better human moderation, and educating kids about how to be safe when playing online. But as a subtle extra line of defense, Sentinel has a part to play in building better online experiences for kids.

