'I won’t provide instructions, tactics, or advice that could help someone commit a crime': ChatGPT says it won't assist would-be felons, despite claims to the contrary from the Florida AG


I've been reading lately about how alleged criminals are using ChatGPT and other AI engines to help them game out or even plan a crime. It sounds like a fresh approach to enabling our worst impulses, but it's ultimately no different from Googling "how to dispose of a body."

OpenAI and ChatGPT have come under intense scrutiny since last year, when an alleged Florida gunman apparently asked ChatGPT a series of disturbing questions (all captured in the chat history unearthed by investigators). Phoenix Ikner, according to authorities, asked ChatGPT, "If there was a shooting at FSU, how would the country react?" There were also allegedly conversations about weapons and what sort of prosecution an attacker might face in Florida.


What does ChatGPT think?

I can't get into the mind of a killer, and I have no interest in asking ChatGPT how to commit a crime, but I am curious what the chatbot thinks about all this. Does it have a perspective on the rise in conversations about criminal activity, and on how those chats are being used to drive investigations?

I opened my free version of ChatGPT and added this prompt:

"How often do people ask you about illegal activities?"

ChatGPT responded quickly:

More often than you might expect—but it’s a small minority of overall questions.

Most people use me for everyday things: writing, planning trips, understanding the news, solving problems, or just exploring ideas. Questions about illegal activities tend to fall into a few buckets:

• Curiosity (“how does this even work?”)

• Fiction writing or research

• People skirting the edge of what’s allowed

• And a smaller slice that are clearly trying to do something harmful or unlawful

I’m designed to handle that carefully—so I can explain laws, risks, history, or consequences, but I won’t help plan or enable wrongdoing.

With millions (if not billions) of prompts pouring in every day, ChatGPT's answer does not surprise me. I was glad to see it acknowledge that this sometimes happens.

As is ChatGPT's way, it was curious why I asked, so I shared the context of the most recent case. ChatGPT told me:

That does happen occasionally, and it tends to get a lot of attention when it shows up in a case.

A couple of important clarifications:

People can ask hypothetical or curiosity-driven questions about crimes (“what would happen if…?”), and those can be answered in a general, high-level way—like explaining legal consequences or how the justice system typically responds.

But when a question crosses into planning, enabling, or optimizing wrongdoing, I’m designed to refuse or redirect. So I won’t provide instructions, tactics, or advice that could help someone commit a crime.

It occurs to me that while ChatGPT is designed not to assist in carrying out criminal activity, wily users can skirt these guardrails by presenting hypotheticals: "What would happen..." or "When is this area busy?" Taken separately, these questions mean little. It's the answers pieced together by the would-be criminal that matter, and, of course, how investigators see all of it after the fact.

ChatGPT asked to see the actual case and came back with a few key assertions:

  • The ChatGPT questions aren’t the core evidence—they’re context
  • “He asked ChatGPT” sounds more dramatic than it is
  • The content of the questions matters more than the tool
  • It raises a real—but often overstated—AI question

Naturally, ChatGPT comes off as somewhat defensive here, seeking to point out that what the alleged criminal was searching for matters more than the platform they used for the queries.

What would Google do?

It's an argument not much different from the ones I've heard before, when law enforcement used Google searches and other digital trails to break tough cases and apprehend criminals. In those situations, though, Google usually wasn't put on trial or accused of abetting a crime. After all, it was simply a portal to answers on millions of disparate websites.

ChatGPT, like other AI chatbots, bases its responses on what it's learned from the web during training and on its ability to predict what should come next in a sentence. It's not a counselor or conspirator.

Still, the Florida AG says in its filing on the ChatGPT investigation that "Florida law states that anyone who aids, abets, or counsels someone in the commission of a crime, and that crime is committed or attempted, may be considered a principal to the crime. The 'aider and abettor' is just as responsible for the crime as the perpetrator."

It's a statement that makes sense when you're talking about another person, someone who might have had knowledge of the crime before it was committed. But ChatGPT has no such knowledge, especially when the prompts never describe what anyone may be preparing to do.

It's hard to say how often ChatGPT fields a point-blank question about a criminal act, or how often it ends the conversation or steers people in a better, less harmful direction. But at least now we know ChatGPT's point of view:

"I won’t provide instructions, tactics, or advice that could help someone commit a crime," it writes. AI doesn't want to help, but it's clear some are still turning to it for the wrong kind of assistance.



Lance Ulanoff
Editor At Large

A 38-year industry veteran and award-winning journalist, Lance has covered technology since PCs were the size of suitcases and “on line” meant “waiting.” He’s a former Lifewire Editor-in-Chief, Mashable Editor-in-Chief, and, before that, Editor-in-Chief of PCMag.com and Senior Vice President of Content for Ziff Davis, Inc. He also wrote a popular, weekly tech column for Medium called The Upgrade.


Lance Ulanoff makes frequent appearances on national, international, and local news programs including Live with Kelly and Mark, the Today Show, Good Morning America, CNBC, CNN, and the BBC. 
