OpenAI's new tool says it can spot text written by AI


OpenAI has announced a new tool that it says can tell the difference between text written by a human and text produced by an AI writer, at least some of the time.

The Microsoft-backed company says the new classifier, as it is called, has been developed to combat the malicious use of AI content generators, such as its very own and very popular ChatGPT, in "running automated misinformation campaigns, … academic dishonesty, and positioning an AI chatbot as a human."

So far, it claims the classifier correctly identifies 26% of AI-generated text, labelling it as 'likely AI-written', while mislabelling human-written work as artificially created 9% of the time.
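In classifier terms, the 26% figure is a detection (true positive) rate and the 9% a false positive rate. Here is a minimal Python sketch of how the two numbers relate, using entirely hypothetical sample counts rather than OpenAI's actual evaluation data:

    # Minimal sketch (not OpenAI's code): how the quoted rates would be
    # derived from a labelled evaluation set, using hypothetical counts.

    ai_samples = 1000        # AI-written texts in the test set (assumed)
    ai_flagged = 260         # of those, labelled 'likely AI-written'
    human_samples = 1000     # human-written texts (assumed)
    human_flagged = 90       # humans wrongly flagged as AI-written

    true_positive_rate = ai_flagged / ai_samples         # 260/1000 = 26%
    false_positive_rate = human_flagged / human_samples  # 90/1000 = 9%

    print(f"Detection rate: {true_positive_rate:.0%}")    # Detection rate: 26%
    print(f"False positives: {false_positive_rate:.0%}")  # False positives: 9%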


Spot the difference

OpenAI notes that the classifier performs better on longer texts, and that compared to previous versions, the new release is "significantly more reliable" at detecting text generated by more recent AI tools.

The classifier is now publicly available, and OpenAI will use the feedback it receives to gauge the tool's usefulness and to inform future development of AI detection tools.

OpenAI is keen to point out that the classifier has its limitations and should not be relied upon as a "primary decision-making tool", a sentiment shared widely across the AI field.

As mentioned, the length of the text is important to the classifier's success, with OpenAI stating that it is "very unreliable" on pieces of fewer than 1,000 characters.

Even longer texts can be misidentified, and human-written content can be "incorrectly but confidently labeled as AI-written". The classifier also performs worse on text written in languages other than English, as well as on computer code.

Predictable text that can realistically only be written one way also can't be labelled reliably; OpenAI's own example is a list of the first one thousand prime numbers.
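To see why, consider that any correct rendering of that list is identical regardless of its author, so there is no stylistic signal left to classify. This short Python sketch (ours, not OpenAI's) produces exactly that text:

    def first_primes(n):
        """Return the first n prime numbers using simple trial division."""
        primes = []
        candidate = 2
        while len(primes) < n:
            if all(candidate % p for p in primes if p * p <= candidate):
                primes.append(candidate)
            candidate += 1
        return primes

    # Any author who writes this list correctly produces the exact same
    # string, leaving a classifier nothing to distinguish.
    prime_list_text = ", ".join(str(p) for p in first_primes(1000))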

What's more, OpenAI points out that AI-generated text can be edited to fool the classifier, and while the tool can be updated to learn from such evasions, the company concedes it is "unclear whether detection has an advantage in the long-term."

Text that differs greatly from what the classifier was trained on can also cause issues, with the tool "sometimes [being] extremely confident in a wrong prediction."

As for that training data, OpenAI says it used pairs of texts on the same topic, one AI-produced and the other believed to be human-written, with some of the latter gathered from human responses to the prompts used to train InstructGPT, the company's AI model primarily used by researchers and developers.

The development of the classifier comes amid numerous concerns and debates surrounding the use of AI chatbots, such as OpenAI's own ChatGPT, in academic institutions such as high schools and universities.

Accusations of cheating are mounting, as students are using the chatbot to write their assignments for them. Essay submission platform Turnitin has even developed its own AI-writing detection system in response. 

OpenAI acknowledges this fact, and has even produced its own set of guidelines to help educators understand the uses and limitations of ChatGPT. It hopes its new classifier will benefit not only these institutions, but also "journalists, mis/dis-information researchers, and other groups."

The company wants to hear from educators about their experiences with ChatGPT in the classroom, and has provided a form through which they can submit feedback to OpenAI.

AI writing tools have been causing a stir elsewhere too. Tech site CNET recently came under fire for using an AI tool to write articles as part of an experiment, and was accused of failing to distinguish these articles from those written by actual people. Some of those articles were also found to contain basic factual errors.


Lewis Maddison is a Reviews Writer for TechRadar. He previously worked as a Staff Writer for our business section, TechRadar Pro, where he had experience with productivity-enhancing hardware, ranging from keyboards to standing desks. His area of expertise lies in computer peripherals and audio hardware, including speakers and headphones, having spent over a decade exploring the murky depths of audio production and PC building. He also revels in picking up on the finest details and niggles that ultimately make a big difference to the user experience.