AI news-writing system deemed too dangerous to release

Fake news (Image credit: Rawpixel on Unsplash)

OpenAI, a company backed by Elon Musk, has decided not to release an AI system that can generate news stories and fiction on the grounds that it could be dangerous in the wrong hands.

OpenAI is a non-profit company that aims to find a way to safely bring about artificial general intelligence. Normally it releases its research to the public, but its latest AI model, known as GPT-2, is reportedly so convincing that it has too much potential for misuse, such as generating huge volumes of misleading news stories.

GPT-2 takes a sample of text (anything from a few words to several paragraphs) and predicts the sentences that follow in a similar style, with surprisingly plausible results.
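To make that concrete, here's a minimal sketch of that kind of autoregressive completion in Python. It assumes the open-source Hugging Face transformers library and its small public "gpt2" checkpoint, which is used here purely as an illustrative stand-in for the withheld full-size model:

```python
# Minimal sketch of prompt-based text generation with a small public
# GPT-2 checkpoint via Hugging Face transformers (an illustrative
# stand-in for OpenAI's withheld full-size model).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Scientists announced today that"
# The model samples one token at a time, extending the prompt in a
# similar style until the requested number of new tokens is reached.
outputs = generator(prompt, max_new_tokens=40, do_sample=True)
print(outputs[0]["generated_text"])
```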

The system was trained using a dataset of roughly 10 million news articles, sourced by trawling links shared on Reddit – several times the size of the datasets used by previous state-of-the-art systems.

The sheer volume of data gives the system a much better grasp of written language, and makes it more general-purpose than other systems. The Guardian reports that it's able to pass simple reading-comprehension tests, and to translate and summarize text – often better than systems built for those specific purposes.
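Those extra abilities were reportedly elicited through prompting alone: OpenAI's accompanying paper describes, for instance, appending "TL;DR:" to a passage to coax out a summary. Here is a rough, self-contained sketch of that trick; the placeholder passage and sampling settings are assumptions for illustration:

```python
# Sketch: zero-shot summarization by prompting alone, using the same
# small public "gpt2" checkpoint as an illustrative stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

article = (
    "A long news passage would go here, describing an event over several "
    "sentences with the names, dates and quotes a summary should compress."
)
# Appending "TL;DR:" invites the model to continue with a compressed
# version of the passage, with no summarization-specific training.
prompt = article + "\nTL;DR:"
out = generator(prompt, max_new_tokens=60, do_sample=True)
print(out[0]["generated_text"][len(prompt):].strip())
```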

Telling tall tales

To demonstrate why it's keeping GPT-2 under wraps, OpenAI created a tweaked version of the system that can generate an infinite stream of positive or negative product reviews.
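OpenAI built that demo by fine-tuning GPT-2 on review data. The sketch below only approximates the effect by prefixing a sentiment cue to the prompt, so the function name, prompt wording and settings are all hypothetical:

```python
# Sketch: steering review generation with a sentiment prefix. OpenAI's
# demo fine-tuned GPT-2 on reviews; this weaker, prompt-only version
# just nudges the small public "gpt2" checkpoint with a hypothetical cue.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def fake_reviews(product: str, sentiment: str, n: int = 3):
    """Yield n short reviews steered toward the given sentiment."""
    prompt = f"{sentiment.capitalize()} review of {product}: "
    outputs = generator(prompt, max_new_tokens=50,
                        do_sample=True, num_return_sequences=n)
    for out in outputs:
        yield out["generated_text"][len(prompt):].strip()

for review in fake_reviews("a wireless speaker", "negative"):
    print("-", review)
```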

It's also possible that GPT-2 could develop biases due to its unfiltered dataset, learning from news stories written with an agenda and feeding that influence into its own work.

OpenAI says that, as systems like GPT-2 become commonplace, "The public at large will need to become more skeptical of text they find online, just as the 'deep fakes' phenomenon calls for more skepticism about images."

"We need to perform experimentation to find out what they can and can’t do,” said Jack Clark, OpenAI's head of policy. “If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”
