Is there a formula for quality content? The writer in me wants to scoff at the question, but another part is ready to admit there may be such a thing as a mathematically immaculate sentence.
Software driven by artificial intelligence (AI) is already being used by some companies to craft simple pieces of content (website copy, product descriptions, social media posts and so on), saving them the hassle of writing it themselves. But how far does this concept extend?
It’s easy to understand how a machine might be taught to follow the strict rules of grammar and construct snippets of text based on information provided. The idea that AI might pluck out the most effective word for a specific situation, based on an understanding of the audience, is also within the bounds of our imagination.
It is harder, though, to imagine how AI models could be taught the nuances of more complex writing styles and formats. Is a lengthy metafictional novel with a deep pool of characters and a satirical bent a stretch too far, too human?
The arrival of synthetic media was made possible by the availability of immense computing resources and rapid strides in the field of AI. Neither area shows any sign of plateauing (quite the opposite), so it follows that content automation will only grow more sophisticated too.
How does it work?
As with any AI product, language models learn to function as desired by first absorbing large quantities of data. By scrutinizing a mass of existing content, a model learns the rules of grammar, syntax and word selection.
Until very recently, however, AI models have been unable to meet the high standards set by human writers, particularly where long-form content is concerned. Mistakes and eccentricities betrayed the non-human author every time.
“One of the historical problems with processing very long passages of text is that language models struggle to remember how different parts of the text relate to each other, partly due to something called the ‘vanishing (and exploding) gradient problem’,” explained Jon Howells, Lead Data Scientist at technology services firm Capgemini.
“However, AI researchers have been building bigger language models with better techniques, using huge amounts of data and vastly more computational power.”
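To make the gradient problem Howells describes concrete, here is a minimal numerical sketch of ours (not Capgemini's): the error signal passed back through a long sequence is roughly a product of one factor per step, so it shrinks or grows exponentially with the length of the text.

```python
# Illustrative sketch: why gradients vanish or explode when error signals
# are passed back through many steps of a recurrent language model.
# The backpropagated gradient is (roughly) a product of one factor per step.

def backprop_gradient(factor: float, steps: int) -> float:
    """Magnitude of a gradient after being multiplied by `factor` at each step."""
    grad = 1.0
    for _ in range(steps):
        grad *= factor
    return grad

# A factor just below 1 vanishes over a long passage of text...
print(backprop_gradient(0.9, 100))   # ~2.7e-05
# ...while a factor just above 1 explodes.
print(backprop_gradient(1.1, 100))   # ~13780.6
```

Either way, by the hundredth step the model has effectively lost (or drowned out) the signal relating the end of a passage to its beginning, which is why long-form coherence was historically so hard.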
The leading light in this field is a company called OpenAI, which is the creator and custodian of a technology known as GPT (short for Generative Pre-trained Transformer), now in its third generation.
In 2018, the company unveiled the first iteration of GPT, which was able to perform natural language processing (NLP) tasks, such as answering questions and analyzing sentiment, thanks to a unique new training method.
OpenAI paired unsupervised pre-training, whereby large unlabeled datasets are fed into the model, with supervised learning, which is a process of fine-tuning that uses smaller datasets geared towards solving specific tasks.
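As a rough illustration of that two-phase recipe, here is a toy sketch of ours (bearing no resemblance to OpenAI's actual implementation): the model first absorbs statistics from a large unlabeled corpus, then learns a narrow task, sentiment in this case, from a small labeled dataset.

```python
# Toy sketch (hypothetical, not OpenAI's code) of the two-phase recipe:
# unsupervised pre-training on unlabeled text, then supervised fine-tuning
# on a small labeled dataset for a specific task.

from collections import Counter

class ToyModel:
    def __init__(self):
        self.word_counts = Counter()   # "knowledge" absorbed during pre-training
        self.sentiment = {}            # task-specific head learned during fine-tuning

    def pretrain(self, corpus):
        """Phase 1: absorb statistics from large unlabeled text."""
        for document in corpus:
            self.word_counts.update(document.lower().split())

    def finetune(self, labeled_examples):
        """Phase 2: learn a specific task from a small labeled dataset."""
        for text, label in labeled_examples:
            for word in text.lower().split():
                self.sentiment[word] = label

    def predict(self, text):
        votes = [self.sentiment[w] for w in text.lower().split() if w in self.sentiment]
        return max(set(votes), key=votes.count) if votes else "unknown"

model = ToyModel()
model.pretrain(["the cat sat on the mat", "dogs are loyal animals"])
model.finetune([("great wonderful film", "positive"), ("dull boring plot", "negative")])
print(model.predict("a wonderful film"))  # positive
```

The real thing replaces word counts with billions of learned parameters, but the division of labor is the same: general knowledge from cheap unlabeled data, task competence from a small curated set.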
GPT-3, the latest and most powerful version, expands upon this premise significantly. Built on a record-breaking 175 billion parameters (the number of values the model attempts to optimize), GPT-3 is capable of generating convincing, human-like text on demand, using only a handful of prompts.
Since it was released in private beta last year, developers have used the model to generate scripts, songs, press releases, essays and more. With just a few tweaks, one tester found GPT-3 could even be taught to generate HTML code.
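That "handful of prompts" approach, often called few-shot prompting, amounts to prepending worked examples to the input and letting the model continue the pattern. The sketch below shows how such a prompt might be assembled; the format and names are illustrative, not an official API.

```python
# Hypothetical sketch of few-shot prompting: rather than retraining the model,
# you show it a few worked examples and ask it to complete the next one.

def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a task description, worked examples and a query into one prompt."""
    lines = [task]
    for source, target in examples:
        lines.append(f"Input: {source}")
        lines.append(f"Output: {target}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the model is asked to continue from here
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Write a press-release headline for each product.",
    [("solar-powered kettle", "Brew Sunshine: The Kettle That Runs on Daylight")],
    "noise-cancelling tent",
)
print(prompt)
```

The string that results is the entire "programming" of the model for that task, which is what testers exploited to coax GPT-3 into producing scripts, songs and even HTML.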
Blown away by the performance of the language model, Microsoft has since secured an exclusive licensing agreement with OpenAI, which will see the firm integrate GPT-3 into its own products and services.
To give you a sense of what it’s capable of, here’s what GPT-3 produced when fed the following prompt: “Here’s a short poem by Dr. Seuss about, elon musk, tesla’s share price, and the SEC.”
“Elon Musk tweets a chart,
A tweet so bright
It’s short-selling experts
Their fears ignite,
Making numbers gyrate
That only hours before
What was this tweet?
And where did this chart come from?
Well, Tesla’s shares shot up
A thousand points,
As shorts scrambled to cover,
And they lost their shirts!”
(The poem continues)
Although OpenAI has offered no indication of when GPT-4 might arrive, the second and third generations both landed within a year of their predecessors, suggesting we might not have all that long to wait.
In terms of scale, GPT-3 was roughly two orders of magnitude larger than GPT-2. If the same increase is feasible again, GPT-4 could be built on an incredible 17.5 trillion parameters. With greater scale will come even greater performance.
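The back-of-the-envelope arithmetic looks like this; note that the GPT-4 figure is pure extrapolation on our part, not an announced specification.

```python
# Worked version of the scaling estimate in the paragraph above.
gpt2_params = 1.5e9               # GPT-2: ~1.5 billion parameters
gpt3_params = 175e9               # GPT-3: 175 billion parameters

jump = gpt3_params / gpt2_params  # ~117x: roughly two orders of magnitude
gpt4_extrapolated = gpt3_params * 100  # another 100x jump

print(f"GPT-2 to GPT-3 jump: {jump:.0f}x")
print(f"Extrapolated GPT-4 size: {gpt4_extrapolated / 1e12:.1f} trillion parameters")
```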
How is it being used?
OpenAI has made its technology commercially available via an API, and other rival products (e.g. Google’s BERT) are open source, which means businesses and entrepreneurs can use the models as a foundation for their own AI content services.
Jasmine Wang, a researcher who worked at OpenAI, is one such entrepreneur. Her latest venture, Copysmith, gives clients the tools to generate marketing and ad copy using just four pieces of information: company name and description, target audience and keywords.
But this is just one example of how the technology can be deployed in a real-life context. Ultimately, Wang told us, there is no limit to what language models such as GPT-3 can be used for, and the line between what is composed by humans and AI will become less and less well-defined.
“We’ve reached a state with content creation where AI can write as well or as convincingly as humans. The real innovation with GPT-3 is that you don’t need to teach it anything, you just feed it examples,” she said.
“With Copysmith, GPT-3 generates, say, twelve different Google ads. Then the customer looks at those ads, maybe does some editing and finally downloads a piece of copy.”
Wang also described the process of writing a novel she is working on, a significant amount of which has been composed by GPT-3. “Not directly, not the text generated by the model, but through the ideas it sparked,” she explained. “The line between what is and is not composed by machines has become blurrier.”
Iain Thomas, who is Chief Creative Officer at Copysmith and also a poet, believes creators will eventually shake the feeling of anxiety and guilt associated with bringing AI into the artistic process.
“Artificial intelligence can act as a compounding agent for human creativity, allowing you to access your creativity in different ways. It’s like having a second brain that complements your own, that doesn’t get tired or distracted, that can think laterally in ways you might never have considered. Yet, I still feel the work is my own,” he explained.
“And when GPT-4 arrives, many of the things we think of as uniquely human will be called into question, such as the intimacy of human communication, the unique understanding of the context of a conversation, the ability to create profound art and more.”
While the current crop of AI models can only really be used in a one-dimensional fashion, to generate single pieces of content, it’s also possible future iterations might interact effectively across disciplines.
Imagine a world in which AI script-writing is paired with AI-enabled movie production, for example. At both a writing and production level, each film could be tweaked to match an individual’s preference, similar to how filters are applied to photos today. The same movie could be presented to the viewer in the style of Tarantino or Scorsese, depending on taste.
According to Iskender Dirik, GM & MD at the Europe division of Samsung Next Ventures, the influence of the writer in the content creation process will wane in some respects and remain significant in others; their responsibilities will essentially shift sideways:
“Writers will still play the primary role in content creation as there is still a long way to go before AI technologies match the cognitive and creative thinking skills of humans. In the future, we’ll see writers increasingly focus on the creative direction and development of compelling narratives, while leveraging technology tools that help with the execution.”
What is quality, anyway?
As the influence of AI expands, though, the way that content quality is judged will also change forever. No longer will quality be a subjective matter, up for debate; instead, it will be assessed against hard metrics such as time-on-page and finish rate.
This process is already playing out in digital media, where snackable content more likely to generate impressions takes precedence over in-depth reporting, and where hyperbolic headlines outperform purely descriptive ones.
“Content publishers will increasingly rely on technologies for analysing user engagement, rather than defining criteria for the quality of the content itself,” Dirik predicts. “Reader engagement will ultimately become a proxy for quality.”
A publishing platform called Inkitt has already embraced this notion. Authors are asked to upload their manuscripts, which users of the platform are able to read free of charge. Writers with the best-performing manuscripts, based on engagement metrics, are then signed to official contracts and their books published in a more traditional manner.
“We believe in a systematic, data-driven approach to discovering hidden talent. That’s why we use real data from our three million readers to anonymously track and analyze reading behaviour and patterns,” founder Ali Albazaz told us over email.
“These include metrics such as reading frequency, finishing-rate and speed of reading. If someone’s up all night reading your story, that’s a good sign.”
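Metrics like these are straightforward to derive from per-session reading logs. Here is a sketch with hypothetical field names; Inkitt has not published its actual schema or methodology.

```python
# Illustrative sketch of engagement metrics of the kind Inkitt describes,
# computed from per-session reading logs (field names are hypothetical).

def finish_rate(sessions):
    """Share of reading sessions that reached the end of the story."""
    finished = sum(1 for s in sessions if s["last_page"] >= s["total_pages"])
    return finished / len(sessions)

def words_per_minute(sessions):
    """Average reading speed across all sessions."""
    total_words = sum(s["words_read"] for s in sessions)
    total_minutes = sum(s["minutes"] for s in sessions)
    return total_words / total_minutes

sessions = [
    {"last_page": 320, "total_pages": 320, "words_read": 80000, "minutes": 400},
    {"last_page": 45,  "total_pages": 320, "words_read": 11000, "minutes": 60},
]
print(finish_rate(sessions))       # 0.5
print(words_per_minute(sessions))  # ~197.8
```

Aggregated over millions of readers, numbers like these become the "proxy for quality" the platform ranks manuscripts by.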
While this approach could well prove lucrative for publishers, and perhaps gives a wider breadth of authors a chance to be discovered, it is minority art forms and non-populist content that are more likely to suffer.
Squeezed out by material that captures a greater number of eyeballs, for a greater length of time, art daring enough to break from convention might slowly disappear, leaving behind an amorphous spread of bland and identikit content.
TechRadar Pro put these concerns to Inkitt, but the company answered only indirectly, stating that it intends to “shift towards more micro genres over time”.
The idea that a computer might be able to replicate human art forms is perhaps an uncomfortable one, but it is not the gravest threat, nor is the potential to skew the publishing industry.
The most serious threats posed by AI content tools can be divided into two camps: problems that originate with the data fed into the system (the raw material) and issues that might arise as a result of intentional abuse (the end product).
The former centers on AI bias, which can be described as any instance in which a discriminatory decision is reached by an AI model that aspires to impartiality.
In the context of content generation, there is the potential for language models to inherit various societal biases and stereotypes found in the datasets used to train them. And the problem is more complex than it sounds.
“Data can be biased in a variety of ways: the data collection process could result in badly sampled data, labels applied to the data by human labellers may be biased, or inherent structural biases may be present in the data,” said Richard Tomsett, AI Researcher at IBM Research Europe.
“Because there are different kinds of bias and it is impossible to minimize all kinds simultaneously, this will always be a trade-off.”
Even GPT-3, for all its achievements, has demonstrated extreme antisemitic and racist tendencies when asked to compose tweets using single-word prompts, such as “Jews” and “black”.
As noted by Wang, there is also an inherent problem with representation in the datasets used to train AI models.
“Only languages that are on the internet are represented in most datasets, because that’s where the datasets usually come from; they’re scraped from the web,” she explained.
“So, the more presence your language has on the internet, the better representation you’ll have in the database and the better understanding models will have of your language.”
Short of curating gigantic new datasets from scratch (don’t forget, they are truly massive), it’s difficult to conceive of a resolution to these problems. Even if data was handpicked for inclusion, the issue simply changes shape: no individual is qualified to determine what constitutes bias or diversity.
The most immediate concern, however, is the opportunity to use language models as a means of spreading misinformation and sowing division.
AI-composed fake news and deepfakes are already having a profound impact on the information economy, but the problem is only set to worsen. A number of the experts we consulted envisage a scenario in which social media bots, powered by advanced language models, will churn out a massive volume of convincing posts in support of one political agenda or another.
“The greatest inherent danger in the use of synthetic media is its potential to deceive and, in weaponizing deception, to target vulnerable groups and individuals with schemes to influence, extort or publicly damage them,” writes Nick Nigam, also of Samsung Next.
“Once a fake has been seen or heard, even with subsequent corrections and retractions, it becomes difficult to mitigate its influence or erase the damage given the many polarized information channels we have in the world today.”
The ability to plant the initial seed is what counts. After that, the malicious actor can rely on the Streisand effect to lodge the untruth in public consciousness.
This threat may be a relatively new one (deepfakes are said to have emerged in 2017), but it has ramped up exceedingly quickly. According to a report from Sentinel, a company that specializes in information warfare, the number of deepfakes in circulation has grown by 900% year-on-year (totalling more than 145,000).
Distributed online and ricocheting between the walls of social media echo chambers, these deepfakes have racked up almost six billion collective views. The opportunity to swing public opinion and to tamper with the fabric of reality is very clear.
Balancing the cost-benefit equation
At the current juncture, it’s difficult to see how society might capitalize on the full potential of AI content generation without unleashing a truly fearsome new beast. The possibilities are as captivating as the dangers are terrifying.
Without exception, the experts we consulted waxed lyrical about the quality of the latest language models and the opportunities the next generation will usher in. None of them, however, were able to account for the damage these same tools could inflict.
There are efforts underway to develop systems whereby digital content is marked with an indelible and inimitable stamp, verifying its origins, but these are in their infancy and the practicalities are as yet unclear.
Others have suggested the tamper-proof and decentralized nature of blockchain technology means it could be used to reliably trace the origins of a piece of information and build trust in content shared via social media. But again, this method is untried and untested.
In the coming years, regulators will also have plenty to say about the proper and improper applications of AI, but may end up stymying innovation as a result.
Until a foolproof method of shielding against fakes has been developed, we must all learn to think twice about whether our eyes and ears deceive us.