Since their introduction, AI programs have been the subject of intense controversy and scrutiny for a variety of reasons: hackers using them to push malware, inconsistency issues ranging from benign to genuinely dangerous, fake emails that threaten legal action or even people's jobs, and, of course, the ongoing plagiarism issue.
AI programs can be used to create pieces of art that, while sometimes complex and stunning in their own right, often bear clear markers of being artificially made. One of the main problems, however, isn't the art itself but the fact that, to create it, the program pulls from vast online databases that include copyrighted works.
The other problem, which is quickly emerging as an epidemic, is that people are submitting this AI-generated art as their own. We’re already seeing it with the Amazon Kindle store being overrun with ChatGPT-authored books and an AI-generated image recently winning a photography contest.
The AI disaster with Clarkesworld
That Amazon Kindle story is particularly relevant here, as it coincides with an ongoing incident at the sci-fi/fantasy online publication Clarkesworld Magazine that began earlier this February.
On February 20, 2023, the magazine's editor, Neil Clarke, announced on its official Twitter account that he was closing submissions. In the thread that followed, he revealed the scope of the problem: the magazine had received hundreds of what he believed to be AI-generated submissions in a single month.
Clarke argued that the main driving force is people outside the SF/F community: "'side hustle' experts making claims of easy money with ChatGPT." A graph comparing the number of people the zine has had to ban in a single month against the total for an entire year not only backs his claim that these so-called experts are driving the increase but is cause for deep concern in its own right.
What's even more concerning is that the zine hasn't found a foolproof way to weed out these fake submissions: detectors are far too unreliable, and alternatives such as charging a submission fee or accepting only mailed-in manuscripts would also hurt legitimate authors.
What does this mean for publications?
While the Amazon Kindle issue mainly targets a corporate giant with the finances to combat it, publications like Clarkesworld are much smaller in scale, and having to close submissions while they seek a resolution puts their finances at risk.
Various third-party tools are more effective at combating AI spam submissions, but their subscription fees cost far more than an average publication can afford. These tools also don't operate in certain regions, which would force a publication to ban those countries outright, cutting off legitimate submissions and revenue anyway.
Though we don't know for certain why this particular publisher was targeted first, we can surmise that it's because sci-fi publications typically pay higher rates than other genre markets, at least eight cents per word, thanks to the Science Fiction & Fantasy Writers Association. That makes them ripe targets for anyone trying to make a quick buck with low-effort AI-generated stories.
In the worst-case scenario, this could be the beginning of a deluge of spam submissions to other outlets that offer similarly fair rates, which would have a devastating impact on the industry at large. Even now, Clarke isn't sure what the next steps for his magazine will be, other than tentatively reopening submissions in March, and he has stated that he may have to close them right back up again.
Can AI and art coexist?
AI isn't all bad. It's a tool like any other, and shades of the current technology have been in use for ages: machine learning is embedded in spell checkers and grammar tools, which have been around since at least the '90s. There are plenty of other uses for the technology that can assist with day-to-day living.
The issue isn't with the tech itself but with how it's used and how it's regulated. Right now, we're in a delicate spot: concrete safeguards are lacking, and plenty of bad actors are looking to profit while targeting as many people as possible. Meanwhile, most AI-writing detection programs don't work well, with OpenAI's own tool reportedly failing 74% of the time.
That's not to say nothing is being done to curb the worst of the AI woes. AI chatbots are likely to be scrutinized under the UK's proposed Online Safety Bill, as confirmed by Lord Stephen Parkinson, a junior Parliamentary Under-Secretary in the Department for Culture, Media, and Sport. If so, this could be a massive step toward proper regulation, exposing both corporations and individuals to legal consequences for gross misuse.
Another landmark decision came from the US Copyright Office in February, when it reconsidered its decision to grant copyright protection to Kristina Kashtanova for her comic book Zarya of the Dawn. The ruling (sent to her lawyer by Robert Kasunic, the Associate Register of Copyrights) stated that while the written text and visual arrangement of the comic were hers, the images themselves, created by the AI image generator Midjourney, were not her own works and therefore cannot be protected by copyright.
So, to answer the question "can AI and art coexist?": yes, but only if we take this new technology seriously, establish vital boundaries and regulations, and build better tools to sniff out those who would use it deceptively. Otherwise, we'll keep seeing cases like Clarkesworld's, and that would deal irreparable damage to the creative world.