A new EU bill aimed at ChatGPT could give creatives more power over their work
Citation needed
According to new draft legislation from the European Union, developers of artificial intelligence tools like ChatGPT will be required to publicly disclose any copyrighted material used in building and training their systems. The legislation is set to be the West’s first comprehensive set of rules governing the rapid expansion of AI.
If it goes ahead, this kind of legislation would give both publishers and content creators a new way to seek a share of the profits when their work is used as source material for AI-generated content - something many writers, artists, and other creatives have long been asking for.
Say you ask ChatGPT to write you a script based on your favorite YouTube series: rather than combing the web on demand, the bot draws on the vast trove of existing content it was trained on to produce the ‘new’ work. The issue of AI bots being built on other people’s work has been one of the bigger concerns raised as more and more chatbots, such as Google Bard and Microsoft Bing, are integrated into our lives.
The drafts of the bill and its amendments aren’t final yet, but they do reflect a broad agreement among lawmakers that emerging AI tech is in dire need of proper regulation. EU member states aim to negotiate and pass a final version of the bill later this year.
Nipping things in the bud
The AI arms race began quickly after the release of ChatGPT last fall, prompting companies like Microsoft and Google to rush into releasing their own generative AI tools. Under the new provisions that will be added to the EU’s AI bill, developers will have to publish a ‘sufficiently detailed summary’ of any copyrighted materials used in training or informing the bots.
For the unaware: AI models are essentially trained to make their own content by ingesting and analyzing vast amounts of existing text, images, video, and music. The new legislation is geared towards giving the owners of the original work these models learn from more say over what happens with it - and potentially a way to demand compensation.
The big question at the core of this is whether AI companies truly have the right to scrape content off the internet to feed their machine learning models. Many users are already hesitant to share their work with ChatGPT when seeking feedback or critique, fearing what will be done with the information they hand over. Only recently did we get the option to opt out of letting ChatGPT use our conversations for training purposes.
As we invite more and more AI tools into our lives, we should be quick to hold these models - and the developers behind them - accountable to the law, as there is no telling what may come from such a popular emerging technology.