Google is cracking down hard on deepfakes

Deepfake
(Image credit: Shutterstock / meamorworks)

Earlier this month, Google added deepfake training to the list of projects banned on its Colaboratory service. The change was first spotted by a DeepFaceLab (DFL) developer who goes by the name ‘chervonij’ on Discord. When he tried to train his deepfake models on the platform, he received an error message:

“You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.”

Google appears to have made the change under the radar, and has since remained quiet on the matter. While ethics are the first potential theory to come to mind, the actual reason might be a bit more on the pragmatic side.


Abusing the free resource

Deepfakes are “photoshopped” videos: fake footage showing people saying things they never actually said. Their creators leverage artificial intelligence (AI) and machine learning (ML) technologies to produce highly convincing videos, which are becoming increasingly difficult to distinguish from legitimate content.

To make them convincing, though, deepfakes require significant computing power, much like that offered through the Colab service. The Google project allows users to run Python in their browser while drawing on free computing resources.
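For context, tools that target Colab often check whether they are actually running inside it before requesting its resources. A common idiom (a sketch, not taken from DFL's actual code) is to test whether the `google.colab` package, which Colab injects into its runtimes, can be imported:

```python
def running_in_colab() -> bool:
    """Return True if this interpreter appears to be a Colab runtime.

    Colab preinstalls the 'google.colab' package, so a successful
    import is a common (if informal) signal for the environment.
    """
    try:
        import google.colab  # noqa: F401
        return True
    except ImportError:
        return False


if __name__ == "__main__":
    # On a local machine this prints False; inside Colab it prints True.
    print(running_in_colab())
```

Run outside Colab, the function simply returns `False`, so scripts can fall back to local GPUs or exit gracefully.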

As deepfakes are usually used to crack jokes, spread fake news, or create fake revenge porn, it’s easy to assume ethics are behind Google’s decision. However, it might also be that too many people were using Colab to create fun little deepfake videos, preventing other researchers from doing more “serious” work. After all, the computing resource is free to use.

Besides deepfakes, Google doesn’t allow Colab to be used for projects such as mining cryptocurrency, running denial-of-service attacks, password cracking, using multiple accounts to work around access or resource usage restrictions, using a remote desktop or SSH, or connecting to remote proxies.

Via: BleepingComputer

Sead Fadilpašić

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.