The big YouTube crackdown: 8.3 million videos were removed in just three months
YouTube opens up about how it deals with flagged videos
YouTube has revealed the extent of its video problem, with a total of 8.3 million videos removed from the site between October and December last year.
Out of these, some 6.6 million were flagged by an algorithm. The rest were picked up by human eyes that we guess need a little bleaching right now.
The YouTube Community Guidelines enforcement transparency report highlights what Google's doing to make sure YouTube is a safer place to visit. But it also reveals just what a cesspit of controversy the site could be if it didn’t have these measures in place – measures that many still think don’t go far enough to stem the flow of hateful or abusive imagery.
The report states that out of the videos taken down, 75% were never actually seen by the public. Flip this, though, and it means a quarter were.
It is the public that is doing much of the job YouTube should be doing, acting as moderators of the site by flagging content where necessary. Currently 30% of what is flagged is 'sexual content', while 26% is badged as 'spam or misleading'. 'Hateful or abusive' content clocks in at 15%, with 'violent or repulsive' content making up 13% of videos flagged.
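For the curious, here's a rough back-of-the-envelope breakdown of those headline numbers, using only the figures quoted above. This is a purely illustrative Python sketch, not anything YouTube publishes:

```python
# Illustrative arithmetic only - the figures are those quoted in YouTube's
# Q4 2017 Community Guidelines enforcement report, not an official API.
total_removed = 8_300_000
machine_flagged = 6_600_000
human_flagged = total_removed - machine_flagged  # roughly 1.7 million

print(f"Machine-flagged share: {machine_flagged / total_removed:.0%}")  # ~80%
print(f"Human-flagged share:   {human_flagged / total_removed:.0%}")    # ~20%

# What the public reports, per the same report
flag_reasons = {
    "sexual content": 0.30,
    "spam or misleading": 0.26,
    "hateful or abusive": 0.15,
    "violent or repulsive": 0.13,
}
for reason, share in flag_reasons.items():
    print(f"{reason}: {share:.0%}")
```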
Red flag
In the report YouTube explains a little more about its Trusted Flagger program, which is available to government agencies and other institutions. This is a priority program for YouTube, but flagged videos go through the same checks as they would if a regular user had reported them; they are simply prioritized.
“The Trusted Flagger program was developed to enable highly effective flaggers to alert us to content that violates our Community Guidelines via a bulk reporting tool,” says the report.
“Individuals with high flagging accuracy rates, NGOs, and government agencies participate in this program, which provides training in enforcing YouTube’s Community Guidelines.”
The bulk of video checking, however, is done by AI – something that is unlikely to change given the sheer amount of content being flagged, although YouTube is said to be recruiting thousands of people to aid in this job.
“YouTube developed automated systems that aid in the detection of content that may violate our policies. These automated systems focus on the most egregious forms of abuse, such as child exploitation and violent extremism,” reads the report.
“Once potentially problematic content is flagged by our automated systems, human review of that content verifies that the content does indeed violate our policies and allows the content to be used to train our machines for better coverage in the future.
“For example, with respect to the automated systems that detect extremist content, our teams have manually reviewed over two million videos to provide large volumes of training examples, which improve the machine learning flagging technology.”
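Pieced together from that description, the review loop looks something like the sketch below. It's a purely illustrative Python outline – the class and function names are our own invention, not YouTube's code – showing content flagged automatically, verified by a human, and then fed back as training data:

```python
# Illustrative sketch of the flag -> human review -> retrain loop YouTube
# describes. Names and structures are hypothetical, not YouTube's code.
from dataclasses import dataclass, field

@dataclass
class Video:
    video_id: str
    flagged_by_model: bool = False
    violates_policy: bool = False

@dataclass
class ModerationPipeline:
    training_examples: list = field(default_factory=list)

    def auto_flag(self, video: Video, model_score: float, threshold: float = 0.9) -> bool:
        """Automated systems flag the most egregious content for review."""
        video.flagged_by_model = model_score >= threshold
        return video.flagged_by_model

    def human_review(self, video: Video, reviewer_says_violation: bool) -> None:
        """A human reviewer confirms whether the flagged content breaks policy."""
        video.violates_policy = reviewer_says_violation
        # Verified decisions become training examples for future models.
        self.training_examples.append((video.video_id, reviewer_says_violation))

    def retrain(self) -> None:
        """Stand-in for retraining the classifier on human-verified examples."""
        print(f"Retraining on {len(self.training_examples)} verified examples")

# Example run-through
pipeline = ModerationPipeline()
clip = Video("abc123")
if pipeline.auto_flag(clip, model_score=0.97):
    pipeline.human_review(clip, reviewer_says_violation=True)
pipeline.retrain()
```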
It goes on to note that, in 2017, automated systems helped flag more violent extremist content for human review, and that advances in machine learning meant nearly 70% of violent extremism content was taken down within eight hours, and around half of it within two hours. YouTube is now using what it learned from this to deal with other problematic content.
When it comes to flagging these types of videos, India tops the list of countries with the most flagged content, with the US in second place and the UK sixth.
If you'd like to see how YouTube's flagging system works in a strangely cutesy way, then head to the video below.