Google's got a new strategy for combating online terrorism
Four steps to take down online terror
Google has revealed four additional steps it will take to tackle online terror. The pledge came in a post on the company blog written by Kent Walker, Google’s General Counsel, acknowledging the scale and scope of the YouTube and Google platforms, and the “uncomfortable truth […] that more needs to be done. Now”.
Before detailing the new steps, Walker outlines the measures the company already has in place to help prevent the distribution and redistribution of terrorist material. These range from the thousands of employees Google has reviewing content, to systems that automatically block the upload and re-upload of known terrorist material, to the company’s co-operation with governments and law enforcement.
Throughout the post, Walker emphasizes the company’s desire to strike a balance between the values of open and free societies and the prevention of terrorist acts that aim to erode those same values.
The steps
The first step is to devote more engineering resources and advanced machine learning to improving Google’s identification software. This is the software that should, ideally, automatically identify inappropriate videos and distinguish propaganda or glorification of terrorism from legitimate reporting on the same events by reputable news organizations.
Secondly, the company plans to greatly increase the number of Trusted Flaggers on YouTube, almost doubling the number of non-governmental organizations (NGOs) already taking part and supporting them with operational grants. While flags from the wider user base are often inaccurate, Google says that over 90% of the flags from this group of independent experts are accurate.
Thirdly, and perhaps most significantly for the average YouTube user, the online video behemoth “will be taking a tougher stance on videos that do not clearly violate [its] policies”. This means that videos that come close to infringing its policies but are technically still allowed (such as supremacist content) will have their comments sections disabled, be ineligible for recommendation or monetization, and appear behind an “interstitial warning”.
The last step is a proactive counter-radicalization measure. More specifically, YouTube will be expanding its use of the 'Redirect Method' – an approach that uses targeted advertising to reach potential Isis recruits and redirect them to anti-terrorist videos instead, a process which has apparently already proved rather successful.
- Here's another initiative by Google to use YouTube's community as flaggers of inappropriate content.