posted by Josh Lee

Will this finally snuff out extremist content on YouTube?

YouTube is taking yet more steps to clamp down on extremist content circulating on the platform. After a spate of high-profile terror attacks across the Western world in 2017, as well as a rise in racist hate speech online, YouTube has come under huge pressure to stop hateful content spreading on the site.

The company lost millions in ad revenue after an investigation by The Times found high-profile brands’ adverts running alongside extremist videos. They initially responded by tightening up their restricted mode filter, which ended up blocking family-friendly LGBTQ content from being shown in restricted mode, sparking backlash.

Then, YouTube added strict new rules on what sorts of videos could be monetised to stop creators who make extremist content from profiting from it, and gave advertisers more control over where their ads would appear.


So-called Islamic State (ISIS) have been able to upload extremist content onto YouTube

Today, Google introduced even more measures to combat extremist content on YouTube.

“Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all,” Google said in a statement. “Google and YouTube are committed to being part of the solution. We are working with government, law enforcement and civil society groups to tackle the problem of violent extremism online. There should be no place for terrorist content on our services.”

The four changes aim to bring together technology and people power, not only to devalue extremist content on YouTube but also to divert people who may be vulnerable to radicalisation away from it.


Here’s what Google have in store

Increasing their use of technology to find extremist content on YouTube

“This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user… We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.”

Increasing the number of real people scouting for extremist content on YouTube

“Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern. We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants.”

Blocking comments and monetisation on problematic or inflammatory content

“…For example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find.”

Redirecting at-risk-of-radicalisation users to anti-extremist content

“Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.”


These changes come as the European Union draws up plans to force sites like YouTube to curb hate speech, with proposed laws that could see platforms fined if they fail to remove content that breaks the rules.

Google said in a statement that these changes should “strike the right balance between free expression and access to information without promoting extremely offensive viewpoints.” They have also pledged to “keep working on the problem until [they] get the balance right.” So watch this space for more changes.

 
