Why Is YouTube Verifying Hateful Creators Who Post Fake News?

8 February 2017, 15:30 | Updated: 17 July 2017, 12:24

We the Unicorns

By Josh Lee

Members of the Alt-Right movement have received verification ticks

Social platforms are only as good as the people who use them. So when users start to connect with each other and spread harmful ideas - like racism, sexism or homophobia - things can turn nasty, quickly. Especially when those creators can boast verification.

YouTube isn't exempt from this. You only have to look at the comments under almost any video to find something particularly hateful. But while the platform has some measures in place to ban accounts that break its community guidelines or harass other users, in some cases hateful speech simply goes ignored. More worryingly, some accounts that openly mock minorities or spread misinformation carry verification badges. Which got us thinking - should YouTube really be verifying these sorts of users?

Verification doesn't necessarily mean YouTube agrees with what the user is saying.

According to YouTube, anyone with over 100,000 subscribers is eligible to apply for a verification badge. And the badge itself simply means "that the channel belongs to an established creator or is the official channel of a brand, business or organisation." But just because YouTube wants verification to imply one thing, that doesn't stop the badge picking up extra meaning.

Many people see verification as a mark of authority or trust.

It means that a creator has been recognised for knowing what they're talking about. Creators who are seen to be trusted will have their content shared, those shares lead to more subscribers, and the cycle continues. But what if the creator established themselves by sharing ideas that weren't properly informed? Or, worse still, by targeting minority groups, or harassing other users? For example, last year a prominent, verified YouTuber shared a video linking the American civil rights group Black Lives Matter to a violent kidnapping, despite the fact that police said there was no link between the group and the crime in question. Should YouTube really be giving out verification badges to creators who use lies to spread hateful ideas?

So how should YouTube decide who gets to be verified?

While many people would like YouTube to only verify creators who share certain political beliefs, we're probably not going to see a company as huge as YouTube take such a strong political stand, one way or another. But at the very least, YouTube should be taking a stand against creators who harass individuals or use misinformation to spread hateful ideas. It might not be possible to ban every discriminatory user on YouTube, but there's no excuse for verifying purposefully dishonest and hateful creators. Verification clearly has its benefits, but in the wrong hands it can contribute to the growing levels of harassment and fake news that have been seen across all social platforms.