YouTube said it would revert to using more human moderators to police speech across its platform after automation censored too much. The slated change came after YouTube admitted to removing “the most videos we've ever removed in a single quarter.”

Almost 11 million videos were removed via automated flagging from April to June, about double the number taken down the previous quarter, after YouTube gave its machine learning system more autonomy to censor content. 

The move to rely on automated systems, brought on by the pandemic, allowed employees to remain home during lockdowns.

Neal Mohan, YouTube’s chief product officer, told the Financial Times that the company erred on the side of protecting its users when it came to using machines that are not as precise as humans. Yet YouTube did not protect its users' First Amendment rights.

Social media platforms have long touted the benefit of using artificial intelligence (AI) to referee the massive amount of content pushed out by their users each day. This case is an example of the trouble that relying on those systems can cause.

In May, YouTube “confirmed,” according to The Verge, that comments containing “certain Chinese-language phrases related to criticism” of the Chinese Communist Party (CCP) were being automatically deleted. The same thing was reportedly happening to comments critical of the Black Lives Matter protests taking place during the civil unrest in 2020.

Claire Wardle is the executive director at First Draft, a non-profit group that claims to address misinformation on social media. She told the Financial Times that AI has seen progress addressing graphic content like violence and pornography. 

“But we are a very long way from using artificial intelligence to make sense of problematic speech [such as] a three-hour rambling conspiracy video,” she said. “Sometimes it is a nod and a wink and a dog whistle. [The machines] just can’t do it. We are nowhere near them having the capacity to deal with this. Even humans struggle.”

While algorithms can identify videos that are potentially harmful, Mohan said, according to the Financial Times, they are not so good at deciding what should be removed.

“That’s where our trained human evaluators come in,” he said, adding that, in conjunction with the speed of machines, humans can help “make decisions that tend to be more nuanced, especially in areas like hate speech, or medical misinformation or harassment.”
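In practical terms, the division of labor Mohan describes resembles a confidence-threshold triage pipeline: a classifier scores each flagged video, near-certain violations are removed automatically, and ambiguous cases are routed to human reviewers. The sketch below is illustrative only; the threshold values, the `Video` and `triage` names, and the routing labels are assumptions for the example, not details of YouTube's actual system.

```python
# A minimal, hypothetical sketch of the human-in-the-loop flow described
# above. Thresholds, names, and routing labels are illustrative
# assumptions, not YouTube's real pipeline.

from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.98   # assumed: only near-certain cases skip review
HUMAN_REVIEW_THRESHOLD = 0.70  # assumed: likely violations go to people

@dataclass
class Video:
    video_id: str
    harm_score: float  # classifier's estimate that the video violates policy

def triage(video: Video) -> str:
    """Route a flagged video: machines surface candidates quickly, while
    nuanced removal decisions are left to trained human reviewers."""
    if video.harm_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"         # high-confidence violations
    if video.harm_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review_queue"  # nuanced cases: hate speech, misinfo
    return "no_action"

# With fewer human reviewers available during lockdown, lowering the
# auto-remove threshold removes more videos automatically -- and more
# mistakenly, as the quarter's record takedown numbers showed.
print(triage(Video("abc123", 0.85)))  # -> human_review_queue
```

Under this kind of scheme, shifting work from the review queue to automatic removal is a one-parameter change, which is consistent with how quickly takedowns roughly doubled once the machines were given more autonomy.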

Conservatives are under attack. Contact YouTube at 650-253-0000 and demand that the platform mirror the First Amendment: Tech giants should afford their users nothing less than the free speech and free exercise of religion embodied in the First Amendment as interpreted by the U.S. Supreme Court. If you have been censored, contact us at the Media Research Center using our contact form, and help us hold Big Tech accountable.