Just days after YouTube told the Financial Times it would rely less on automated systems and more on humans to moderate content, the platform made policy changes that do the exact opposite. 

Changes to the platform’s age-restriction policy will increase YouTube’s use of automation once again and notably modify what is considered appropriate content for users under 18. 

YouTube, owned by Google’s parent company Alphabet, will begin using machine learning to automatically apply age restrictions, which require users to verify their age before viewing flagged content. The platform has also changed how that content is identified. Furthermore, viewers who discover age-restricted content on third-party websites will now be required to log in to YouTube and verify their age before watching.

These changes have angered critics who believe they will interfere with coverage of violent riots. PragerU, a conservative media outlet, had more than 100 videos land on YouTube’s restricted list despite their innocuous content. The list has also targeted videos defending the actions of Kyle Rittenhouse, who fatally shot two aggressors during social justice protests in Kenosha, Wisconsin.

Additional concerns have been raised that the move could harm outlets that rely on advertising from their own sites. YouTube said the changes will not affect advertising profits, as the videos it anticipates will receive automatic age restrictions would also likely violate its advertiser-friendly guidelines.

YouTube currently has a human team that applies these restrictions when content is deemed unsuitable for children. “Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age restrictions,” YouTube said in a blog post about the policy changes. 

In March, with the pandemic pushing employees to work from home, the platform indicated it would rely more on its automated systems to moderate content. YouTube eventually admitted there were problems with depending on automation and said it would return to relying on humans in order to stem the influx of removals, after a quarter in which almost twice as many videos were removed as in the previous quarter.

“Because our use of technology will result in more videos being age-restricted, our policy team took this opportunity to revisit where we draw the line for age-restricted content,” YouTube said, adding that “only minor adjustments were necessary.”

The changes come amid pressure from advocacy groups claiming that YouTube is unsafe for children, even though the platform has its own kid-friendly version, YouTube Kids.

Additionally, the Federal Trade Commission announced in September 2019 that Google would have to pay $170 million to settle a complaint that YouTube illegally collected children’s personal information. The refinement of its child protection policies could be seen as an effort to avert any future consequences of this magnitude.

Conservatives are under attack. Contact YouTube at (650) 623-4000 and demand that the platform provide transparency: Companies need to design open systems so that they can be held accountable, while giving weight to privacy concerns. If you have been censored, contact us using the Media Research Center contact form, and help us hold Big Tech accountable.