
YouTube To Deploy Reviewers To Filter Extremist, Disturbing Videos

06 December 2017

The move follows reports that videos featuring children had become the target of sexually inappropriate comments.

Wojcicki also revealed that 98 per cent of the videos the platform removes for violent extremism are now flagged by its machine-learning algorithm.

YouTube is taking firm action to protect its users from inappropriate content, with stricter policies and larger enforcement teams, YouTube CEO Susan Wojcicki said in a blog post.

The chief executive of YouTube has vowed to hire more staff and use cutting-edge machine learning technology to continue its fight against violent and extremist content. Several reports stated that advertisers had since become uncomfortable placing their ads on YouTube. The company said its new efforts to protect children from risky and abusive content and to block hate speech on the site were modelled on its ongoing work against violent extremist content.

In a blog post, Ms Wojcicki explained: "Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualised decisions on content". She added: "We want advertisers to have peace of mind that their ads are running alongside content that reflects their brand's values".

According to Wojcicki, YouTube spent the past year "testing new systems to combat emerging and evolving threats" and invested in "powerful new machine learning technology", and is now ready to employ this expertise to tackle "problematic content".

In recent weeks, YouTube has used machine learning technology to help human moderators find and shut down hundreds of accounts and hundreds of thousands of comments, according to Wojcicki. She did not say how many people now monitor YouTube for offensive videos.

She said that adding more people to identify inappropriate content will supply more training data and potentially improve its machine-learning software.
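The feedback loop she describes can be sketched in a few lines. The example below is purely illustrative, not YouTube's actual system: it assumes a toy bag-of-words Naive Bayes classifier where every human reviewer decision becomes a labelled training example, so more reviewers directly means more training data.

```python
from collections import Counter
import math

class NaiveBayesFlagger:
    """Toy human-in-the-loop flagger: reviewer labels are the training set.
    Illustrative only; all class and data names here are hypothetical."""

    def __init__(self):
        self.counts = {"ok": Counter(), "flagged": Counter()}
        self.totals = {"ok": 0, "flagged": 0}
        self.docs = {"ok": 0, "flagged": 0}

    def add_human_label(self, text, label):
        # Each human reviewer decision becomes one training example.
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)
        self.docs[label] += 1

    def score(self, text, label):
        # Log-probability of the label with add-one smoothing.
        vocab = len(set(self.counts["ok"]) | set(self.counts["flagged"]))
        n = self.docs["ok"] + self.docs["flagged"]
        logp = math.log((self.docs[label] + 1) / (n + 2))
        for w in text.lower().split():
            logp += math.log((self.counts[label][w] + 1) /
                             (self.totals[label] + vocab))
        return logp

    def predict(self, text):
        return max(("ok", "flagged"), key=lambda lb: self.score(text, lb))

flagger = NaiveBayesFlagger()
# Hypothetical reviewer decisions feeding the model.
flagger.add_human_label("great tutorial thanks for sharing", "ok")
flagger.add_human_label("lovely video very helpful", "ok")
flagger.add_human_label("violent extremist propaganda clip", "flagged")
flagger.add_human_label("graphic violent content", "flagged")

print(flagger.predict("another violent extremist clip"))
print(flagger.predict("thanks for the helpful tutorial"))
```

The design point is the one Wojcicki makes: the model's quality is bounded by the volume and quality of human labels, which is why hiring more reviewers and improving the algorithm go hand in hand.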

"Because we have seen these positive results, we have begun training machine-learning technology across other challenging content areas, including child safety and hate speech", she said. "But no matter what challenges emerge, our commitment to combat them will be sustained and unwavering".