YouTube has reportedly removed over 58 million videos from the platform in the previous quarter over policy violations. According to YouTube’s report, between July and September the company took down 7.8 Mn videos directly, along with around 1.7 Mn channels and more than 224 Mn comments, with machine learning playing an important role in the removal process.
As stated by YouTube, the company has always employed a blend of human reviewers and technology to manage violative content posted on the platform, and since 2017 the firm has used machine learning to flag inappropriate content for review by its teams. YouTube further added that the mix of smart detection techniques and skilled human reviewers has increasingly helped the organization enforce its policies consistently and at speed.
Reportedly, the channels that were taken down were those that had violated community guidelines three times in one way or another; channels that featured extreme abuse or were wholly dedicated to violating guidelines were also removed. Sources cite that around 80 percent of the 1.7 Mn removed channels promoted spam content, more than 12 percent hosted adult content, and nearly 4.5 percent were deleted for violating child safety policies. As a consequence of these channel deletions, the videos they hosted were also taken down, accounting for an additional 50.2 Mn videos removed during the last quarter.
If reports are to be believed, of the 7.8 Mn videos removed on grounds of violating the company’s community guidelines, nearly 81 percent were identified through automated systems. Moreover, most of those videos, approximately 74 percent, did not receive even a single view before removal.
The more than 224 Mn comments removed by YouTube comprised those that violated the website’s community guidelines, in addition to comments cited by YouTube as “likely spam” that never received approval from the channels on which they were posted, as per the company’s statement. Reportedly, YouTube’s automated systems detected over 99.5 percent of the policy-violating comments.