YouTube is facing a new wave of criticism over an alarming number of predatory comments and videos targeting young children.
The latest concerns began with a Reddit post submitted to r/Drama and a YouTube video by Matt Watson exposing what he described as a soft-core pedophilia ring on YouTube. Watson, a former YouTube creator who returned with a single video and livestream on the topic, showed how a search for something like “bikini haul,” a video genre in which women show off bikinis they have purchased, can lead to alarming, predatory videos of children. The videos themselves are not sexual in nature, but many commenters sexualize the children in deeply disturbing ways. Many comments also include links to unlisted private videos. These videos are monetized with advertisements from companies like Grammarly and Google itself.
Reddit users, media organizations, and marketers responded shortly after Watson’s video, in which he expressed disgust at the trend. After his post went viral, Epic Games, the company behind Fortnite, and Peloton pulled their pre-roll advertisements from YouTube; Peloton, the indoor exercise bike company, said it was trying to find out how its ads had ended up on such videos. YouTube responded by upgrading its “channel strike system,” a program that warns users about inappropriate behavior using a three-strike process.
Forbes, Mashable, Business Insider, and The New York Times all pointed out that this was not a real solution to the underlying problem, whether that problem is algorithmic, human, or a failure of artificial intelligence. Interestingly, many of the videos Watson viewed already had comments disabled, suggesting that YouTube’s content moderators had noticed the comment chains and tried to make it harder for pedophiles to operate on the site.
The problem, in fact, dates back to at least 2017. In the same month that the “Elsagate” scandal erupted, alarms had begun to ring about strange, potentially sexual comments on videos of young children. Major marketers such as AT&T stopped spending on ads on the site after the BBC reported that offensive comments on monetized children’s videos often included timestamps pointing to seconds-long moments of children in suggestive positions, as well as links to unlisted videos. Advertisers did not resume spending until the issue was resolved.
After the twin controversies of 2017, YouTube revised its monetization policies. Today, video creators need at least 1,000 subscribers and 4,000 hours of watch time on their videos. These rules were put in place only after Elsagate, and the company now claims that all monetized videos are reviewed by real people, not artificial intelligence.
That is why so many children’s videos have comments disabled underneath them. YouTube’s 10,000 human reviewers, who are tasked with examining the 400 hours of footage uploaded to the platform every minute, are also responsible for monitoring engagement with those videos and disabling inappropriate comments, links to unlisted videos, and timestamps of children in suggestive positions.
While these videos were not monetized for advertising, they were also obviously not being tracked, at least not successfully, by either humans or AI. Some of the videos should not be on YouTube at all, yet they have millions of views. If comment and content moderation remain as ineffective as they appear, children will continue to be the subject of sexual interest on YouTube, and the platform will remain a tool for predators.
As YouTube becomes a place where more and more children spend time watching videos, as well as a place to share their own creations, those in charge of the platform must decide whether moderation should be pursued in the interest of plausible deniability or in the interest of protecting children. If the company chooses the latter, and I hope it does, it will have to rethink both its personnel and its technology. If it doesn’t, it will have to rethink what role on the internet it wants to play.