
Why Isn’t YouTube Doing More to Combat Comment Section Abuses?

Updated: Oct 13

by Ram ben Ze'ev



The YouTube comments section has long been notorious for its lack of civility. Unfortunately, it has also become a haven for far more nefarious activities, including commercial spam, harassment, and even child exploitation. Despite these serious issues, YouTube’s management has not taken more proactive steps to address them. Why isn't YouTube doing more to prevent the posting of dangerously abusive messages and commercial spam, or providing more effective tools for creators to police their channels?


YouTube’s comment sections have devolved into a breeding ground for hateful rhetoric and, alarmingly, criminal activity. Reports of predatory behaviour targeting minors have surfaced, revealing that child predators are using the platform to exploit and groom children.


Additionally, commercial spam and scams flood comment sections, often targeting the most vulnerable users. The consequences of such unchecked behaviour are severe, ranging from emotional distress to real-world harm.


To its credit, YouTube has made some attempts to address these issues. The platform employs automated systems that use machine learning to detect and remove inappropriate comments. It also provides creators with tools to moderate comments, such as filters for specific words and phrases, and the ability to block or report users.


However, these measures often fall short. Automated systems can be easily bypassed, and they sometimes remove benign comments while failing to catch harmful ones. Creators, especially those with large followings, find it challenging to keep up with the sheer volume of comments, even with the tools provided.
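

To see why such filters are so easy to evade, consider the minimal sketch below (in Python). The blocked-phrase list and matching logic are illustrative assumptions for this example only, not YouTube's actual implementation; the point is that any filter built on exact phrase matching is defeated by trivial misspellings and spacing tricks.

  # Illustrative blocked-phrase filter; the list below is an assumption for this example,
  # not one YouTube or any particular creator actually uses.
  BLOCKED_PHRASES = {"free crypto", "dm me", "cash app"}

  def is_blocked(comment: str) -> bool:
      """Return True if the comment contains any blocked phrase (exact, case-insensitive match)."""
      text = comment.lower()
      return any(phrase in text for phrase in BLOCKED_PHRASES)

  print(is_blocked("DM me for free crypto"))      # True  - the filter catches the literal phrase
  print(is_blocked("D M  m e for fr33 cryptO"))   # False - trivial obfuscation slips straight through

A more robust filter would at least normalise spacing, punctuation, and common character substitutions, which is precisely the arms race that spammers and abusers exploit, and why keyword lists alone cannot carry the load.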


---> Follow on Twitter/X @rambenzeev and read all of RAM's articles on X


The question remains: why isn't YouTube doing more to tackle these pervasive problems? Several factors contribute to this conundrum.


  1. Scale and Complexity: YouTube is one of the largest platforms on the internet, with over 2 billion logged-in monthly users and 500 hours of video uploaded every minute. Policing such an enormous volume of content and comments in real time is a Herculean task. The complexity of moderating content across different languages and cultural contexts adds another layer of difficulty.

  2. Resource Allocation: Effective moderation requires significant resources. Employing a large team of human moderators and developing more sophisticated AI systems is costly. YouTube, owned by Alphabet Inc., may be hesitant to allocate the necessary resources, especially if the financial return on such investment isn't immediately apparent.

  3. Freedom of Expression: There is a delicate balance between moderating content and preserving freedom of expression. Overzealous moderation can lead to accusations of censorship, which YouTube is keen to avoid. This balancing act can result in the platform being more lenient with certain types of comments.

  4. Profit Motive: At its core, YouTube is a business driven by advertising revenue. More engagement often translates to more ad impressions and, consequently, more revenue. Controversial and sensational comments, while problematic, can drive higher engagement rates. This creates a conflict of interest where stricter moderation could potentially reduce engagement and revenue.


Despite these challenges, the need for proactive measures is clear. Allowing the status quo to persist not only endangers users but also tarnishes YouTube’s reputation. Here are several steps YouTube could take to improve the situation:


  1. Enhanced AI and Human Moderation: Investing in more advanced AI technologies that can better understand context and nuance is crucial. This should be complemented by a larger team of human moderators who can review flagged content and provide a layer of discernment that AI currently lacks.

  2. Stricter Enforcement Policies: YouTube should implement stricter policies against abusive comments and spam. This includes longer and more comprehensive bans for repeat offenders and more robust verification processes to prevent the creation of multiple accounts by the same individual.

  3. Better Tools for Creators: Providing creators with more effective and user-friendly tools to manage their comment sections is essential. This could include more customizable filters, better analytics to identify problematic users, and easier ways to report and block offenders (a rough sketch of what such analytics might look like follows this list).

  4. Community Engagement: YouTube should engage more with its community to understand their needs and concerns. Regular feedback loops between the platform and its users can help identify gaps in the current moderation system and develop more effective strategies.

  5. Educational Initiatives: Educating users about the importance of digital etiquette and the risks associated with engaging in or ignoring abusive behaviour can help foster a more respectful community. YouTube can use its platform to promote positive behaviour and awareness about these issues.
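

As a concrete illustration of the analytics point above, the Python sketch below uses the public YouTube Data API v3 (commentThreads.list) to tally top-level comments per author on a single video and flag unusually prolific accounts, one rough signal of repeat spam. The API key, video ID, and the threshold of five comments are placeholders chosen for this example; this is a sketch of what a creator-side tool could surface, not a feature YouTube currently offers.

  from collections import Counter
  from googleapiclient.discovery import build  # pip install google-api-python-client

  # Hypothetical placeholders - supply your own API key and video ID.
  API_KEY = "YOUR_API_KEY"
  VIDEO_ID = "VIDEO_ID"

  youtube = build("youtube", "v3", developerKey=API_KEY)

  # Count top-level comments per author, following result pages until they run out.
  author_counts = Counter()
  request = youtube.commentThreads().list(
      part="snippet", videoId=VIDEO_ID, maxResults=100, textFormat="plainText"
  )
  while request is not None:
      response = request.execute()
      for item in response.get("items", []):
          author = item["snippet"]["topLevelComment"]["snippet"]["authorDisplayName"]
          author_counts[author] += 1
      request = youtube.commentThreads().list_next(request, response)

  # Surface accounts posting an unusually high number of comments - a rough spam signal.
  for author, count in author_counts.most_common(10):
      if count >= 5:
          print(f"{author}: {count} comments")

Even a crude report like this would give creators something today's dashboard largely lacks: a quick way to spot the handful of accounts responsible for most of the noise.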


The current state of YouTube’s comment sections is unacceptable, with serious implications for user safety and the platform's integrity. While the scale and complexity of the problem present significant challenges, they are not insurmountable. By investing in better moderation technologies, enforcing stricter policies, providing more tools to creators, and engaging with the community, YouTube can take meaningful steps towards creating a safer and more welcoming environment for all its users. The time for YouTube to act decisively is now, before more harm is done and the platform’s reputation is further damaged.


###


Bill White (Ram ben Ze'ev) is CEO of WireNews and the Executive Director of Hebrew Synagogue.
