Mind the Gaps: Social Platforms & the Pitfalls of their Moderation Policies

And what you can do to change that



In today's digital age, social media platforms serve as our virtual gathering places. They enable us to connect, share and communicate, bridging gaps and fostering global communities. However, these platforms have also become breeding grounds for hate speech, misinformation and harmful content, raising the question: Can we rely on them to combat online abuse effectively?

Report in a Nutshell

A recent report from the Center for Countering Digital Hate (CCDH) exposes the challenges social media platforms face in moderating their content, and it paints a grim picture. It reveals that X (formerly Twitter) failed to take action against a staggering 86 percent of posts containing hate speech. This alarming finding prompts us to ask: Are social media platforms equipped to tackle online abuse?


What's even more disconcerting is the nature of the unmoderated content. These posts included Holocaust denial, condemnation of interracial relationships and racist memes. 


Despite its stated commitment to combating hatred and prejudice, X continues to host 86 percent of the reported extreme hate speech posts. Even a week after being reported, 90 percent of the offending accounts remained active, and X had taken action against only a handful of them.


Why Aren't Platforms Taking Action?

To understand this, let's borrow a comparison from Adam Liptak on The Daily podcast: the difference between a bookstore and a newspaper.

  • A bookstore offers various books but doesn't endorse every word within them. 

  • A newspaper exercises editorial control and assumes responsibility for its content. 

Social media platforms sit somewhere in between, but they often lean towards being bookstores to avoid liability for content. As a result, they engage in minimal content moderation and fail to address the larger problem of online abuse. This is where third-party support becomes crucial in combating online abuse and maintaining responsible online spaces.

Status Quo Doesn’t Cut It

X faces challenges in maintaining safe discourse. While its moderation policy aims to foster a safe environment, there are gaps, including:

  • vague definitions of "violent speech" and "hateful conduct"

  • inconsistent enforcement

  • an opaque appeals process across diverse cultural and legal contexts

X aims for a secure and open environment but needs improvements in clarity, consistency and enforcement to serve its global user base effectively.

Facebook faces similar moderation challenges. Its Community Standards prioritize authenticity, safety, privacy and human dignity. However, issues remain around:

  • policy interpretation and safety enforcement

  • hate speech definitions

  • privacy concerns and transparency  

  • the role of algorithms


Why is the status quo dangerous? 

Neglecting moderation has far-reaching consequences. Hate speech and harmful content create hostile environments, discouraging users from engaging with platforms that allow such content. This damages the user experience and erodes a platform's credibility and trustworthiness. For brands, the consequences can include:

  • a tarnished reputation 

  • reduced engagement 

  • employee well-being concerns

 

So now what?

Fixing online abuse requires government regulation, simplified reporting mechanisms, closer collaboration with law enforcement, stronger protections from employers and broader public education.

 

The CCDH report highlights the pressing need for more effective social media moderation. While there is still value in pressing platforms to strengthen their efforts against online abuse, users should also explore alternatives, such as independent moderation services, when platforms fall short.

At Areto Labs, we are committed to addressing these challenges and creating safer online spaces. Reach out to us today to explore effective social media moderation solutions and protect your online community. Together, we can make the digital world a safer place for everyone.

 