Who's in Charge? The Growing Debate over Social Media Moderation Responsibility

In recent years, social media platforms like Facebook and Twitter have come under increasing pressure to address content moderation. From hate speech to misinformation, these companies have had to grapple with the challenge of keeping their platforms safe for users. However, the tide may be turning, with some of the big players in the industry slowly pulling back from their content moderation responsibilities.

One of the biggest indicators of this trend is the recent winding-down of Facebook's content moderation operations in countries like Kenya, where the company has struggled to moderate content effectively. The move has left many wondering whether Facebook is simply giving up on trying to manage the flow of content on its platform. Meanwhile, Twitter has been struggling with its own moderation efforts, as evidenced by the company's decision to lay off a significant number of its content moderators last year.

So what's going on here? Why are these big social media platforms seemingly giving up on content moderation?


It’s too big of a task. 

One possible explanation is that the task of moderating content on these platforms has simply become too overwhelming. With billions of users and an endless stream of content, it's becoming increasingly difficult for these platforms to effectively moderate everything that's being posted. As a result, they may be starting to shift the responsibility onto individual users or smaller companies that specialize in moderation.

There are also valid concerns about the treatment of human content moderators and the logistics of running moderation offices. Finding moderators who cover the world’s languages and understand the unique cultural, social and political contexts of the communities whose content they’re moderating, and then ensuring they have proper financial and mental health support, is complex. There have been allegations of low pay, trauma, exhaustion, union-busting, and generally poor working conditions at several content moderation offices. This underscores the need for greater accountability and transparency in how social media platforms operate.


Regulatory pressure. 

Another explanation could be related to the increasing regulatory pressure that these platforms are facing. As governments around the world start to crack down on hate speech, misinformation, and other harmful content, social media companies are being forced to take action. However, this action comes at a cost, both in terms of financial resources and the risk of legal liability. By stepping back from content moderation, these companies may be hoping to reduce their exposure to these risks.


Leadership changes shift priorities. 

Twitter, in particular, has made culture-changing decisions via CEO Elon Musk's Twitter polls, changes to API access, and the frequent addition and removal of features. 

For example, Musk has suggested in Twitter posts how he intends to handle hateful content going forward: “New Twitter policy is freedom of speech, but not freedom of reach. Negative/hate tweets will be max-deboosted & demonetized, so no ads or other revenue to Twitter.” While this may seem straightforward on a first reading, what counts as “hateful” or “negative” has not been defined. 
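In practice, “freedom of reach” policies are typically implemented as visibility filtering: a flagged post stays up, but its distribution is reduced and it earns no ad revenue. The sketch below is only a hypothetical illustration of that idea, not Twitter's actual system; the toxicity threshold and deboost factor are assumptions made for the example.

```python
# Hypothetical sketch of "freedom of speech, not freedom of reach":
# flagged posts stay visible but get reduced distribution and no ads.
# The threshold and deboost factor are illustrative assumptions, not
# Twitter's real values.

def apply_reach_policy(toxicity_score: float, base_reach: float) -> dict:
    """Return adjusted reach and monetization for a post.

    toxicity_score: score in [0, 1] from some upstream classifier.
    base_reach: the distribution the post would normally receive.
    """
    TOXICITY_THRESHOLD = 0.8  # assumed cut-off for "negative/hate" content
    MAX_DEBOOST = 0.1         # assumed fraction of normal reach when deboosted

    if toxicity_score >= TOXICITY_THRESHOLD:
        return {"reach": base_reach * MAX_DEBOOST, "monetized": False}
    return {"reach": base_reach, "monetized": True}


print(apply_reach_policy(0.95, 10_000))  # {'reach': 1000.0, 'monetized': False}
print(apply_reach_policy(0.20, 10_000))  # {'reach': 10000, 'monetized': True}
```

As the sketch makes plain, everything hinges on the classifier that produces the score and on where the threshold sits, which is exactly the part that has not been spelled out.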

All of these changes have left those who track social media trends scratching their heads.


So, what needs to happen?

Improved transparency from tech companies would be beneficial in helping users understand how content moderation decisions are being made and why certain actions are being taken. This would also give users more control and say in how the platform works for them, which could help improve user trust and engagement.

However, implementing these changes is not always straightforward, and companies need to balance concerns around user privacy, security, and data protection. In addition, the fast-paced and dynamic nature of social media means that content moderation strategies need to constantly adapt and evolve, making it a challenging and ongoing process.

Overall, it is important to recognize that content moderation is a complex and challenging issue, and there is no easy solution to this problem. While transparency and user control could be part of the solution, it will likely require a combination of approaches and ongoing efforts to find the right balance that works for both users and tech companies.

Areto Labs is part of that solution. Much of the world is afraid of AI and machine learning, because we don’t want to lose our humanity to algorithms. However, there is simply too much toxicity on platforms like Twitter for it to be feasible for real people to do the moderation, even an army of real people. We need to embrace technology to help combat the issue. Areto Labs applies a social science lens to content moderation, and our micro-aggression model has higher sensitivity to racist, misogynistic, anti-2SLGBTQIA+, and ableist language than the industry standard. 
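To make the automated side of this concrete, here is a minimal sketch of what machine-assisted toxicity scoring can look like, using a publicly available open-source classifier as a stand-in. The model, labels, and 0.7 threshold are illustrative assumptions; this is not Areto Labs’ micro-aggression model.

```python
# Minimal sketch of automated toxicity scoring over a stream of posts.
# "unitary/toxic-bert" is a public example model used as a stand-in; it is
# NOT Areto Labs' model, and the 0.7 threshold is an illustrative assumption.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

posts = [
    "Great game last night, congrats to the whole team!",
    "Nobody wants you here, go back to where you came from.",
]

for post in posts:
    result = classifier(post)[0]      # e.g. {"label": "toxic", "score": 0.97}
    flagged = result["score"] >= 0.7  # route high-scoring posts to human review
    status = "FLAG" if flagged else "ok"
    print(f"{status:4} | {result['score']:.2f} | {post}")
```

A production system layers much more on top of a score like this, including context, language coverage, appeal paths, and human review, but the basic shape of model-assisted moderation is scoring content at a volume no human team could match.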

The problem won’t fix itself. 

Whatever the reason, the trend towards reduced content moderation by social media companies is concerning. It's clear that there is a significant need for effective moderation on these platforms, as evidenced by the many instances of harm caused by hate speech, harassment, and misinformation. While there may be no easy solution to this problem, it's important for these companies to continue to take responsibility for the content on their platforms and work to find more effective ways to moderate it.

In the meantime, individual users and smaller companies will have to step up to fill the void left by the big social media platforms. Whether it's through community-led moderation efforts, innovative technologies, or other solutions, it's clear that there is a need for action to ensure that social media remains a safe and inclusive space for all.


Not sure where to start? Try this free tool.

If you’re scrolling through comments and you’re unsure what is / isn’t abusive, copy and paste the text into www.aretoanalyzer.com! It’s a free tool we built, so people like you can feel more confident in what to block, delete, report or respond to. No strings attached.

And if you’re looking to monitor, moderate and counteract online abuse at scale and in real-time, we’d love to hear from you. Reach out to hello@aretolabs.com.
