Can You Moderate Violence?   

A response to Meta’s Oversight Board Review of Anti-Muslim Content In the Aftermath of the UK Riots 

By Santiago Bracho

Photo by Julio Lopez on Unsplash

On 3 December 2024, Meta’s Oversight Board announced that it was considering three cases connected to the UK riots of summer 2024. On 29 July, the murder of three young girls in Southport, UK was falsely attributed to a “Muslim refugee”. The spread of online disinformation fuelled riots targeting Muslims, migrants and refugees. The cases currently under review by the Oversight Board involve three Facebook posts made in support of these riots. Despite being reported multiple times, the posts were allowed by Facebook moderators to remain online.

As the Oversight Board reviews these cases, it is clear that the spread of anti-Muslim bigotry across the UK is not limited to this case alone. In fact, the UK riots have shone a spotlight on the effects of anti-Muslim comments festering online; activists and journalists such as Imran Mulla have argued that the British government “has failed to challenge [these] harmful narratives towards British Muslims.” Furthermore, the direct role that online misinformation and racist commentary on social media played in stoking the riots cannot be ignored. Investigations have shown that the initial claim that the Southport stabbings were perpetrated by a Muslim refugee can be tied to three X accounts which received a combined 10 million views. This Oversight Board case therefore brings a pressing question into focus: does online moderation have the appropriate safeguards to deal with incendiary content?

Can Moderators Stop the Incitement of Violence?

Meta moderators’ initial review of the three posts in question found that only one of them directly incited “violence” or spread “hate speech”. According to Meta, its guidelines “favour maximum protection for ‘voice’ and freedom of speech”. As only one of the posts directly endorsed attacks on property, it was the only one removed. This has become a common framing in debates over anti-Muslim prejudice: the idea that such comments cannot be deleted or removed because doing so would infringe on the commentator’s freedom of speech. As recently as Prime Minister’s Questions on 28 November, MP Tahir Ali drew widespread criticism for arguing that the “desecration of religious texts” should constitute an act of religious hatred. Other MPs and commentators claimed this would amount to a “blasphemy law” and infringe on free speech.

Although this is part of a lively civic debate, it does little to help us understand how online misinformation spreads across the biggest social media platforms. First, the Oversight Board’s case is specifically focused on whether or not these three posts violated the terms and conditions of Meta’s platforms. Under Meta’s own guidelines, hate speech in the form of “threats” is subject to removal. After further review, at least one of the three posts was removed because it contained calls to “destroy Mosques”. Nevertheless, it is important to note that even as the first wave of reports against the content came in, “the content stayed on the site and was never reviewed by a human.” Clearly, the “free speech vs Islamophobia” framing the media seems content to push is not the primary concern here. Rather, why does one of the largest social media conglomerates in the world delay the use of human moderators?

Second, these posts were made in connection with the UK riots, explicitly violent and racist acts. A human moderator might therefore have been able to assess how posts such as these were clearly part of a larger web of disinformation and racism, one that directly led to the destruction of property in communities across the country, with damage estimated at almost £2 million nationwide.

The Greater Legal Context

In the 21st century, the explicit connection between online misinformation and direct acts of targeted violence cannot be denied. The spreading falsehood that the Southport stabbings were perpetrated by a Muslim refugee led directly to the UK riots. Online hate speech cannot simply be said to have inspired violence or worsened anti-Muslim views; rather, it directly produced violence, in some extreme cases even being used by far-right groups to organise riots. The UK riots are not the first example of this insidious use of social media. The highest-profile and most violent case remains the 2017 anti-Rohingya killings in Myanmar, which were organised openly through Facebook groups and led to the murder of hundreds of Rohingya Muslims.

In the face of this new phenomenon, the British Home Office moved to arrest 30 people who posted pro-riot, anti-Muslim content during the UK riots. These arrests were aimed at people who made explicit threats or called for the targeting of specific groups. For example, Tyler Kay, sentenced to 38 months, called for “hotels housing asylum seekers to be set alight.” Such calls to violence are directly tied to cases such as Rotherham, where rioters set fire to a hotel they claimed housed asylum seekers.

Words and actions cannot be split into two neat, separate categories. Meta’s Oversight Board must take this particular case as a broader opportunity to review how tech and social media conglomerates can tackle hate speech and disinformation. Particular focus should be placed on Meta’s lack of oversight and the need for more human moderators. Furthermore, greater emphasis must be placed on the online promotion of violent acts; UK Technology Secretary Peter Kyle, for example, has asked Ofcom to update its rules on tackling disinformation and hate speech. Lastly, greater attention must be paid to the tropes of anti-Muslim bigotry. It cannot be that these forms of racism draw scrutiny only once violence has been committed. It falls to all of society to fight these kinds of prejudice, and social media companies must do their part.

 
