A splintering society: Polarisation on the internet

ARTICLE BY GERDA KOVACS

Gerda Kovacs holds a BA in Sociology and Criminology from the University of Westminster and an MSc in Human Rights and Politics from the London School of Economics and Political Science. Her research focuses on the intersection of human rights, critical criminology, and technology.

In most conversations around extremism, politics and democracy, the word ‘polarisation’ will ultimately rear its head. It is one of those words that evokes an uneasy sense of unrest and anxiety, and for good reason – we tend to talk about polarisation in the context of divisive and difficult topics, ones that we are emotionally invested in and over which we struggle to find common ground with those who disagree with us.

We are polarised over whether to vote Conservative or Labour, Democrat or Republican; we are polarised over the conflicts between Israel and Palestine and between Russia and Ukraine; we are polarised over the climate crisis, immigration, and whether we support trade union strikes. While it is perfectly natural that we all hold different opinions on these issues, and will disagree with those whose opinions differ from ours, we should take a moment to think about how such natural disagreements differ from polarisation, and why it is problematic for a society to become overly polarised. Polarisation is often discussed as if it were a form of political extremism, but it is important to clarify that the two concepts are not the same and should not be used interchangeably. Extremism, while lacking one universally accepted definition, generally refers to an ideology that promotes extreme views and actions and rejects the rule of law, democracy, and tolerance for opposing views. While extremism does not necessarily lead to violent action (usually categorised as terrorism), followers of extremist ideologies are often called on to use violence against those who oppose them and to achieve the goals of the group.

Polarisation, on the other hand, is an even broader and more diverse phenomenon, which often predates, but does not necessarily lead to, extremism. While a precise definition is similarly difficult to pin down, polarisation generally refers to ideologies that diverge from the centre, and to the tendency of those who follow them to be intolerant of, and to refuse to engage with, those on the opposite end of the spectrum.

Polarisation usually does not advocate the use of violence to reach political goals, but, similarly to extremism, those who are polarised will usually reject outright the opinions of those who disagree with them. Polarisation is often characterised by ‘echo chambers’, in which proponents of certain ideologies or viewpoints exclusively surround themselves with those who think like them, separating themselves from and denigrating those they disagree with.

Talking straightforwardly about polarisation matters partly because polarisation can easily act as a precursor to extremism, and it is not difficult to see why. We are all entitled to our opinions, and to form meaningful relationships with those who share our values – but to peacefully co-exist with others, to exercise critical thinking, and to make the compromises necessary for a functioning democracy, we must be willing to engage with those different from us. It is tempting to think that we will have nothing to say to each other, or that it is best to ignore our opponents because they are a threat to our values and way of life. But that is exactly why we must recognise polarisation for the slippery slope it is: it can encourage an extremist mindset and intensify already existing social tensions.

According to the Varieties of Democracy (V-Dem) Institute, polarisation has been on the rise since 2005 in every region of the world except Oceania. Not only has polarisation been increasing by most accounts, it also plays out quite differently today than it did in the past – new technologies like social media and artificial intelligence play a significant role in driving it.

Exclusively surrounding yourself with people who support your views and ideology is not easy in real life, where you are still likely to be confronted with people you disagree with in all sorts of settings. Online, however, it is far easier to follow and interact only with those who share your opinions and values, and those who do not can be blocked with the click of a button. Research has shown that people tend to feel much more comfortable being inflammatory and offensive in online conversations, expressing sentiments and behaving in ways they would find unacceptable in real life – it is not hard to see how this can lead to increased dehumanisation and vilification of ‘the other side’ online, further escalating the polarisation process.

The companies behind these social media platforms, like Meta (formerly Facebook), YouTube or TikTok, benefit from keeping users engaged on their platforms, continuously liking, commenting, and consuming content, so the sites’ algorithms are built specifically to understand what and whom a given user likes to engage with, in order to ‘feed’ them more of that content. The feedback loop created by these algorithms makes it easier and easier for users to consume more of the content they already engage with – not a big issue when this means being shown more cat videos, but highly problematic when it means the algorithm is suggesting increasingly extremist and polarising content to its users, effectively creating a perfectly sealed echo chamber.
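
To make this mechanism concrete, here is a deliberately simplified sketch of an engagement-driven recommender – a hypothetical toy, not any platform’s actual code, with the topics and weighting scheme invented for illustration. Every click raises the weight of a topic, so the feed gradually narrows toward what the user already consumes.

```python
import random
from collections import defaultdict

def recommend(weights, catalogue):
    """Pick a topic in proportion to past engagement, then an item from it."""
    topics = list(catalogue)
    # +1 smoothing so topics the user has never clicked keep a small chance
    scores = [weights[t] + 1 for t in topics]
    topic = random.choices(topics, weights=scores, k=1)[0]
    return topic, random.choice(catalogue[topic])

# Toy catalogue: topics mapped to items (stand-ins for posts or videos).
catalogue = {
    "cats": ["cat video 1", "cat video 2"],
    "politics_left": ["left post 1", "left post 2"],
    "politics_right": ["right post 1", "right post 2"],
}

weights = defaultdict(int)  # engagement history, empty at first

# Simulate a user who only ever engages with one political leaning.
for _ in range(1000):
    topic, item = recommend(weights, catalogue)
    if topic == "politics_left":   # the user's sole interest
        weights[topic] += 1        # engagement feeds straight back into the weights

print(dict(weights))  # one topic ends up dominating the recommendations
```

Even this toy loop ends up showing the user almost nothing but their one preferred topic; production recommender systems are vastly more sophisticated, but the self-reinforcing dynamic is the same.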

But the issue is not just that users on social media platforms are being shown content that confirms their existing biases and encourages them to only interact with those posting and consuming similar content. It is also the fact that much of this content is factually incorrect mis- and disinformation, created and spread for the purposes of promoting specific ideological agendas – and technologies like social media and AI are helping both create and spread such content.

The recently escalated conflict in Gaza is a prominent example of online misinformation spreading around an emotionally and politically charged event, with the platform X (previously Twitter) hosting a large number of posts containing misinformation – made worse by the platform’s confusing system of providing verification badges to any user willing to pay for one, and by its dismantling of the fact-checking and reporting mechanisms previously in place. While X does currently have a Community Notes feature, where users can provide additional context and fact-checking for posts on the platform (visible only once enough users have approved it), Elon Musk did take away the option to report posts containing misleading information, abuse, or hate speech and have them reviewed by human moderators. Musk has also fired entire content moderation and ethics teams, prompting concerns that the moderation system currently in place at X is woefully unequipped to handle the volume of mis- and disinformation and other harmful and illegal content proliferating on the platform – for example, in relation to ongoing global conflicts.

Some of the content created and circulated by X users has included years-old video clips from Aleppo, Syria, footage from the combat video game Arma 3, and footage of fireworks in Algeria, all presented as recent footage from Gaza. Similar misinformation has been, and is being, spread regarding Russia’s war in Ukraine. Deepfakes (hyper-realistic, AI-created videos) of both Volodymyr Zelenskyy and Vladimir Putin are widely circulated online.

According to the Center for Countering Digital Hate (CCDH), Meta failed to flag 80% of posts hosted on its platform promoting the popular conspiracy theory that the U.S. is funding the development of bioweapons in Ukraine, while X had already logged 50,000 pieces of content containing misinformation about the war in Ukraine, and 75,000 fake accounts had been created to spread such misinformation in March 2022.

Users on social media, whether consciously identifying with an extremist movement or not, clearly have a massive opportunity to reach potentially millions of people with content containing mis- and disinformation, whether through spreading such content themselves, making fake accounts to target others, using text or visual materials created by AI, or simply posting already-existing content out of context. But it is not just individual social media users that drive the polarisation of the Internet – in some cases, online censorship and misinformation campaigns are state-backed efforts.

The phenomenon known as ‘splinternet’ refers to the splintering of the Internet into various divisions, resulting in many detached Internets inaccessible to each other, rather than one united, global Internet. Perhaps the most well-known example of the splinternet is the Great Firewall of China, an extremely strict and heavily enforced set of laws and technologies which blocks most foreign websites, modifies search results and removes content deemed unacceptable by the Chinese state. Similarly, Russia introduced the Sovereign Internet Law in 2019, aimed at increasing Internet surveillance of Russian citizens, increasing state control over Internet infrastructure, attempting to detach it from the rest of the world, and censoring online content.

Residents of countries like North Korea and Iran are almost wholly banned from, or unable to access, Internet services as an extension of strict state control and suppression.

While it is tempting to place the onus on individuals to ensure that their online and offline circles are not overly polarised, and that they are not consuming or spreading mis- and disinformation, it is important to acknowledge that such efforts play out against the backdrop of states and tech behemoths, some of whom have a vested interest in fuelling polarisation and propaganda. This context makes individual and collective efforts to fight polarisation highly complex and difficult – but it is by no means an excuse not to do this work. Part of this work also involves understanding how technologies like AI can not only play a role in supporting polarisation, but can also contribute to combatting it. While human moderators still play a crucial part in content moderation, curbing the spread of polarising and misleading digital content, algorithms can help optimise this process and work alongside human staff – potentially shielding them from engaging with traumatising and harmful content, which has long been an issue faced by moderators.
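
As an illustration of that division of labour, here is a minimal, hypothetical sketch – the scoring model and thresholds are invented for the example, not drawn from any real platform. An automated classifier removes clearly harmful posts, lets clearly benign ones through, and routes only the uncertain middle band to human reviewers, limiting how much disturbing material the reviewers must see.

```python
def triage(posts, score_harm, auto_remove=0.95, auto_allow=0.10):
    """Split posts into auto-removed, auto-allowed, and human-review queues.

    score_harm is any model that returns a probability in [0, 1] that a
    post is harmful; the thresholds here are illustrative, not real ones.
    """
    removed, allowed, review = [], [], []
    for post in posts:
        p = score_harm(post)
        if p >= auto_remove:
            removed.append(post)   # confident enough to act automatically
        elif p <= auto_allow:
            allowed.append(post)   # confident enough to leave alone
        else:
            review.append(post)    # uncertain: escalate to a human moderator
    return removed, allowed, review

# Toy stand-in for a trained classifier: flags one blocked phrase outright
# and is unsure about anything mentioning "conflict".
def toy_score(post):
    if "blockedphrase" in post:
        return 0.99
    return 0.50 if "conflict" in post else 0.05

removed, allowed, review = triage(
    ["hello world", "buy blockedphrase now", "news about the conflict"],
    toy_score,
)
print(len(removed), len(allowed), len(review))  # -> 1 1 1
```

The thresholds encode the trade-off at the heart of this approach: widening the automatic bands shields human moderators from more harmful material, but raises the risk of wrong automated decisions.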

We need to be mindful of the content we consume online, and of how algorithms work to show us more of what we already like. Now more than ever, it is important to engage critically with those holding views other than our own, and not to cordon ourselves off. Although a so-called bubble of people who share similar views can provide a safe space for critical thinking, development, and community-building, it is important not to content ourselves with this, but to muster the courage and energy to learn, and keep learning, about the whole spectrum of ideas and experiences.