ARTICLE BY GERDA KOVACS
Gerda Kovacs holds a BA in Sociology and Criminology from the University of Westminster and an MSc in Human Rights and Politics from the London School of Economics and Political Science. Her research focuses on the intersection of human rights, critical criminology, and technology.
Polarisation is a growing problem in nearly every region of the world. This should concern us: research shows that polarisation threatens democratic values and civic engagement, reduces social cohesion and can even act as a precursor to extremism.
Technological advances are making it ever easier to produce, share and consume polarising (and often factually incorrect) content, and to lock ourselves into perfectly sealed echo chambers on social media that filter out opposing views. The potential consequences are dire, all the more so when the polarisation in question revolves around a serious and divisive global conflict – such as the ongoing genocide in Gaza.
Hamas’s October 7 attack on Israeli civilians and the ongoing hostage situation have undeniably revitalised public awareness of the enduring Israel-Palestine conflict, but such awareness has its drawbacks. Amid widespread activism and engagement around the issue, polarising and fabricated AI-generated content and adversarial bots now threaten to overwhelm online discussions about Israel and Palestine. This endangers information integrity, peaceful coexistence and democratic participation among populations that are often naive about the sophistication and prevalence of AI tools.
In May 2024, researchers Ralph Baydoun and Michel Semaan revealed that they had been tracking significant numbers of pro-Israeli social media bots active across multiple platforms. The bots were programmed to detect and swarm pro-Palestinian posts and accounts in order to discredit them by spreading misinformation. In the same month, Meta announced that it had removed hundreds of AI-powered bot accounts linked to the Israeli company STOIC. These bots were part of Zero Zeno, a misinformation campaign run by STOIC that spread false and discrediting claims about the United Nations Relief and Works Agency for Palestine Refugees in the Near East (UNRWA) and labelled pro-Palestinian protesters as anti-Semites. OpenAI also banned accounts linked to STOIC after it emerged that the group had used OpenAI’s technology to produce misinformation. Yet combating propaganda on the internet is a constant game of whack-a-mole, reacting to an ever-evolving threat.
These crackdowns are a step in the right direction, but they appear to be the exception rather than the rule, with some companies actively enabling users to generate and spread misinformation. In November 2023, it was revealed that Adobe, whose software is widely used by publishers and graphic designers, was selling AI-generated images showing fake scenes from Gaza.
At the time of writing, a wealth of such images is still available for purchase from the Adobe website. Some, but not all, image descriptions include the note “Generated with AI: editorial use must not be misleading or deceptive”, but reliable enforcement of this rule seems highly unrealistic.
In fact, a study found that verified users on X (home to much of the viral AI-generated content about Gaza) account for 73% of viral misinformation about the Israel-Palestine conflict. The blue checkmark once signalled the verified identity of authentic public figures, but it became available for purchase after Elon Musk’s acquisition of the platform, making it even harder to verify the identity of sources of information there.
Social media platforms are not the only ones to blame, either: worryingly, online news outlets are also publishing AI-generated pictures without labelling them as such. Nor is the problem confined to images; AI-generated text and video are also being used to spread misinformation. In November 2023, a widely circulated online article claimed that Benjamin Netanyahu’s psychiatrist had died by suicide. Analysis by NewsGuard revealed that the website behind the article published almost exclusively AI-generated news, and that it had used AI to rewrite a satirical piece from 2010 in which a fictional psychiatrist of Netanyahu’s takes his own life.
In the same month, a deepfake video of the half-Palestinian model Bella Hadid went viral; in it, she appears to apologise for and retract her previous pro-Palestinian statements and criticism of Israel. Such large-scale degradation of information integrity only further inflames an already volatile situation, exacerbating existing polarisation and anxiety.
If tech companies and governments do not crack down on AI-enabled deception across the digital ecosystem, the misinformation landscape surrounding the Israel-Palestine conflict offers a bleak preview of our future. Beyond the AI-generated fabrications flooding the internet, there is the parallel problem of legitimate information being discredited as AI-generated misinformation. As the technology advances, any piece of information or material could technically be fake, and the average consumer has few tools to determine its legitimacy. This dynamic is known as the “liar’s dividend”: once the public learns that anything can be fabricated, bad actors can plausibly dismiss genuine evidence as fake. In such an environment of confusion and mistrust, claims that legitimate information is actually fabricated are hard to refute, inconvenient or opposing facts are easily waved away as fabrication, and polarisation deepens further.
Misinformation-driven polarisation also threatens peace, tolerance and the pursuit of positive social change, the impact of which is already visible.
In May 2024, Slovenia, Spain, Norway and Ireland all officially recognised Palestinian statehood. In the same month, Trinity College Dublin announced that it would meet its student protesters’ demands and divest from Israeli companies that appear on the UN blacklist. Rutgers University also struck a deal with student protesters, agreeing to create an Arab cultural centre, to hire additional staff with expertise in Palestinian affairs and to support ten displaced Palestinian students in studying at Rutgers. The Canadian government announced a new visa scheme aimed at reuniting its citizens and residents with their Palestinian families, and imposed sanctions on extremist Israeli settlers perpetrating violence against Palestinians.
These examples show the range of positive changes achieved by pro-Palestinian activists and civil society, progress that is now threatened by the current wave of misinformation about the conflict. Legitimate achievements can be discredited through smear campaigns, which can also target activist organisations and individuals to undermine their standing. Equally, people who encounter misinformation, or who dread having to counter it, may be put off from joining or supporting pro-Palestinian activism in the future. Persistent doubts about information integrity could reduce willingness to engage with any information about the conflict at all, putting a long-term damper on the meaningful activism and engagement that could otherwise deliver impactful change. Ultimately, vigilance against AI-driven misinformation is essential if pro-Palestinian activists are to ensure that the realities of Gaza are not lost in the digital fog of war.