How is Artificial Intelligence Fanning the Flames of Hate and Extremism?

The proliferation of hatred, violence, and extremism on the internet has gained a formidable new ally: artificial intelligence (AI). Generative AI models, especially large language models like OpenAI’s ChatGPT, have shown an unparalleled capacity to produce human-like text and compelling audio-visual content that can amplify hate speech, extremism, and societal divisions on a large scale. 

Individuals with even minimal literacy or subject-matter expertise are generating polished and persuasive content — including blogs, news articles, and social media posts — that promotes harmful ideologies. This has accelerated the global dissemination of disinformation, hate speech, violent rhetoric, and even terrorist propaganda.

Beyond generating text, terrorist and extremist groups are also using AI to produce audio-visual content that propagates their ideology and agenda. In July 2023, Tech Against Terrorism, a United Nations-backed international initiative to combat online terrorist activity, unearthed a messaging-app channel widely sharing AI-generated neo-Nazi and antisemitic images.

In India, the world’s second-largest online market behind China, AI-generated images give visual form to anti-Muslim conspiracy theories. These pictures are then circulated on social media platforms to promote Islamophobia and bogus narratives like “love jihad,” which falsely accuses Muslims of waging a religious war by entrapping Hindu women into marriage, a claim that credible media investigations have repeatedly debunked.

In 2015, a ground investigation by the news websites Cobrapost and Gulail busted the love jihad bogey. In a year-long undercover operation, ministers and legislators of the ruling Bharatiya Janata Party (BJP), along with militant Vishwa Hindu Parishad leaders, were recorded on video speaking about the organized use of coercion, violence, blackmail, and false cases to split up Hindu-Muslim couples.

More recently, during the political upheaval in Bangladesh, right-wing social media handles from India and around the world were found circulating AI-generated images that falsely exaggerated reports of violence against Bangladeshi Hindus.

Some terrorist organizations are issuing guidelines to their followers and members explaining how to use AI for propaganda and disinformation. In 2023, Tech Against Terrorism found evidence of a pro-Islamic State tech support group sharing an Arabic-language guide on securely using generative AI tools. It has also identified AI-produced posters shared by al-Qaeda support channels, antisemitic AI-generated imagery shared by English-speaking far-right groups, pro-Islamic State affiliates using AI to translate propaganda into multiple languages, and al-Qaeda affiliates leveraging AI for propaganda content of their own.

In 2023, Daniel Siegel, a student at Columbia University, and Bilva Chandra, a policy fellow at the RAND Corporation, wrote a paper highlighting an online campaign that used deepfake content to promote the 45-year-old Pakistani Islamist preacher Muhammad Qasim, who claims to communicate directly with the Prophet Muhammad and God through his prophetic dreams and who warns that an apocalypse is imminent. The audio-visual content not only spread Qasim’s extreme viewpoints but also suggested that anyone who did not accept him should expect divine punishment.

Qasim’s content was produced in several languages using AI and disseminated on social media platforms like TikTok, YouTube, and X/Twitter. It was an organized effort “to enhance Qasim’s stature and rally support from fundamentalist Muslims worldwide.” The content identified Qasim as the sole legitimate Islamic leader while denouncing predominantly Muslim nations such as Saudi Arabia and Turkey as ‘apostates.’ Influential Islamic scholars from across the world and prominent politicians, including former U.S. President Barack Obama, were falsely depicted endorsing Qasim as the Mahdi, the prophesied messianic figure of Islam.

By blending “authentic audio from various Imams with synthetic voices within a single video,” the group “obscures any inconsistencies in the fake audio, making it more challenging for listeners to distinguish between the real and the synthetic.” Siegel and Chandra argued that this sort of sustained deepfake engagement “represents the new frontier” of digital extremism.

A VOX-Pol report titled AI Extremism: Technologies, Tactics, Actors stated that the November 2023 Dublin riots were “accompanied, and followed, by intense social media activity wherein AI-generated content frequently appeared on far-right accounts and channels.” 

The report also documented the use of generative AI content in disinformation operations in West Africa aimed at fuelling radical sentiment against political opponents. In early 2023, during Nigeria’s presidential election, the electorate was bombarded with election-related disinformation: AI-generated deepfakes, as well as paid posts falsely linking candidates to militant or separatist groups, circulated at high frequency on X, Facebook, and TikTok.

The Global Reach of AI-Generated Extremism

Studies have shown that AI’s ability to craft tailored narratives can deepen societal divisions and fuel extremism, resonating with disaffected populations and furthering the rise of right-wing populism. Research from the Combating Terrorism Center at West Point highlights how AI tools like GPT-3 can mimic far-right extremist rhetoric and further radicalize individuals. An August 2023 report from Australia’s eSafety Commissioner raised concerns that terrorists could use generative language models to finance “terrorism and to commit fraud and cybercrime”; these models could also allow “extremists to create targeted propaganda, radicalize and target specific individuals for recruitment, and to incite violence.”

The Global Internet Forum to Counter Terrorism (GIFCT) recently published a report highlighting three ways in which generative AI technology can potentially be used for online extremist activities: content creation; deepfakes and misinformation; and automated interactions, or bots, that engage with users to spread extremist content and amplify its reach.

The Path Forward: Regulating AI to Combat Extremism

Extremists and fundamentalists around the world are leveraging AI to craft targeted textual and audiovisual narratives that appeal to specific communities along religious, ethnic, linguistic, regional, and political lines. AI has become an effective recruitment tool, exploiting individuals’ psychological vulnerabilities with precisely tailored rhetoric, and it has given these groups the capability to produce and disseminate false and dangerous propaganda at unprecedented scale.

AI’s immense potential as a tool for hate is a global threat, and nothing less than global cooperation and international governance can meet the challenge. That means robust, continuous collaboration among tech companies, governments, civil society, research institutes, and international bodies. Governments must establish legal and regulatory frameworks to prevent the misuse of AI, and international legal norms and standards must be developed in parallel.

Governments worldwide have only just begun to recognize the need for legal and regulatory frameworks that prevent AI from being used to spread extremism and fake news. At a May 2023 U.S. Senate hearing, there was widespread agreement about the immense risks of disinformation, mass manipulation, biased decision-making, and privacy invasions. Lawmakers and outside witnesses, including leaders from the AI industry, called for tougher disclosure requirements, and some lawmakers argued for setting up a new AI regulatory agency. The Commerce Department’s National Telecommunications and Information Administration (NTIA) is also running a process to solicit ideas on AI oversight.

Big tech companies, including OpenAI, Microsoft, Google, Meta, and xAI, must also be held accountable. They must make substantial investments in the human and technological capabilities needed to monitor and weed out extremist content.

Robust safeguards must be built into AI-enabled platforms to block any prompt or attempt to generate extremist and violence-inducing content. Only stringent legal and regulatory frameworks, combined with constant technological and human oversight, can ensure that AI serves humanity’s best interests rather than becoming a tool for division and violence.
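To make this concrete, the sketch below illustrates one way such a safeguard can work: screening both the user’s prompt and the model’s reply with a moderation classifier before anything is returned. It is a minimal illustration, not a production system; it assumes the OpenAI Python SDK (v1+) with an API key set in the environment, and the model name and refusal messages are placeholders.

```python
# Minimal sketch of a prompt- and output-level moderation gate.
# Assumes: OpenAI Python SDK (v1+) installed and OPENAI_API_KEY set;
# the model name and refusal messages below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_with_guardrail(prompt: str) -> str:
    # Step 1: screen the incoming prompt with a moderation classifier
    # (categories such as hate and violence trip the `flagged` bit).
    if client.moderations.create(input=prompt).results[0].flagged:
        return "Request refused: the prompt was flagged by content moderation."

    # Step 2: only a prompt that passes screening reaches the model.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    reply = completion.choices[0].message.content or ""

    # Step 3: screen the output as well, since adversarial prompts can
    # slip harmful content past input-only filters.
    if client.moderations.create(input=reply).results[0].flagged:
        return "Response withheld: the generated content was flagged."
    return reply
```

In practice, platforms layer classifier gates like this with model-level safety training and human review, since any single automated filter is easy for determined actors to evade.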

(Ashish Khetan is a lawyer and investigative journalist. He specializes in human rights law, constitutional law, and public international law.)
