Holistic Regulatory Approaches Rooted in Community Collaboration Can Better Regulate Extreme Speech 

Dr. Sahana Udupa is a Professor of Media Anthropology at the Ludwig-Maximilians-Universität München (LMU Munich), Germany, and founder of the Center for Digital Dignity, which aims to collaboratively imagine and foster enabling spaces of digital political expression. Dr. Udupa has conducted in-depth research on online extreme speech, disinformation, content moderation, decoloniality, digital cultures, and platform governance.

In this interview, Dr. Udupa shares her insight on the role smaller social media platforms play in the extreme speech ecosystem, potential alternatives to corporate content moderation systems, and the importance of community collaboration in addressing digital hate speech across the world.   

This conversation has been edited for length and clarity.

Deeksha Udupa: Who would you define as the key actors in the conversation around digital governance?

Sahana Udupa: I would define the key actors as state institutions, independent public authorities, social media companies, so-called traditional media communities, and multilateral international organizations such as the United Nations. 

We must remember that this is a planetary problem. Social media can be as toxic as nuclear waste. Therefore, an institutionalized global structure — under the guidance of organizations like the United Nations, for example — where social media companies regularly convene and address repressive assaults on online speech is imperative. This regular, global convening is especially important during critical events like elections, when disinformation campaigns funded by resource-rich political parties can create havoc.

DU: What role do you think social media platforms play in disinformation and the spread of online hate?

SU: Technology design features and business models not only benefit from but also contribute to the amplification of polarizing discourses and an uneven content moderation system. They have co-created online user cultures of compulsive interaction and confrontation that reward aggression. This has also benefited certain kinds of political parties.

Additionally, there is the problem of political advertising that violates election integrity principles, and of the algorithmically mediated recommender systems that platforms have built — these can deepen the wedges. There is a lack of transparency in corporate practices and a lack of access to data for researcher communities, especially for independent researchers. We have now been pushed into a post-API age, and the removal of key access points has also contributed to the rise in digital hate speech.

DU: What role do you think the state can and should play in digital governance? Do you think laws like the Digital Services Act in Europe can effectively hold platforms accountable?

SU: The DSA has its flaws, but it sets out some strict principles which are worthy of emulation, provided that there are sufficient safeguards to avoid state misuse. This is important because without procedural safeguards against state misuse, these measures can also take a different turn. Assuming that these safeguards are in place, some of these policy proposals are very important: the DSA establishes expectations of proactive risk mitigation and transparency, requires the timely removal of illegal content, and regulates online advertising and recommender systems. Its principle of proportionality assigns greater obligations to larger social media platforms, which means the largest companies carry the heaviest duties to implement these measures.

With the DSA, they also hope to create robust systems for notices and trusted flaggers, who are established through a vetting process. When a trusted flagger reports something to a social media company, the platform is compelled to act on the report immediately. I expect that this will draw upon the knowledge of these community intermediaries and could thus create a system where companies become more accountable. There are also provisions for independent auditing, codes of conduct, and independent dispute settlement bodies. Additionally, they’ve asked for access to data for bona fide research and have raised questions around the online interfaces and interactive architectures that can potentially change user behaviors.

Now that we know that some of these design features can actually contribute to polarization, can we establish user features that encourage people to change behaviors? That’s another policy measure the DSA wants to introduce. Finally, we need fair corporate practices for content moderation, which would mean removing the opaque and exploitative arrangements around outsourced labor. These are some measures that are quite important to implement with regard to large social media companies like YouTube, the Meta family of apps, TikTok, and other large platforms. These companies have huge user bases, and thus huge revenue bases, so they have the resources and the obligation to implement these measures.

DU: Oftentimes, when states enact legislation on digital governance, these laws are used to silence dissent. Why is it so difficult for states to enact thoughtful and effective legislation on digital governance? 

SU: Ultimately, ruling governments always have the incentive to protect their interests. Dissent is not something that authoritarian regimes like to engage with. These regimes believe that they know what is right for the people and generally feel that dissent is misguided. Therefore, there is a hostile framing of dissent as anti-national or gullible and susceptible to liberal values that have no standing in that particular context. This is the way you dismiss dissent. 

Some of these measures can be very helpful for dismissing dissent. The issue is that repressive governments have begun to mirror stricter regulatory models — adopted in countries with stable democratic systems — for the purpose of exerting authoritarian control over speech in their own countries. Regulatory models like the DSA are seen as benchmarks, so these strict measures are then implemented in other contexts. Therefore, it is very important that we monitor how governments take cues from stricter regulatory models and use them to silence dissent at home. And when you question them, they say that because this country has already done it, it’s become the global standard. So again, I think we have to monitor how repressive regimes are misusing regulatory models.

DU: How do you think Western advocacy on digital harm and the intersection of technology and democracy includes the Global South/Global Majority? How does it often do a disservice to the Global Majority? 

SU: These discussions around online harm are predominantly top-down and thus come with normative frameworks that do not consider the historical and ongoing complicity of Western power in this crisis. This is what we try to highlight in the extreme speech framework. We call for deliberate attention to people’s lives and the categories and structures of communities to develop policies, instead of relying on normative and generalized language imposed from the outside. And that imposition is going to be unsettling; it will not be embraced.

This is the colonial framing of the problem: the West as the seat of calm rationality that can export these media development and literacy models to the rest of the world. That particular concept does not do justice to the complexity of how people are navigating online worlds. It also very alarmingly conceals the West’s historical and ongoing complicity in this crisis.

There is another aspect of this conversation that I must speak on: there are many efforts for multi-stakeholder engagements, and there is lip service to include the Global South in these debates, but who is leading the voices of the Global South? What kind of solidarity are we actually seeing in the tech governance space? We have to be constantly aware of who is speaking on behalf of the Global South, especially because in recent years we’ve also witnessed the manipulation of the term ‘decolonization’ for nationalist and geopolitical interests. 

Finally, digital harms and measures against digital extreme speech and disinformation should always have a very detailed and context-sensitive understanding of what’s happening on the ground. The ambiguity of extreme speech is very important for us to understand. If you bring a framework of digital harm without consideration of what is going on in that particular context, this will lead to varying degrees of repression, sometimes even unintended repression. 

Extreme speech can also be a way to challenge the status quo and speak back to authorities. Policy measures should then be sensitive to which groups engage in these speech forms: what are their positions within the specific power structures of society? Without this understanding, a Western normative framework can backfire. 

DU: You are currently working on a cross-national study on contentious speech on small social media platforms. Do you think smaller, less mainstream social media platforms would be easier to regulate and moderate for hate speech and misinformation? 

SU: We are still in the process of finding out more about small social media platforms in a cross-cultural context. First and foremost, we must remember that large multinational social media platforms play a major role in amplifying hate speech and disinformation, and therefore stricter regulations are warranted. The point of this project is to reinforce that we should also be diligently observing smaller and more niche platforms which have become breeding grounds for hateful subcultures. Ethnographic work, for example, has demonstrated that hate speakers have used or repurposed smaller platforms by hopping between them to avoid the regulatory gaze. 

Online games, 4chan, Reddit communities, and other niche spaces have also spawned toxic subcultures of hate. Migration is very important here: during and after the 2016 U.S. presidential election, there was an exodus of right-wing conservatives and Trump supporters to Parler — a self-styled free speech platform — because these actors were banned by Twitter (now X). Fact-checkers in Brazil have noted a similar migration of right-wing supporters to Parler in July 2020. However, after Apple and Google blocked the app on their app stores, thus restricting access, these users returned to larger platforms.

Small platforms are also used for layered recruitment, meaning you attract right-wing users through mainstream platforms by providing politically correct messaging. Once you attract these sympathizers, you draw them to smaller platforms where you can basically say anything. In Europe as well, anti-establishment and right-wing celebrities migrated to smaller platforms when they were de-platformed by major social media companies. This dynamic shifted, however, when Twitter turned into X under Elon Musk. Many of these actors are now finding a free platform to say whatever they want to say. With that being said, smaller social media platforms are still important to observe because they still offer unique spaces for mobilization.

DU: Various people have mentioned to me the role of encrypted messaging apps like WhatsApp in the disinformation ecosystem. How does the spread of disinformation here differ from the spread of disinformation on social media platforms like TikTok and Facebook?

SU: The most important feature of encrypted messaging services is the way they manifest community-based trust and social intimacy. This has been instrumental in spreading politically charged messages. I have defined this as ‘deep extreme speech.’ Deep extreme speech centers community allegiances and distribution logics and frames extreme content as a part of the everyday. Extreme speech then is no longer abnormal but a part of your daily diet. 

Political parties have enlisted WhatsApp groups to create and disseminate extreme content in intrusive forms, and that is what makes it so different. It’s subterranean, intimate, and intrusive. Exclusionary content is jumbled with the good morning messages, the pleasant messages, the caring messages. Deep extreme speech simulates the lived rhythm of the social.  

These messages become widespread when political parties connect these WhatsApp groups through volunteer networks and party mediators, some of whom are paid. They connect these WhatsApp group discourses with other social media channels and mass media. This particular kind of networking then infuses virality into the architectural features of encrypted messaging apps. We think end-to-end encryption means it’s contained, yet virality is still possible. 

My final point is that there is growing skepticism about what people receive on WhatsApp. This is not happening everywhere, but it is happening in some parts of the world, especially where disinformation and fact-checking initiatives have become quite prominent. Still, these intrusive and intimate networks are available for political parties to offer twisted narratives. It’s the intimacy of WhatsApp and other encrypted messaging apps that infects extreme speech ecosystems in very specific and intentional ways.

DU: In your paper, “Ethical scaling for content moderation: Extreme speech and the (in)significance of artificial intelligence,” you highlight the limitations of existing machine learning content moderation methods to both record and counter digital hate speech. You also propose “ethical scaling” as an alternative method. Why is AI ineffective in moderating hate speech online?

SU: Well, I wouldn’t say AI is completely ineffective: its use is often justified because of the sheer volume of digital exchanges. One of the key challenges is the quality, inclusivity, and scope of training data sets. Hate speakers evade automated detection through the clever use of words and symbols. Also, these hateful expressions are constantly shifting, which is another challenge. Today, a significant number of hateful expressions and hate-based disinformation do not contain obvious watchwords. If these hate speakers feel that the regimes are backing them and that they have a level of impunity, they will be very explicit and even draw upon this hate economy. Yet when there are stricter regulations, or when they’re unsure of how the state would react to their hateful expressions, they generally use indirect expressions, complex cultural references, and local idioms. For example, the hashtag #religionofpeace is steeped in sarcasm and is Islamophobic, but it is very hard for a model to even understand that this can be harmful.
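
To make the evasion problem concrete, here is a minimal, hypothetical sketch (not drawn from any real moderation system or from Dr. Udupa's research) of how a keyword-based filter catches explicit slurs but misses sarcastic or coded expressions such as #religionofpeace:

```python
# Hypothetical sketch: a keyword-based filter flags posts containing explicit
# watchwords, but misses coded, sarcastic, or idiomatic expressions entirely.

EXPLICIT_WATCHWORDS = {"vermin", "parasites"}  # stand-in list for obvious slurs

def keyword_flag(post: str) -> bool:
    """Flag a post only if it contains an explicit watchword."""
    tokens = post.lower().split()
    return any(token.strip("#.,!?") in EXPLICIT_WATCHWORDS for token in tokens)

posts = [
    "These people are vermin and should leave.",                       # explicit: flagged
    "Another peaceful week, thanks to the #religionofpeace crowd...",  # sarcastic dog whistle: missed
    "Guests who overstay never know when to go home.",                 # indirect idiom: missed
]

for post in posts:
    print(keyword_flag(post), "->", post)
```

A classifier trained on yesterday's explicit watchwords has the same blind spot once hateful expression shifts into idiom, sarcasm, and local references.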

There is also language-based asymmetry and an uneven allocation of corporate and state resources for content moderation. This hits minoritized language communities especially hard. While there is some progress in the curation of non-English datasets and the development of classifiers in minoritized languages, they still do not compare to those of the major languages of the world. And again, the problem of constantly shifting hateful language adds another layer of complexity because classifiers become outdated very quickly.

Finally, there is still the foundational question of what constitutes hate speech or problematic content. This is a decision that must be made responsibly and requires intimate cultural context. The field of content moderation requires an understanding of the ever-changing cultural, historical, and political nuances of specific states. This can only happen with consistent collaboration with local communities. 

DU: Can you elaborate more on “ethical scaling” as a potential alternative to current models? How and why is this a more inclusive approach? 

SU: So it is very clear that context is incredibly important for content moderation, but the question then becomes: how do you bring in this context? In the AI for Dignity project, we have proposed community-centered models by creating dialogue spaces among fact-checkers, ethnographers, and AI developers. We reached out to fact-checkers as community intermediaries, but with robust processes and protocols, we could also connect with other community partners. Community-based iteration is important because it is one way to ensure that context-sensitive and inclusive data sets feed into AI models and their categories. Categories are very important in AI development, and developing categories is innately an epistemological and political process. These categories should represent the cultural, historical, and political nuances of individual societies, and for this to happen, community collaboration becomes very important.

This process is very slow, very expensive, and very exhausting. In our project, there was no agreement among fact-checkers from the same country on how an expression should be categorized. Should it be classified as dangerous speech, or as an exclusionary extreme speech expression? Yet still, we must be persistent. I feel that content moderation should be this slow, expensive, and exhausting, which then raises the ethical question of accountability and resource allocation in corporate content moderation.
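
As a rough, hypothetical illustration of the disagreement problem (the labels and annotations below are invented for this sketch, not data from the AI for Dignity project), chance-corrected agreement between two annotators can be quantified with a measure such as Cohen's kappa:

```python
# Hypothetical illustration: two fact-checkers from the same country label the
# same ten expressions, and Cohen's kappa measures how far they actually agree
# beyond chance. All labels and annotations here are invented for the sketch.

from collections import Counter

LABELS = ["exclusionary", "dangerous", "neither"]

annotator_a = ["exclusionary", "dangerous", "dangerous", "exclusionary", "neither",
               "exclusionary", "dangerous", "exclusionary", "neither", "exclusionary"]
annotator_b = ["exclusionary", "exclusionary", "dangerous", "dangerous", "neither",
               "exclusionary", "exclusionary", "exclusionary", "dangerous", "exclusionary"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[label] * counts_b[label] for label in LABELS) / (n * n)
    return (observed - expected) / (1 - expected)

raw = sum(x == y for x, y in zip(annotator_a, annotator_b)) / len(annotator_a)
print(f"Raw agreement: {raw:.2f}")
print(f"Cohen's kappa: {cohens_kappa(annotator_a, annotator_b):.2f}")
```

A low kappa score signals exactly the kind of category instability that makes slow, deliberative, community-based annotation necessary rather than optional.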

These are the ethical questions. We’ve proposed this concept of ethical scaling to highlight these issues of transparency, accountability, and community collaboration. We call it ethical scaling because I take issue with this concept of scaling. Scaling is generally the foundational justification for AI use — AI can scale everything up.

We argue that scaling is not just technical but deeply political. It is political precisely because of how and whom it involves as human annotators, the extent of resources and imaginations of technology that guide this process, and the deeper colonial histories of market power, racialization and rationality within which this entire thing is embedded. Therefore, it has to be disrupted. Ethical scaling combines the critique of digital capitalism with a proposal for a responsible community-centric process. 

DU: How would ethical scaling be implemented? 

SU: In theory, social media companies should be investing in these processes. It is also important to highlight that, in suggesting community participation and asking for algorithmic auditing for greater accountability, we should not avoid questions about corporate models and how these platforms operate. Ethical scaling should be an essential process in how these companies operate. It combines a critique of capitalist data hunger with a proposal for a responsible process of community participation in vital areas such as content moderation. Clearly then, social media companies should be implementing ethical scaling.

DU: In countries where the state is the greatest beneficiary of an unregulated social media ecosystem, how can we incentivize social media platforms to be more diligent?

SU: The major issue is that large social media companies monetize online interactivity and engagement. There is evidence that social media platforms profit from polarizing narratives. The business model of monetizing user engagement then undermines the rationale of moderation. We have to highlight this. The inherent models of these larger social media companies must be interrogated.

In the case of state actors benefiting from an unregulated social media ecosystem, it is vital that we consistently apply pressure and guide these social media companies to comply with global standards of content moderation and human rights protection. It is important to convene these social media companies and remind them of these standards, because on-the-ground executives of social media companies oftentimes have no idea what to do. They’re operating in the dark and will sometimes escalate issues to their headquarters. So we have to ask: what are the organizational processes within these companies for escalation and decision-making? Researchers have no idea. These processes are completely out of reach for the research community, so access for researchers is also key.

We should also remember that disinformation and extreme speech are shaped by deeper histories of prejudices, conflicts, and animosities. Therefore, we cannot take a purely technology-centric approach; it can only take us so far. Social media companies should move away from technology-centered solutions and should be subjected to holistic regulatory approaches. Platforms should also cooperate in tackling cross-media manipulation, because that’s a major issue as well. Cross-media manipulation cannot be dealt with by one single platform. These hate speakers often escape regulatory filters by building networks of migration, so extreme discourse gets amplified across different social media platforms through proxy workers, clickbait workers, and also volunteers who very cleverly circumvent the regulatory protocols of individual social media companies.
