The European Union (EU) wields significant influence on global regulation, a phenomenon often called the “Brussels Effect.” A prime example is the General Data Protection Regulation (GDPR), which transformed global norms around data privacy and protection, even for entities outside the EU. The Digital Services Act (DSA) and its counterpart, the Digital Markets Act (DMA), came into effect in early 2024, and there is growing speculation about their potential to reshape platform practices worldwide, particularly in addressing hate speech and disinformation.
On the other hand, some are concerned about a potential “dark Brussels Effect,” wherein companies could prioritize countering hate and disinformation in Europe at the expense of smaller markets. It is important to note, however, that underinvestment in content moderation in Global South/Global Majority languages is a problem that precedes the DSA, and there is no significant evidence that the problem was exacerbated in 2024.
As Big Tech platforms confront and align—or misalign—with different regulatory regimes across the world, it is critical for human rights, digital rights, and civil society activists to understand the various incentives and disincentives these companies face. This is especially true in the Global South/Majority, where platforms can align with oppressive and authoritarian governments or arm-twist elected officials into enacting favorable legislation, given the significant power they wield.
Authoritarianism, Online Hate, and Big Tech
In backsliding democracies and authoritarian regimes, marginalized communities and minorities often face a dual threat. Governments use regulatory powers to suppress dissenting voices, forcing platforms to remove content critical of those in power, while the same platforms fail to protect these communities from targeted hate campaigns. In countries like India, which has recorded the world’s highest number of internet shutdowns, the government often issues arbitrary takedown orders for social media content. As multiple reports have demonstrated, however, this does not prevent the proliferation of hate and disinformation online.
Big Tech companies have routinely violated their own terms of use, enabling hate and disinformation to thrive on their platforms. In 2023, Meta rejected its own Oversight Board’s recommendation to take down content from, and temporarily suspend, the account of then Cambodian Prime Minister Hun Sen. The recommendation came after Hun Sen uploaded a video to Facebook threatening violence against those alleging electoral malpractice in Cambodia.
Among the many major revelations made by whistleblower Frances Haugen, the lack of resources Meta (then Facebook) devoted to the Global Majority stood out. Even as the company faces a $2 billion lawsuit in Kenya over the platform’s use in calls for violence in Ethiopia, it has refused to accept accountability for its choices. Meta has also been accused of complicity in the genocidal violence against Rohingya Muslims in Myanmar.
X (formerly Twitter) has proved even less subtle in its pandering to authoritarian tendencies, especially since its takeover by Elon Musk. It picked fights with Brazilian and Australian authorities over its professed commitment to freedom of expression while quietly complying with Indian and Turkish government orders to shutter the accounts of opposition figures and critics.
Debates and discussions on disinformation and hate tend to employ the metaphor of a flood that overwhelms the safeguards and moderation capacities that can reasonably be expected of a platform. Yet the evidence shows that, even when confronted with clear harm emerging from the use of their platforms, Big Tech companies refuse to implement their own platform guidelines.
DSA’s Approach to Systemic Risks
Far from its portrayal as a stifling regulatory regime that curtails freedom of expression on platforms, the DSA largely encourages Big Tech companies to collaborate on key concerns and implement their own policies. It also requires platforms to be transparent about their decisions, giving users the right to challenge arbitrary content moderation actions. These decisions are made available in the DSA Transparency Database, a public resource that provides access to information about online platforms’ content moderation policies and enforcement actions.
As a whole, the DSA places two major expectations on Big Tech platforms categorized as Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs). First, they are expected to perform their own assessments of illegal content and systemic risks on their platforms and outline mitigation strategies; they must also allow external researchers to access data to verify or contradict their conclusions and outcomes. Second, they must comply with the risk mitigation measures recommended by the European Board for Digital Services.
The question of systemic risks is particularly crucial, as platforms are expected to assess and publish these risks. Many platforms published their risk assessments in December 2024, and the European Board for Digital Services, alongside the European Commission, is expected to publish its analysis of these assessments sometime in 2025. This provides an opportunity for civil society organizations to intervene directly and hold Big Tech accountable.
Given that tech platforms are currently rolling back their earlier promises and public-interest commitments, the DSA allows us to argue from a position of strength. It is critical that we understand how to leverage these strengths into meaningful action.
Global South/Majority Perspectives
Researchers and activists beyond Europe should maintain an active interest in the DSA, as the close alignment between authoritarian regimes and Big Tech makes local action against hate and mis/disinformation extremely difficult. While the DSA is an EU regulation and generally does not apply to non-EU entities, its principles can serve as a tool for civil society organizations worldwide to demand better online governance.
The question of systemic risks is critical as we see the growth of transnational right-wing extremist hate and disinformation. Hindu nationalist groups in India have fueled growing communal tensions in Bangladesh through disinformation. Anti-immigrant rhetoric has flowed freely between Europe and North America. However, these issues go beyond shared narratives and the astroturfed amplification of hateful tropes.
EU DisinfoLab’s report revealed how the India-based Srivastava Group used a wide and complex network of disinformation-filled fake websites to recruit Members of the European Parliament and launder its agenda. At the same time, these networks laundered Russian propaganda through content syndication. The direct alignment between Hindu nationalists and the far right in the U.K. helped fuel the 2022 riots in Leicester.
Diasporic communities, civil society organizations, and scholars from the Global South are uniquely positioned to understand the growing nexus between different hateful narratives operating across borders. The understanding of ‘systemic risks’ must, therefore, be multifaceted in order to be effective.
Non-EU stakeholders play a crucial role in addressing extraterritorial violative content by demonstrating how it poses a clear, systemic, and ongoing risk to the EU, whether through its association with established illegal content or its negative impacts on EU citizens and residents. They can support trusted flaggers—experts in identifying illegal online content like hate speech or terrorist material—by highlighting the intersectional and transnational hate experienced by diasporic communities. Additionally, stakeholders can contribute to audits and vetted research under the act by providing preliminary studies that justify specific requests for platform data access, thereby enhancing the evidence base for regulatory action.
Big Tech has started to align with the Trump administration and to dismantle its own safety measures against hate speech and disinformation. The DSA has become a key target, given the significance of the European market and the act’s stringent transparency requirements for platforms. It is imperative that we act urgently, leveraging the DSA to expose the scale of hate and disinformation these platforms amplify through their inaction.
(Aishik Saha is an Extremism and Disinformation Analyst (EDA) at the Centre for the Study of Organized Hate (CSOH). He specializes in networked disinformation and digital policy, particularly the Digital Services Act (DSA), the Online Safety Act (OSA), and online content regulation.)