Meta’s decision to dismantle its fact-checking program with immediate effect will result in the accelerated abuse of the company’s technologies. This abuse will invariably lead to the loss of life, adding to the blood already on Mark Zuckerberg’s hands. The assessment is neither fatuous nor false; it is simply the forward projection of what the Global South has repeatedly confronted in the past. Such assertions were made years before the company’s technology became commonly associated with significant harm, violence, and democratic decay in the West.
Zuckerberg’s decision is driven by the exigencies of American politics and a desire to curry favor with the incoming Trump administration. Ironically, it was President-elect Trump’s role in instigating the violent and unprecedented Capitol Hill insurrection of 2021 that spurred Meta’s investments in fact-checking. Yet new appointments to Meta’s Board and Global Policy Team now signal the company’s effort to ingratiate itself with an incoming U.S. administration already defined by the production and promotion of falsehoods at an industrial scale.
Astonishingly, the revised policies on Hateful Conduct detach the instigation of hate online from its violent offline consequences, despite over a decade of bloody evidence establishing a strong correlation. Zuckerberg’s announcement gaslights academics, researchers, and civil society, many of whom have been victims of violence themselves, by suggesting, without providing evidence, that the existing architecture of fact-checking is “politically biased” and has “destroyed trust,” and that a system akin to Twitter/X’s Community Notes is the way forward. This is an unprecedented and fundamental policy reboot, presented without any indication that the company has the capability to carry out what Zuckerberg says it will now do.
For many in the Global South, this expediency is neither surprising nor novel. In January 2019, senior Facebook officials, including the Public Policy Director for India and South & Central Asia, met with former Sri Lanka President Mahinda Rajapaksa. The meeting caused consternation amongst civil society groups: coming less than a year after the country’s worst-ever anti-Muslim riots, it gave succor to the ruling party’s violent Sinhala-Buddhist nationalism. Facebook was directly implicated in the spread of content that incited hate and gave rise to island-wide violence against Muslims and their property. Two people were killed, and dozens were seriously injured.
A year later, the same official resigned because of a Wall Street Journal article that alleged she had told other Facebook staff in India that punishing violations by politicians from Prime Minister Narendra Modi’s party, the Bharatiya Janata Party (BJP), “would damage the company’s business prospects in the country.”
India remains Facebook’s largest market. In 2021, New York Times reporting on a tranche of documents surfaced by Facebook whistleblower Frances Haugen revealed that the company lacked sufficient resources to tackle the problems it had created, particularly anti-Muslim content. The same piece noted Facebook’s well-documented complicity in the generation of violence against the Rohingya Muslims in Myanmar, the automated mass subscription of users to groups in Sri Lanka that exposed them to hateful content, and how militia groups in Ethiopia successfully used the platform to coordinate violence. The examples are endless, and each features significant violence, including death and destruction.
Meta’s technology occupies an integral role in our lives and is inextricably entwined with almost every aspect of society: politics, industry, commerce, the arts, and activism. The same technology that fuels violence is relied upon by those who bear witness and document for posterity. The recent announcement, however, will significantly debilitate the use of the company’s products to strengthen democracy. It explicitly favors autocracy, enabling powerful politicians and their proxies who find fact-checking inconvenient to promote falsehoods and disinformation without meaningful guardrails.
The shift is significant: formal systems and professional fact-checking are replaced by crowd-sourced moderation and informal feedback. Centralized oversight of platform integrity gives way to a distributed model dependent entirely on motivated users. Global standards grounded in rigorous evidence are sunset in favor of purely subjective presentations of reality. Each of these shifts tells a story of platform entropy.
Professional fact-checkers operate within established methodological frameworks, employing systematic approaches to verification that include source triangulation, expert consultation, and standardized evaluation criteria. The shift to crowd-sourced moderation introduces significant variability in verification standards, potentially privileging popular opinion over factual accuracy. Centralized oversight, despite its limitations, provides a coherent framework for content moderation decisions, enabling systematic tracking of misinformation patterns and coordinated responses to emerging threats.
The distributed model, while potentially more scalable, produces significant variability in enforcement standards and reduces the institutional capacity to identify and respond to coordinated disinformation campaigns. The transition from global standards to user-generated interpretation poses serious challenges for platform integrity, especially in majoritarian democracies where swarms of users can be employed by the state and autocrats to stifle inconvenient truths published by investigative journalists or civil society. These factors will, individually and collectively, lead to the deterioration of shared facts and realities, accelerating the fragmentation of public discourse into isolated information bubbles that, in turn, erode social cohesion.
The implications for Global South markets will be profound and immediate, undoing years of work by researchers, academics, policymakers, and professional fact-checkers, who partnered with Meta to support and strengthen platform safety. Historically marginalized groups, identities, and communities will suffer the most, and it will be far worse than in the past, given that entrenched network dynamics and algorithms now aid the rapid spread of hate and extend the reach of harm. Zuckerberg himself admits that the new moderation framework (or lack thereof) “means we’re going to catch less bad stuff.” What he means is that our lives will be at risk because of content, and on this count the company’s record offers no defense. It gets worse.
Meta’s new policy of minimal interference is pegged to the maximal use of artificial intelligence and machine learning for content moderation. This will undoubtedly fail. Doctoral research into the instigation of anti-Muslim hate after Sri Lanka’s 2019 Easter Sunday terror attacks highlighted the widespread use of violative memes to inflame tensions and influence public opinion. Memes require deep socio-political, cultural, and linguistic familiarity to decode, with visual presentations that appear humorous often telegraphing significant violence. Despite the billions of dollars already invested in AI, Meta has no technology that can understand memes and the intent of their producers, especially leading up to and during times of offline unrest, when their production surges on a massive scale. Coupled with other local-language considerations affecting hundreds of millions in the Global South, the new policy pivot will enable the coordinated spread of harm in ways that escape automated oversight, and even manual reporting. Nothing good will result.
We are looking at a near future where Meta’s platforms are defined by coordinated networks of state-aligned accounts, untrammeled professional trolling operations with significant resources, the systematic manipulation of platform architectures, and sophisticated falsehoods produced at scale. All of this will amplify, in an unprecedented manner, what’s already very familiar to Global South human rights activists: coordinated disinformation campaigns, manufactured evidence of alleged wrongdoing, false attribution of violence or criminality, and manipulated media targeting minorities. Countries struggling with a democratic deficit will henceforth find that Meta aids the decay of human rights far more than it helps stymie autocracy.
As we witness the deliberate dismantling of safety mechanisms that took years to build, drawing on precious time, labor, insight, research, and data, it’s clear that Meta’s new policy isn’t just about reducing moderation costs or appeasing MAGA in the U.S.; it is about consciously choosing to be complicit in future atrocities whilst maintaining plausible deniability. A company that once promised to connect the world has instead chosen to profit from its fracturing, one life, one community, one violent conflict, and one genocide at a time.
(Dr. Sanjana Hattotuwa was formerly the Research Director at The Disinformation Project and the founder and former editor of Groundviews, Sri Lanka’s first citizen journalism website. He has written about digital security, web activism, new media literacy, and online advocacy in Sri Lanka and beyond.)