How Advertisers and Investors Can Push Social Media Platforms to Curb Hate Speech and Disinformation

Frances Haugen is an advocate for accountability and transparency in social media.

Frances Haugen has become one of the most prominent advocates for accountability and transparency in social media. A former Facebook employee, she joined the company in 2019 but grew increasingly concerned with its prioritization of profit over public safety, eventually blowing the whistle and leaking internal documents that revealed critical issues within the platform. In this interview, Frances shares her insights on the shortcomings of current social media protocols for combating digital hate and disinformation, discusses Facebook’s conduct in India and its role in hate-mongering, and considers the role governments, advertisers, and investors can play in enforcing transparency across the industry.

This conversation has been edited for length and clarity.

Deeksha Udupa: When you first joined Facebook, you were involved with the company’s integrity operations. How would you describe the efforts social media companies make to counter disinformation and hate speech? 

Frances Haugen: I was specifically assigned to work on misinformation in places where there wasn’t third-party fact-checking. Meta loved touting how they had fact-checkers around the world, but in reality, they allocated the vast majority of the fact-checking budget to the United States and then to Western Europe. 

To give you context, Facebook’s misinformation policy was such that journalists had to pick up individual stories. They would find an individual link, fact-check that specific item, and then look for similar content. They didn’t look for patterns of disinformation. I knew a researcher who was working on a book to prove to Facebook’s policy team that certain motifs and patterns repeat, making it unnecessary to fact-check every individual post. Facebook just wasn’t set up to deal with the idea that a lot of what happens in ethnic conflicts is not unique to individual conflicts. Patterns match across conflicts.

A part of the problem is just resources. A core part of my complaint was that Facebook acts like the internet. It is the internet for large parts of the world that are fragile, because Facebook went into these places and paid a lot of money to become the internet but never invested in safety systems. And this is not just a Facebook problem but one that occurs across social media platforms. When advertising-supported products — in this case, social media platforms — play the role of critical communications infrastructure, you end up with very dangerous systems that don’t pay for any safety practices, because companies aren’t technically required to disclose anything and countries have little leverage to press for more safety measures.

DU: Why do you think there was such a lack of research within Facebook into misinformation outside of the U.S.?

FH: Facebook and all of these platforms are companies. Safety is a cost center, and what these companies ultimately get rewarded for is growth. Doing work on safety takes people, people who could be working on growth. Are they taking advantage of every single growth opportunity even when it’s to the detriment of the customer, the user?

What’s happening with TikTok versus Facebook is a great illustration of why we must put pressure on these companies. TikTok is under a huge amount of scrutiny because of policies related to China, which makes TikTok act more responsibly on these issues in the U.S. TikTok is being watched, and people demand more from them because of this.

I do a lot of work with investors, and what I’ve gathered is that investors care about systemic risks. They’ll have conversations about what the role of social media is in a specific country because it poses a systemic risk to your portfolio. No one, however, will engage with the question of how we actually put pressure on social media companies to be responsible around the world, and not just on specific issues that gain more traction.

DU: How do Facebook and similar companies benefit from fluctuating “information immune systems” in parts of the world where there is less skepticism about social media content? 

FH: A reason why people are so vulnerable to social media in different parts of the world is because a lot of these places are more family oriented. They trust their families. In the United States, we say we love our families, but we don’t have social networks that are based around families in the way that people do in other countries. When misinformation comes from people you trust, you tend to believe it a lot more than when it comes from people you don’t know. This is not unique to people in developing countries; studies have been done in the U.S. to show this. 

There has been a lot of research done on the practice of debunking misinformation, and the issue with misinformation is that it’s designed to be extremely attractive. I think the real solution is to push for social media that is structured at a more manageable, human scale. So if you see something in a WhatsApp chat, your first instinct will be to question where it came from.

In terms of Facebook, there are Facebook groups with half a million members, sometimes even more. In a lot of developing countries, the primary source of information for many people might be a Facebook group with 5 million members. You don’t get all the content that is delivered to that group: if the group produces 500 posts a day, you will most likely get sent around two posts. If there are biases in the algorithm that push forward more extreme content, you will end up in a situation where you don’t have any counter voices. That’s part of what an informational immune system is. It is the ability to course correct. As long as Facebook permits these groups consisting of millions of members, we won’t be able to have safe social media. At its core, it’s a design issue. 

DU: How does “engagement-based ranking” exacerbate the spread of misinformation and hate speech? What are alternatives to “engagement-based ranking” that can more effectively curb misinformation and hate speech? 

FH: Internal research has been done at Facebook concluding that having a conversation with someone who is really different from you is hard. We should be acknowledging this and rewarding content that pushes for these difficult conversations. It turns out that when you reward this type of content, you end up with less misinformation, less violence, less hateful content. However, it amounts to slightly less engagement. People spend less time on the platform. And Facebook doesn’t want that.
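To make that trade-off concrete, here is a minimal, purely illustrative sketch in Python. It is not Facebook’s actual ranking code; the posts, engagement numbers, and “divisiveness” scores are invented for the example. It only contrasts a pure engagement score with one that discounts content a hypothetical classifier flags as divisive.

```python
# Toy illustration (not real platform code) of how the ranking signal
# changes which post a member of a large group is most likely to be shown.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float    # reactions, comments, reshares (what engagement-based ranking rewards)
    divisiveness: float  # 0..1, score from a hypothetical hostile-content classifier

posts = [
    Post("measured local news update", engagement=40, divisiveness=0.10),
    Post("inflammatory rumor about a rival group", engagement=300, divisiveness=0.90),
    Post("post encouraging cross-community dialogue", engagement=25, divisiveness=0.05),
]

def engagement_rank(post: Post) -> float:
    # Pure engagement-based ranking: whatever provokes the most reactions wins.
    return post.engagement

def quality_adjusted_rank(post: Post) -> float:
    # Alternative: discount content flagged as divisive, trading a little
    # engagement for less amplification of extreme posts.
    return post.engagement * (1.0 - post.divisiveness)

print("Engagement-based pick:", max(posts, key=engagement_rank).text)
print("Quality-adjusted pick:", max(posts, key=quality_adjusted_rank).text)
```

Under the first score the inflammatory rumor wins; under the second, the measured post does, at the cost of some raw engagement.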

I think there should be a policy: countries should pass laws saying that any post that gets more than 5,000 views has to be put in a feed. We should be able to see it, and it shouldn’t matter if it was in a closed group. If 5,000 people saw it, it is no longer private. Because right now, Facebook will say closed groups are private, but if you have a million members in a closed group, how is it really private? As long as we can’t monitor the most viral content on the platform, there is no incentive to figure out new ways of ranking things. Currently, the only incentive is to make things as viral and engaging as possible.

If much of the hateful content distributed in private groups were more transparently available, these platforms would face more pressure to curtail the problem. Only in developed countries are there good social media monitoring systems that are independent of Facebook. Because we no longer have access to CrowdTangle or the Twitter firehose, we end up in a situation where there is no incentive for these companies to do anything other than what maximizes their own interests. So the question becomes how we set up assistive incentives, for example the state requiring these platforms to publish how much of their content gets reported as hateful.

DU: You then think about the other side of that coin: just as hateful content gets such extreme engagement, content that is created to debunk that hate and uplift the narratives of those experiencing a lot of this harassment and violence is shadowbanned. How can social media companies shift their fundamental design to fight against shadowbanning as well? 

FH: There should probably be a feed with the content that is getting demoted. This is one of the challenges: because we don’t get to inspect these systems, we don’t get to see what is classified as hate. If you don’t get to see what is defined as hate, then you end up with a situation where, with any given post, there’s a roll of the dice on whether or not the system can actually detect it. 

Platforms should be required to inform users when content is not eligible for amplification and the reasons for its ineligibility. That allows you, as a user, to come back and say, this account is run by peace workers, yet they got shadowbanned because they had too many posts that qualified as being “hateful.” One of the things I published was a report that said 70% of counterterrorism content in Arabic was getting labeled as terrorism content. That’s a real challenge — unless you have real transparency on these systems, you can’t help them get better. 

DU: Shifting the conversation more to India, Facebook didn’t designate the RSS and Bajrang Dal — known extremist organizations — as dangerous organizations in its internal reports. Facebook has repeatedly been accused of acting as an ally of the BJP and pushing its Hindu supremacist agenda. Whether it is Facebook or X, is it their deliberate strategy to reap benefits and profits while risking people’s safety? Are the rules different for Western countries than for developing countries? 

FH: These companies behave in accordance with the level of pressure that they receive. In India, they get a lot of pressure from the government. They don’t receive the same threat from civil society. Whereas in the U.S. or in Europe, there’s a lot more attention and sensitivity to these things. So, inherently, they treat Western activists in a different way. It’s less of a West versus East thing and more of a question of who are they more afraid of. In India, they’re more afraid of the government shutting them out than they are of the activists complaining that they’ve been blocked. 

Remember, they’re also dying in the United States and Western Europe. Facebook recently laid off a bunch of employees. They’re aware of their death spiral in the U.S. Facebook sees the writing on the wall that they’re going to lose the lawsuit against them regarding kids. They made all of the under-16 accounts private, which is basically them coming out and saying that they’re giving up on the next generation. I think that’s why they have invested so much in virtual reality, augmented reality, and AI. They know they have to pivot at some point because they have this dead weight.

The amount of money they make outside of the United States and Europe is nothing compared to what they make within them. They then offer a cut-rate version of the product in the places where they don’t make as much profit.

DU: Meta has public policy executives for different countries. Its top public policy executive in India opposed applying Facebook’s hate speech rules to a Hindu nationalist politician. What is the purpose of these executives? Is it ethical to have these types of executive positions? 

FH: Well, in a lot of countries, they have to have someone based in the country, which is why they appoint a local person. But you have to remember that the local staff is usually affiliated with advertising or government relations. They’re not really people who can make any independent decisions. They’re there to either bring in money for the company or to prevent money from being cut off from the company. It’s just one of those things where they don’t really have any ability to do much beyond taking down content or putting content back up.

DU: Two years ago, it was reported that Facebook’s systems promoted violence against Rohingya Muslims. Do you think the company learned from that?

FH: I think they learned in the short-term. They implemented a program called the Partner Program and I believe that they’ve built that up since 2021. But again, it’s one of these things where they responded temporarily because they got so much attention and backlash around it. If the press doesn’t continue to report on it, they aren’t as incentivized to respond. 

DU: How do you think AI will factor into the spread of misinformation?

FH: AI will hugely lower the cost of misinformation campaigns. It allows you to scale up with a much higher level of complexity. What will protect a lot of developing places is that there will not be AI models in their local languages. The issue I am much more concerned with is the loss of language. We will have large language models for the top 20 languages in the world, but not for the next 5,000. And that is simply because we don’t have the training data.

What’s equally terrifying is that the only entity that will have the data to create models beyond the top 20 languages is Meta, because they really are the internet in the most fragile parts of the world. The idea that a company that plays fast and loose on a bunch of different stuff is now going to be what controls AI for the future of most of the world is terrifying. 

The other possibility, which I see as more likely, is that if AI exists in the top 20 languages, or even the top 40 languages, it will push people to speak one of the biggest languages in the world. I worry that the difference in productivity is going to be yet another factor in killing languages around the world.

DU: In terms of Europe, how important a role do you think the Digital Services Act can play in holding platforms accountable? Do you think unregulated social media companies are dangerous to our democracies?

FH: The Digital Services Act (DSA) is super helpful for a limited number of markets. Social media platforms are going to do a better job in Europe because of the DSA, and there will be a carryover that benefits people outside of Europe at a lesser scale. It’s just one of those things where you will get safety if you make big tech companies scared. And the DSA made them scared for Europe, but that doesn’t mean they’re going to offer a similarly improved experience elsewhere. 

It’s a question of who is watching, and if there aren’t enough people watching to turn safety into an issue, then social media corporations just don’t care. Let’s consider transparency rights: Europe is the only place that has any transparency rights at the moment. Even before the DSA, when you looked at where platforms spent their money on hate speech, they spent the bulk of it on English and German. How do you explain that? It’s because Germany has some of the only hate speech laws in the world. It really does matter where you put pressure on things.

DU: You’ve previously recommended the establishment of a U.S. federal agency that would oversee social media algorithms as a solution to curb the spread of hate speech. How can we ensure that this type of agency would not infringe on freedom of speech?

FH: At the minimum, we need transparency laws. These laws give advertisers real choices about where they spend their dollars. They give communities real choices about where to spend their time. It’s one of these questions of what the larger danger is. I think the idea that activists are safe online and on Facebook is a very dangerous assumption to make. We are now facing a moment where weaponizing social media against vulnerable groups is really easy. We must ask: Who is going to weaponize social media? And what we have repeatedly seen is that the far right is much more willing to do so than the far left is. If we don’t do anything, we should just accept that far-right governments are going to rise around the world.

And so the question becomes: are more of our rights going to be infringed if we have an agency that — if designed correctly — has a bunch of checks and balances on it, or if we have far-right governments come to power, partly due to their weaponization of social media? 

And to be clear, part of how Modi came to power in India was because he was one of the early ones who weaponized social media.

DU: In these far-right governments, there is no incentive to create these checks and balances. What kind of oversight can countries in South Asia have when the system that would create the oversight is the beneficiary of the weaponization of social media?

FH: We can talk about how dangerous the government is but what happens when a foreign entity comes in and weaponizes it further? The question shifts from one concerned with internal affairs to one focused on foreign involvement. What we have learned from Gaza and the Ukraine War is that social media is the new frontier of war. So it’s a question of who do you fear more, the inside or the outside? 

When you have an environment where there is a far-right government and you have increased tensions on the ground, it’s even easier for a foreign adversary to come in and light the match. Other than going to advertisers and investors, you don’t have any option beyond governmental intervention, which comes with some level of complication. 

The question then becomes: do we wait for the other governments to fall as well? That is just a matter of time. It’s a matter of time before all of Southeast Asia installs far-right governments. Places like Malaysia are seeing this right now; they’re trying to pass an online safety law similar to Canada’s bill. I can’t speak to their motivations personally, but I believe that part of it is that they are frightened that we will see people weaponize social media for hateful, nefarious reasons.

For social media companies to actually do something, it will require investors and advertisers stepping in. Without intervention from these two parties, there is no incentive. Countries will have to pass laws that require companies to disclose what they did to keep their constituents safe. We have to create pressure because as long as Mark Zuckerberg thinks he has more to gain from doing nothing, he won’t do anything. 
