India’s Courts Must Hold Social Media Platforms Accountable for Hate Speech

Supreme Court of India. (Photo: Pinakpani)

India has one of the world’s largest social media user bases, with millions actively engaging across platforms. The vast network of these digital spaces makes them powerful tools for sharing information and connecting diverse audiences. However, this accessibility also makes social media highly susceptible to misuse. In recent years, individuals and groups have increasingly exploited these platforms to spread misinformation, disinformation, and hate speech, fueling communal tensions and real-world violence.

According to a 2024 report by my organization, the Center for the Study of Organized Hate (CSOH), during the April–June 2024 general elections, senior Bharatiya Janata Party (BJP) leaders delivered 266 anti-minority hate speeches, which were live-streamed simultaneously on YouTube, Facebook, and X (formerly Twitter) via official party and leader accounts. Of the total 1,165 documented hate speech events, 995 were first shared or streamed on social media, with Facebook (495) and YouTube (211) being the primary platforms. Despite this, Facebook removed only three videos, leaving 98.4% of the reported content accessible online.

BJP leader T. Raja Singh, a legislator from Telangana long known for incendiary and dangerous speeches, was banned by Meta under its ‘Dangerous Organizations and Individuals’ (DOI) policy in 2020 over repeated violations involving hate speech. Despite the ban, Singh and his supporters circumvented it by creating fan groups on Facebook such as ‘Raja Singh (Bhagyanagar) MLA’ and ‘Raja Singh (Dhoolpet) MLA,’ which collectively amassed over 1 million members. He also ran at least four other Instagram accounts, with a combined following of over 150,000, which were used to promote his events, live-stream his speeches, and circulate viral snippets—some of which garnered tens of millions of views. Following the publication of the CSOH report in February 2025 that exposed these violations, Meta quietly took down Singh’s accounts and groups.

This case underscores Meta’s complacency: even banned individuals manage to operate groups and accounts with massive followings on its platforms. If such high-profile violators can evade enforcement, it raises serious doubts about Meta’s ability—or willingness—to curb everyday hate speech, dehumanizing rhetoric, and dangerous incitement.

Erosion of Platform Commitments

As the Indian government led by Prime Minister Narendra Modi continues to scale its efforts to regulate online content—often selectively, targeting material critical of its policies and Hindu nationalist ideology—it becomes even more critical for tech platforms to enforce their own community standards impartially and effectively.

After Donald Trump’s electoral victory in the US, Meta loosened its rules around hate speech and abuse, dismantled its fact-checking program, and proposed adopting a community notes system, similar to X’s, that crowdsources fact verification.

These changes have global implications—including in India. Meta founder and CEO Mark Zuckerberg admitted that the new algorithm and policy changes will result in “catch[ing] less bad stuff,” thus facilitating the spread of hate. The shift to crowdsourced moderation raises serious concerns: untrained users are more prone to bias, groupthink, and misinterpretation—especially in a politically polarized environment like India’s, where the ruling party has weaponized social media on a massive scale to promote anti-minority hatred. Local language nuance and context are often lost, increasing the risk of harmful content going unchecked. 

Compounding the issue is the monetization model of these platforms. Since advertising revenue depends on user engagement, platforms are incentivized to promote content that evokes strong emotional responses—often divisive or inflammatory. Our 2024 study revealed that Instagram Reels depicting brutal violence—primarily targeting members of the Muslim community and carried out by cow vigilantes—attracted nearly three times more views than non-violent content posted by the same accounts. Political parties have recognized this dynamic and invested heavily to amplify provocative content. During the 2024 elections, Meta even approved political ads that promoted anti-Muslim hate speech with incitement to violence, further incentivizing hate-driven engagement.

India’s Legal Framework

India’s constitutional protections for speech—enshrined in Article 19(1)(a)—are subject to “reasonable restrictions” under Article 19(2), including limitations based on decency or morality, defamation, incitement to an offense, and public order, among others. In Ramji Lal Modi v. State of Uttar Pradesh, the Indian Supreme Court permitted restrictions on speech “in the interests of” public order, a broader standard than restrictions merely “for the maintenance of” public order. However, later rulings, including Ram Manohar Lohia and Madhu Limaye, refined this view, holding that public order restrictions must go beyond mere disturbances of tranquility.

While these rulings were rooted in print and protest contexts, courts have since expanded the interpretation of “public order” to justify internet bans and protest crackdowns. Viewing hate speech through the ‘public order’ lens, the Supreme Court in Amish Devgan stated clearly that freedom of speech and expression cannot be exercised to the extent that it creates public disorder. These rights can thus be restricted to maintain public order, and those who “indulge in promotion and incitement of violence to challenge unity and integrity of the nation or public disorder tend to trample upon liberty and freedom of others.”

The ‘public order’ assessment for online speech is primarily shaped by the Information Technology Act, 2000 (IT Act). Section 69A of the Act lists ‘public order’ among the grounds on which the government can order the takedown of online content. Initially, the Act offered broad immunity to intermediaries. However, the 2021 Intermediary Guidelines and Digital Media Ethics Code (Rules 2021) placed new obligations on platforms, requiring swift takedowns and traceability of message originators upon government request. Rule 3 of the IT (Intermediaries Guidelines) Rules, 2011 also requires intermediaries to observe due diligence, making reasonable efforts to prevent users from hosting content that is grossly harmful, harassing, or hateful, among other categories.

A 2024 Public Interest Litigation (PIL) filed by Rohingya refugees living in Delhi urged the High Court to direct Facebook to remove hateful content targeting them. The court dismissed the plea, noting that while social media abuse is widespread, the legal remedies under the 2021 Rules had not been pursued. It emphasized that petitioners must first exhaust statutory grievance mechanisms before invoking constitutional remedies under Article 226.

More troubling is the government’s growing authority to block content under the 2009 IT Blocking Rules, which allow content removal without prior notice or hearings—particularly in “emergency” cases. Earlier this month, the Supreme Court sought the federal government’s response to a plea challenging these powers. This legal framework enabled the government to block over 500 accounts during the farmer protests of 2021 and 2024—often targeting dissenting voices while leaving incendiary content against Muslim minorities untouched. While the government does have mechanisms in place (the 2009, 2011, and 2021 Rules), these are limited to regulating only certain types of content the government deems illegal—often based on narrow or selective interpretations. It is imperative for the courts to step in, provide clearer guidelines, and streamline the regulatory process.

Need for Judicial Oversight

The IT Rules, 2021, were challenged on several grounds, including that they restrict freedom of speech and information and overstep the authority granted by the IT Act, 2000, regulating Big Tech at the expense of users’ rights. As the case is still pending, the Delhi High Court should ensure a balance between fundamental rights and the regulation of Big Tech. For instance, the court could strike down the provision requiring platforms to ‘trace the users’ as unconstitutional while laying down provisions that hold companies strictly liable for regulating content. Using its special powers under Article 142, the Supreme Court could also establish clear, binding guidelines on content moderation and platform liability, including penalties for non-compliance.

Meta’s “Trusted Partner” program—designed to allow civil society organizations to flag harmful political content—has shown promise but suffers from slow and inconsistent enforcement. As of 2024, Meta had 12 such partners in India, but delays in content takedowns have steadily eroded the program’s credibility. Courts could mandate stronger oversight of this program and ensure it functions independently of government interference, thereby preserving citizen agency and digital safety.

With the decline of mainstream media credibility in India, social media has become a key arena for public discourse. Courts must act proactively to protect it. They can take cues from international precedents: Brazil’s Supreme Court, for instance, fined and temporarily banned X over its failure to act against misinformation favoring Jair Bolsonaro. Elon Musk initially invoked free speech but ultimately complied. The issue is admittedly nuanced and involves technical complexities; courts could appoint amici curiae (friends of the court) to assist them. For instance, in the 2024 copyright infringement suit filed by ANI against OpenAI, the developer of ChatGPT, the Delhi High Court noted that the case raised a “novel legal issue arising from recent technological advancements” and appointed a professor from NLSIU and an advocate to lend their expertise to the proceedings.

Indian courts have already acknowledged platform responsibility. In a 2021 judgment, Supreme Court Justices S.K. Kaul, Dinesh Maheshwari, and Hrishikesh Roy highlighted the necessity of social media platforms being held accountable for hateful speech. The judges noted that “social media has become a tool in the hands of various interest groups who have recognized its disruptive potential.” They rejected the notion that platforms merely “host” third-party content, instead describing their role as “active,” with real-world consequences.

As the government pushes for greater regulatory control while allowing and enabling hate speech, the burden falls on the judiciary to ensure that both platforms and the state are held accountable. Courts must demand transparency, enforce platform responsibilities, and safeguard citizens from the toxic convergence of political propaganda and digital platforms. 

(Amrashaa Singh is the Impact Litigation Officer at the Center for the Study of Organized Hate (CSOH). This article was first published by Tech Policy Press.)
