
Social Media Infused Hate Speech: A Judicial Paradox

Hate speech, despite being a concept that dates back to the colonial era and one used consistently in legal circles, has not yet been statutorily defined. In Pravasi Bhalai Sangathan v. UOI, the Court recognised this gap and asked the Law Commission to define “hate speech”, a task the Commission took up in its 267th Report. A definition comes at a crucial stage because, with the wide-scale adoption of social media, these platforms have become the centre stage for the spread of hate speech. This was evident during the recent Delhi riots, where Facebook was used to incite communal hatred. The Parliamentary Standing Committee and the Peace and Harmony Committee constituted by the Delhi Legislative Assembly, relying on the Wall Street Journal’s report, suspect that Facebook did not duly detect and control hate speech on its platform during that period. Though the Journal’s report claims that political bias is the primary reason for the inefficient content regulation, it also acknowledges that “Facebook faces a monumental challenge policing hate speech across the enormous volume of content posted to its platforms worldwide”. It is important to note that the claims of bias are not yet confirmed and may be mere politically motivated allegations. However, Facebook’s inefficiency in moderating hate speech is a real issue, especially because its algorithm has been shown to have several limitations that have now become the reason for organised hate speech crimes. This article argues that Facebook’s inefficient content moderation system, which relies on artificial intelligence, is responsible for the communal disharmony spread in Delhi last year, and that Facebook needs to take responsibility and develop an ethical system to prevent such misuse in the future.

Last year, the Delhi Legislative Assembly constituted the “Peace and Harmony Committee” to analyse the factors responsible for the riots. During the investigation, the Committee received leads suggesting that Facebook’s platform was used as the primary tool for spreading hate speech and disturbing communal harmony. The Committee summoned Facebook India’s Vice President and Managing Director, Mr Ajit Mohan, to understand Facebook’s role as a medium of hatred. However, Mr Mohan refused to appear on the ground of lack of jurisdiction and filed an Article 32 petition before the Supreme Court to set aside the summons issued by the Committee. The Court, in Ajit Mohan v. Legislative Assembly, National Capital Territory of Delhi, ruled that the power of the legislature is not limited to enacting laws and that the Committee’s summons was valid for investigating the matter. However, the Court noted that the Committee cannot punish or recommend action against Facebook, because such power is reserved for the Central government.

More importantly, the Court observed that Facebook has become a platform for “disruptive messages, violence, and ideologies.” The Court also opined that Facebook is not merely a platform for unregulated user content; rather, it uses algorithms for a personalised experience and therefore plays an active role in regulating content. The Court’s observation on the use of algorithms for content moderation is interesting because artificial intelligence and machine learning are still not statutorily recognised terminologies. More importantly, the liability framework for these technologies is still not jurisprudentially established. Certainly, under the current regime, the technology cannot itself be held liable for its decisions. So, does Facebook bear any liability for the hate speech spread on its platform despite being an intermediary? The answer lies in the technicality of Section 79 of the Information Technology Act, which exempts an intermediary from liability in certain circumstances. An intermediary is not liable when it merely acts as a platform for third-party information; this is exactly what Facebook claimed in its defence in the Ajit Mohan case referred to above. However, the section also requires an intermediary to observe due diligence while discharging its duties. Arguably, using an inefficient algorithmic system is not ‘due’ diligence, especially because Facebook itself claims that more than 90% of hate speech on its platform is detected by artificial intelligence. Further, if the Wall Street Journal’s report is to be trusted in its claim that Facebook intentionally encouraged hate speech from ruling party members for economic benefit, which ultimately led to the Delhi riots, then under Section 79(3)(a) of the Act Facebook would lose this exemption for having aided and induced the commission of an unlawful act. Therefore, liability will rest on Facebook either for using algorithmic software too weak to duly detect hate speech or for purposefully inducing hate speech for an illegal motive.

Why should Facebook be held liable for an algorithm’s mistakes?

While holding Facebook liable for spreading hate speech on its platform, intentionally or otherwise, is necessary, it is also important to consider whether Facebook should be liable for an algorithmic decision at all. Before analysing this question, we need to consider why Facebook relies on algorithms in the first place despite their unreliability. The reason is the sheer volume of hate speech content on the platform, which cannot be regulated by human intervention alone. Last year, Facebook took down 9.6 million pieces of hate speech content, six times the figure for the prior quarter. More importantly, this number represents only the content taken down and not the actual amount of hate speech content on the platform. Further, Facebook has been criticised for its human content moderation practices, which have led to human rights violations of the moderators. Therefore, from both an economic and a humanitarian perspective, Facebook has little choice but to rely predominantly on algorithms.

As discussed earlier, the liability framework for artificial intelligence is not well settled globally; even in the present Delhi riots case, Facebook can argue that it exercised the ‘due diligence’ required under Section 79. However, Section 79, if read with the product liability doctrine, may render Facebook liable. Product liability was recently given statutory recognition under the Consumer Protection Act, 2019, under which a product manufacturer or service provider is liable to compensate a consumer for any harm caused by a defective product or service. It may thus be argued that a user suffers harm due to a ‘defective’ service when Facebook fails to duly moderate content on the platform, making Facebook liable under the Consumer Protection Act. Section 79 of the IT Act, however, explicitly exempts intermediaries from liability under any other law except as provided in that section. Nevertheless, following the rationale of the Supreme Court in Ajit Mohan, it can be argued that Facebook’s responsibility to moderate hate speech content inherently makes it responsible for any technique it uses to execute this task. Further, if the product liability doctrine is extended jurisprudentially, Facebook becomes liable for relying on technology that fails to deliver its promised service of a “safe” and “neutral” platform.

Facebook’s Role in Global Unrests

The scope for misuse of social media platforms has been witnessed several times globally, where fake information and hate speech have led to national and international emergencies. Even the Supreme Court recognised Facebook’s role in Myanmar’s crisis, where it failed to curb ethnically sensitive content on its platform. In the communal violence between Buddhists and Rohingya Muslims that began in 2012, Facebook played a key role in the events that culminated in the Rohingya genocide. United Nations human rights investigators have also confirmed Facebook’s role in escalating communal tensions, and even Facebook admitted that its platform was used to incite violence in Myanmar. Similarly, the Supreme Court referred to the 2018 Sri Lanka violence, where again Facebook’s platform was used to spread hatred against the Muslim community. Facebook admitted and apologised for its role in the communal violence and announced that it would use “detection technology” to prevent such misuse in the future. Despite substantial evidence against Facebook, hardly any action was taken against the social media platform because of its global identity as an ‘intermediary’. However, Facebook should be held responsible for being an indirect promoter of hate speech, regional unrest, and violence. This is exactly what the Supreme Court acknowledged when it stated, “with their expanded role as an intermediary, (Facebook) can hardly contend that they have some exceptional privilege to abstain from appearing before a committee duly constituted by the Assembly.”

The Way Forward

From the above discussion, two things are clear: first, Facebook needs to take responsibility for its inefficiency in curbing hate speech on its platform; and second, only algorithmic moderation promises efficiency at the scale of hate speech content involved. However, as discussed, Facebook’s current algorithms have certain limitations and are often found mistakenly taking down content, which signals their unreliability. Since artificial intelligence performs tasks by analysing the patterns in the data fed to its system, it is clear that Facebook’s algorithms are not well trained enough to be objective in flagging and moderating hate speech. Therefore, firstly, Facebook needs to ensure that the data scientists training the system do not introduce their own biases. Secondly, the artificial intelligence software should be governed by a right to explanation, whereby the user is informed of the reasons for a moderation decision, affirmative or otherwise. This would also create a system of precedent through which the algorithm can compare a case against its previous decisions before concluding, as sketched below. Such a system would avoid the biases that platforms like Facebook are currently accused of. Thirdly, states need to establish and strengthen an AI-related liability framework. Under the current system, it is relatively easy for social media platforms to shift the burden onto the algorithm as a ‘technical’ error or shortfall. A framework would also ensure that a standardised system of accountability is established, generating a sense of trust in these platforms.
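To make the second and third suggestions concrete, below is a minimal sketch, in Python, of what an explainable, precedent-aware moderation pipeline could look like. Every name, threshold, and scoring rule here is hypothetical and purely illustrative; Facebook’s actual systems are proprietary and far more complex.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Decision:
    post_id: str
    removed: bool
    score: float                # model's confidence that the post is hate speech
    terms: List[str]            # terms the classifier flagged
    explanation: str            # reason shared with the user (right to explanation)
    precedents: List[str] = field(default_factory=list)  # ids of similar past cases

class ModerationPipeline:
    """Toy pipeline: classify a post, consult precedent, explain the outcome."""

    def __init__(self, classifier: Callable[[str], Tuple[float, List[str]]],
                 threshold: float = 0.8):
        self.classifier = classifier        # any callable: text -> (score, flagged_terms)
        self.threshold = threshold
        self.case_log: List[Decision] = []  # the 'precedent' store, kept for audit

    def _similar_cases(self, terms: List[str]) -> List[str]:
        # Naive precedent lookup: earlier removals that share a flagged term.
        return [d.post_id for d in self.case_log
                if d.removed and set(terms) & set(d.terms)]

    def moderate(self, post_id: str, text: str) -> Decision:
        score, terms = self.classifier(text)
        precedents = self._similar_cases(terms)
        removed = score >= self.threshold
        if removed:
            explanation = (f"Removed: flagged terms {terms} scored {score:.2f} "
                           f"(threshold {self.threshold}); consistent with "
                           f"{len(precedents)} prior case(s).")
        else:
            explanation = f"Kept: score {score:.2f} is below threshold {self.threshold}."
        decision = Decision(post_id, removed, score, terms, explanation, precedents)
        self.case_log.append(decision)      # every outcome is logged, removals and keeps
        return decision

# Stand-in for a trained model: scores a post by the presence of flagged words.
def keyword_classifier(text: str) -> Tuple[float, List[str]]:
    flagged = [w for w in ("slur_a", "slur_b") if w in text.lower()]
    return (0.9 if flagged else 0.1), flagged

pipeline = ModerationPipeline(keyword_classifier)
print(pipeline.moderate("post-1", "an innocuous holiday photo").explanation)
# -> Kept: score 0.10 is below threshold 0.8.
```

The design choice worth noting is that every decision, removal or not, is logged together with its explanation and supporting precedents; under a liability framework of the kind proposed above, such a log is what would allow a regulator or court to audit whether ‘due diligence’ was in fact observed.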

Under the present scenario, Facebook has not duly met its responsibility of providing a platform that promotes both free speech and peace, globally and regionally. Platforms like Facebook certainly need to understand this responsibility, as regulatory frameworks around the globe are bound to confront social media infused hate speech with proper liability rules for so-called intermediaries. A proactive step towards a better content moderation system is required, because the designated algorithms will need time to train and evolve to meet the growing demand for their reliability, not just from a liability standpoint but from a human rights angle as well.

The article can be cited as:

Milind Yadav, Social Media Infused Hate Speech: A Judicial Paradox, Metacept-Communicating the Law, accessible at https://metacept.com/social-media-infused-hate-speech:-a-judicial-paradox
