The new rules prove impracticable when analysed against the technical parameters of tracing, automated filtering and content removal.
Aditi Mishra and Kavya Arora
INTRODUCTION
While the world struggles with a pandemic, India is bearing the brunt of a parallel infodemic. Medical teams have been spat at, pelted with stones and attacked. Communal poison has been spewed. Misleading information about the lockdown guidelines has resulted in mass gatherings, confusion and chaos. Covid-19 has become a new backdrop against which the destructive potential of fake news is being witnessed. But make no mistake: it is the same old gremlin that has hijacked politics, sparked riots, caused lynchings and fomented mob violence in the past. Fake news has emerged as the omnipresent Gordian knot that aggravates every societal predicament. Several methodologies have been mooted to tackle this menace, and intermediary liability is one among them. Intermediaries are the platforms that host and transfer information; they are not the authors of this content, but mere conduits of it. This is called the content/conduit distinction. Because of it, intermediaries cannot be held fully answerable for the content they carry and are accorded a ‘safe harbour’ from absolute legal responsibility. They are, however, still held responsible to a certain extent, upon their failure to remove unlawful content when notified, under the provisions of intermediary liability.[1]
This article delves into the most recent Indian rules meant for this purpose, the draft Intermediaries Guidelines (Amendment) Rules, 2018. It analyses these rules along three prongs, namely tracing, automated filtering and content removal, to identify their effectiveness and shortcomings, particularly with respect to WhatsApp, one of the most popular social-media platforms in India. It also presents suggestions for overcoming the problem of fake news.
CRITICAL ANALYSIS OF THE INTERMEDIARIES GUIDELINES (AMENDMENT) RULES, 2018
On 24 December 2018, the Ministry of Electronics and Information Technology released the Draft Rules to amend the existing Intermediaries Guidelines in order to fight fake news, curb obscene content and prevent the misuse of social media. The suggested changes fall into three categories: tracing the originator of any information, deploying tools for the automated filtering of unlawful content, and taking down illegal content within 24 hours.
Traceability
Rule 3(5) of the Draft Rules obligates the intermediary to trace the originator of fake news, if required to do so by authorised government agencies. We argue that, first, this provision is technologically difficult to implement on end-to-end encrypted platforms like WhatsApp and, second, it violates users’ Right to Privacy.
To enable traceability, petitions were filed in the Madras [2], Bombay [3] and Madhya Pradesh [4] High Courts for linking social-media accounts with Aadhaar cards, but the Madras High Court ruled against the petition. Traceability is an especially technical issue in the case of WhatsApp because of its use of end-to-end encryption (‘E2EE’), which ensures that only the sender and the recipient can read a message; nobody else, not even WhatsApp, can. Any compromise of the E2EE feature would mean compromising the fundamental Right to Privacy of users. Hence, the primary question before the Court was how to identify the originator of a message without violating the Right to Privacy.
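To make the E2EE constraint concrete, the following is a minimal sketch using the PyNaCl library. It is an illustration under our own assumptions: WhatsApp actually implements the far more elaborate Signal protocol, but the core property is the same, namely that the relaying platform holds no key that can open the ciphertext.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only: WhatsApp uses the Signal protocol, which adds forward
# secrecy and other machinery not shown here.
from nacl.public import PrivateKey, Box

# Each user generates a keypair; private keys never leave the device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts using their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"Meet at 6 pm")

# The platform relays only this opaque ciphertext; it has no key to read it.
print(ciphertext.hex()[:32], "...")

# Only the recipient, holding the matching private key, can decrypt.
recipient_box = Box(recipient_key, sender_key.public_key)
assert recipient_box.decrypt(ciphertext) == b"Meet at 6 pm"
```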
The Madras High Court sought the assistance of Prof. Kamakoti, a member of the PM’s Scientific Advisory Committee, to ascertain the technical feasibility of enabling traceability on a platform using E2EE. He suggested that every time a message is created on WhatsApp, the originator’s identity be attached to it in encrypted form; if the origin of a message ever has to be traced, this tag could be decrypted using a private key that remains with WhatsApp alone. WhatsApp opposed the proposal on the grounds that its platform is not designed to know a user’s nationality, making selective implementation for Indian users technologically impossible, and that every message sent on WhatsApp is secured with a unique lock whose key lies only with the recipient.
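A rough sketch of this proposal, under our own assumptions about the details left open (the field names and envelope format below are our illustration, not WhatsApp’s design): the originator’s identifier is sealed with a public key whose private half is held by the platform, so the tag travels with every message but can be opened only by WhatsApp on a lawful order.

```python
# Sketch of a Kamakoti-style traceability tag using PyNaCl's SealedBox;
# purely illustrative, not WhatsApp's actual design.
from nacl.public import PrivateKey, SealedBox

# Key held by the platform alone; its public half ships with the app.
platform_key = PrivateKey.generate()

def attach_origin_tag(message: bytes, originator_id: str) -> dict:
    """On message creation, seal the originator's identity to the platform key."""
    tag = SealedBox(platform_key.public_key).encrypt(originator_id.encode())
    return {"payload": message, "origin_tag": tag}  # tag is opaque to all users

def trace_originator(envelope: dict) -> str:
    """Run by the platform, only on a lawful order: open the sealed tag."""
    return SealedBox(platform_key).decrypt(envelope["origin_tag"]).decode()

envelope = attach_origin_tag(b"<forwarded text>", "user:+91XXXXXXXXXX")
assert trace_originator(envelope) == "user:+91XXXXXXXXXX"
```

Even this toy makes the privacy objection visible: whoever controls platform_key can unmask the originator of any message, so the tag quietly converts E2EE into encryption that the platform can pierce.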
Traceability will inevitably compromise individual privacy, which the Supreme Court held to be an intrinsic part of the Right to Life and Personal Liberty in the landmark Puttaswamy judgment. Any invasion of this right must follow a procedure that is proportional to the need for such interference and that guards against its abuse. If E2EE has to be broken, the privacy of all users will be endangered, which seems disproportionate. Furthermore, no procedural guidelines have been provided to guard against abuse under the garb of traceability.
Automated Filtering
Automated filtering technology is not new, but the debate around it was ignited in India by the introduction of the Draft Rules. Rule 3(9) requires the intermediary to deploy automated mechanisms to proactively identify and remove, or disable public access to, unlawful content on its platform. We submit that this provision is ineffective against fake news because intermediaries are not empowered to judge the legality of content. Moreover, the provision is vague, violates constitutionally protected rights and is unsuitable for WhatsApp.
The Supreme Court in Shreya Singhal v. Union of India held that intermediaries cannot be required to judge the legality of content; rather, they must remove content when the Government or a court directs them to do so. This principle was reiterated in the Delhi High Court’s recent judgment in Swami Ramdev v. Facebook. Leaving takedowns to an intermediary’s discretion carries a high potential of violating the fundamental right to freedom of speech and expression.
Section 66A of the IT Act was declared unconstitutional primarily for its vagueness: it criminalized sending ‘offensive information’ but defined no standard for what is offensive. Similarly, Rule 3(9) provides for filtering unlawful content but nowhere defines what constitutes ‘unlawful’, and thus suffers from the same pitfall. Further, automated filtering requires platforms to scan all the content shared on them, which an E2EE platform cannot do server-side, since it never sees the plaintext (see the sketch below). This, again, cannot be resolved without tackling the technicalities of E2EE and the Right to Privacy conundrum.
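For concreteness, here is a minimal sketch of the kind of proactive filter Rule 3(9) seems to contemplate, assuming a hash-blocklist approach, which is one common industry technique; the rule itself prescribes no mechanism, so this is purely our illustration. On an E2EE platform such a check could only run on the user’s device.

```python
# Toy proactive filter: match content against a blocklist of SHA-256 hashes
# of items already adjudicated unlawful. Hash matching is one common industry
# technique; Rule 3(9) names no mechanism, so this is purely illustrative.
import hashlib

BLOCKLIST = {
    # Hypothetical entry: the hash of a known unlawful item.
    hashlib.sha256(b"known unlawful image bytes").hexdigest(),
}

def should_block(content: bytes) -> bool:
    """Flag content whose exact bytes match a blocklisted item."""
    return hashlib.sha256(content).hexdigest() in BLOCKLIST

print(should_block(b"known unlawful image bytes"))  # True
print(should_block(b"slightly edited version"))     # False: exact match only
```

The second call exposes the core dilemma: trivial edits evade exact hashes, while broader fuzzy matching risks sweeping in lawful speech, the very over-blocking discussed below.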
24-Hour Takedown
Rule 3(8) of the Draft Rules obligates intermediaries to remove any content from their platforms within 24 hours of receiving a notification from the appropriate government agency or a court. We argue that this provision is too stringent and may lead to undesirable consequences.
Rule 3(8) provides no extension for scenarios where intermediaries lack the institutional capacity to respond so expeditiously, so an intermediary that misses the deadline may lose its safe-harbour protection. To avoid this, intermediaries may over-broaden automated filtering, resulting in the removal of harmless content as well. This, again, would curb freedom of speech and expression.
Also, such a stringent takedown obligation may trigger what is called the Streisand Effect: a communication phenomenon in which attempts to suppress information end up stimulating greater interest in it than would have existed had no action been taken. The reach, speed and penetration of social media amplify this effect. The phenomenon is closely associated with censorship: when states censor content, they must rationalize such actions to contain public outrage. Similarly, legal measures targeting fake news can increase the audience’s attention to it, and people may turn to alternative sources that are themselves misleading. To prevent such a reaction, the government needs to counter fake speech with more true speech.
THE WAY FORWARD
One major cause of the spread of fake news is that people are used to trusting other humans as reliable sources of information and believing something because someone else has vouched for it, which is what philosophers call testimony. People forward news without verifying it, and many believe it because they trust the person who shared it, because it is too tempting to ignore, or because they want to conform to the beliefs and actions of their peers. Unlike on Facebook and Twitter, the greater intimacy of WhatsApp’s private chats and personal groups makes fake news even more impactful and emotive. Apart from new laws and features such as limits on forwards, public awareness must be raised about the importance of verifying news. Certain actions that can be taken are as follows:
Digital Literacy
Governments and social-media platforms should take up the responsibility of fostering awareness about how to recognize and verify fake news, and should finance educational frameworks developed by NGOs and charities. A UK House of Commons committee recommended that ‘digital literacy should be the fourth pillar of education, alongside reading, writing and maths’. Initiatives in this direction include WhatsApp’s tie-up with NASSCOM to train people to identify misinformation, and WhatsApp’s public education campaign persuading its users in India to “spread joy not rumours”.
Marketplace of Ideas
The marketplace of ideas refers to the theory that the acceptance of ideas depends on their competition with one another, not on the opinion of a censor. Emphasizing First Amendment rights, US courts have continued to uphold the view that harmful or extreme speech is best left to the self-correcting potential of the “marketplace of ideas”. Verified news will thus win out over fake news if it is made more readily available and visible to people. This is also the single most effective way to counter the Streisand Effect.
Fact Check
Fact-checking must form the foundation of journalism, and once a story is verified it should be spread as widely, and made as viral, as its fake counterpart. Ahead of the 2019 Lok Sabha elections, WhatsApp launched a fact-checking service in India, the ‘Checkpoint Tipline’, which classified messages as true, false, misleading or disputed. More such fact-checking services should be promoted; collaborations between social-media platforms and fact-checking agencies hold considerable promise.
In conclusion, instead of excessively restrictive and often flawed legislative solutions, the government should rely more on pragmatic, scientifically endorsed techniques for countering fake news: sponsoring and amplifying real news, funding fact-checking agencies and digital-literacy initiatives, and encouraging the marketplace of ideas.
[1] T. Gillespie, ‘Regulation of and by Platforms’ in J. Burgess, A. Marwick and T. Poell (eds), The SAGE Handbook of Social Media (2018), pp. 254-278.
[2] Janani Krishnamurthy v. Union of India, W.P. No. 20774/2018 (Madras High Court).
[3] Sagar Rajabhau Surywanshi v. Union of India, PIL No. 147/2018 (Bombay High Court).
[4] Amitabha Gupta v. Union of India, W.P. No. 13076/2019 (Madhya Pradesh High Court).
Aditi and Kavya are students at the National Law University, Delhi, and the West Bengal National University of Juridical Sciences, Kolkata, respectively.