Devansh Kaushik
INTRODUCTION
The Internet, in essence a globalised, decentralised and egalitarian electronic communication network, has been applauded since its popularisation in the 1990s for its freedom of access and use, and for its ability to cross borders and break barriers.
The flip side of the coin is that the anonymity, convenience and immediacy the Web offers can equally be exploited by hatemongers and extremists. This is increasingly the case today, with computers and smartphones getting cheaper and internet access becoming ubiquitous. Websites, private message boards, chat rooms and social media groups now serve as conduits through which extremists spread their ideas and beliefs, and mobilise for subversive activities. The Internet has thus become a cheap and convenient medium through which previously fragmented and disjointed groups can connect and interact, engender a collective identity and develop a sub-culture with a common ideology.
Empirical trends suggest a correlation between online hate speech and hate crimes in the real world. This connection was starkly illustrated by the Christchurch mosque shootings in New Zealand, in which a white supremacist opened fire on worshippers, live-streamed the carnage on Facebook and published a hate-filled manifesto online. The perpetrator had also been active on extremist online chatrooms and message boards. In the aftermath, New Zealand’s Prime Minister, Jacinda Ardern, initiated “The Christchurch Call”, a non-binding multilateral agreement that calls on social media giants to clamp down on violent and toxic content; seventeen countries have already signed it. Many nations are accordingly enacting legislation aimed at curbing online hate speech, specifically targeting mainstream social media platforms.
This article broadly aims to highlight the limitations of regulating online hate speech through the current approach of unilateral national legislation. Towards that end, it will analyse some examples of the recent wave of legislation, such as those enacted in Germany and Australia.
RECENT LEGISLATION
Germany: The NetzDG law
The Netzwerkdurchsetzungsgesetz (NetzDG), or Network Enforcement Act, came into full effect in Germany in January 2018. The law compels social media companies to remove hate speech and other ‘illegal content’, backed by heavy penalties of up to 50 million euros, in effect requiring them to censor on behalf of the government. The problem is that ‘illegal content’ is nowhere clearly defined, giving rise to well-founded apprehensions of overbroad censorship. The Twitter account of a German far-right politician was recently suspended for supposedly racist content after she criticised a German police department for tweeting in Arabic, provoking a political outcry. The episode shows how, under too sweeping a law, a subjective definition of hate speech can become a tool for censorship and the curtailing of dissent.
The NetzDG law has thus been flagged by Human Rights Watch as setting a dangerous precedent, and calls have emerged for its repeal. The legislation has also had an unfortunate domino effect, being cited as a model for similar laws under development in countries such as Singapore and the Philippines.
Australia: Sharing of Abhorrent Violent Material Act
Australia recently enacted the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019, which requires social media companies to identify and remove ‘abhorrent violent material’ in an ‘expeditious’ manner. Failure to do so exposes companies to fines of up to ten per cent of annual turnover and, in an unprecedented provision, senior executives to imprisonment.
Passed within days of the Christchurch massacre, this clearly rushed legislation was pushed through without due consultation. The Act is ambiguous as to the scope of the prohibited content, the manner in which executives would be personally liable, and the time frame that qualifies as ‘expeditious’. Here too, concerns of censorship and misuse remain.
LIMITATIONS OF THE CURRENT APPROACH
The current legislative approach suffers from several common defects:
The Virtual Turf War
States’ attempts to clamp down on online hate speech by criminalising the dissemination of objectionable content are thwarted by geographically limited jurisdictions. On its face, there is a clear logical inconsistency in imposing virtual borders on the geographically indeterminate landscape of the Internet. In practice, the perpetrator of hate speech need not even be in the same country as the victim: the operator of the platform, the user of the service and the host server may all sit in different jurisdictions. Consequently, online hate speech continues to go unchecked. Even if, after considerable delay, a government agency manages to shut down a website, the same content can resurface in a different country under a different domain name, sometimes within hours.
Attempts to prosecute perpetrators extraterritorially fail repeatedly, because national laws on hate speech differ widely across legal systems, societies and cultures. Yahoo! Inc. v. La Ligue Contre Le Racisme et l’Antisémitisme, a landmark case decided in 2001, is illustrative. A judicial impasse was reached for precisely these reasons when the right to freedom of expression under the First Amendment to the United States Constitution clashed with France’s prohibition on trafficking in Nazi memorabilia. A French students’ union and an anti-racism league had sued Yahoo! over Nazi memorabilia offered for auction on its platform. The US District Court ruled that the United States was not bound to let a foreign state regulate the speech of a US resident within the United States merely because that speech is accessible to Internet users in another nation.
What is ‘objectionable’?
Normatively, what exactly constitutes ‘hate’ or ‘objectionable content’ remains subjective. Like any other technology, the Internet is a double-edged sword, capable of both good and evil. The same Facebook live-streaming used in Christchurch was also used in Minnesota, USA, to expose the police shooting of Philando Castile, a 32-year-old African-American man shot at point-blank range during a routine traffic stop. Facebook initially took the video down, only to restore it after user protests. Similarly, it can be argued that in a strictly visual sense, little differentiates the Christchurch massacre from footage documenting war crimes, such as the leaked video that went viral in 2010 showing a US helicopter attack on Iraqi civilians. Such subjectivity creates problems for any regulator and hinders the use of automated filtering technologies for content regulation. It is further complicated by differences in law, ideology, culture and society across the borderless landscape of the Internet. Yet we keep seeing short-sighted regulation, such as the European Union’s March 2019 mandate of automated ‘content filters’ for internet companies, enacted in ignorance of ground realities and technological constraints.
This subjectivity dilemma has been succinctly described: “One person’s ‘trolling’, after all, is another person’s ‘good-faith discussion’, and God help the regulator tasked with drawing a line between them.” It is at this altar of subjectivity and context that any attempt by governments to lay down objective laws for content regulation falls flat.
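The technological constraint is worth making concrete. The snippet below is a minimal sketch in Python, with an invented blocklist and invented sample posts (no real moderation system works this simply), of the context-blind matching that automated screening ultimately rests on: it cannot tell glorification of violence from journalism documenting the same event, because the distinguishing feature is intent and context rather than any word on the page.

```python
# A deliberately naive, keyword-based content filter. The blocklist and
# sample posts are invented for illustration only; real systems are more
# sophisticated but share the same core flaw of context-blindness.

FLAGGED_TERMS = {"massacre", "shooting", "attack"}  # context-blind blocklist

def is_flagged(post: str) -> bool:
    """Flag a post if any blocklisted term appears, regardless of intent."""
    words = {word.strip(".,;!?\"'").lower() for word in post.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

glorification = "We celebrate the massacre and call for another attack."
journalism = "Survivors recount the massacre; investigators detail the attack."

print(is_flagged(glorification))  # True -- the intended target is removed
print(is_flagged(journalism))     # True -- documentary speech is also removed
```

More sophisticated machine-learning classifiers replace the word list with statistical patterns, but they inherit the same weakness: the signal separating ‘trolling’ from ‘good-faith discussion’ is rarely visible in the content alone.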
Regulatory Excesses
A drawback of excessive government regulation is that when individual countries attempt to force multi-national platforms into compliance with their own national frameworks, service delivery suffers. Fragmented regulation breeds inefficiencies and higher costs, as companies must build separate systems, standards and infrastructure for each jurisdiction.
This also has an adverse effect on free competition in cyberspace. These laws are drafted to police large platforms and impose heavy compliance burdens: appointing hundreds of moderators, deploying screening technology, submitting compliance reports, and so on. Such costs smother start-ups that lack the resources and capability to comply. A ‘one-size-fits-all’ regulatory approach to all types of online services and content is arbitrary and disproportionately burdens smaller players, spurring market exit and deterring new entrants. Consumers, in turn, are indirectly affected through reduced services, higher costs and fewer options.
Moderation or Censorship?
Recent legislation incorporating personal liability of executives and heavy monetary penalties has rendered social media platforms over-cautious. The fallout is that free speech suffers as platforms adopt an extremely conservative approach and take down content indiscriminately on the slightest suspicion, rather than risk penalisation. This issue was recently highlighted by Republican Senator Ted Cruz when he accused social media platforms such as Twitter and Facebook of censoring conservative content in the United States.
An overly aggressive regulatory mechanism also has a chilling effect on free speech and expression, dissuading individuals from voicing themselves freely online. The same effect has been observed in the semi-closed online landscapes of countries like Russia and China.
States’ attempts to regulate online content thus run the risk of encroaching on constitutional rights to free speech and expression. Government content regulations are frequently challenged on this basis in courts.
CONCLUSION
In summary, the phenomenal growth of the Internet as a means of communication has been accompanied by a rise in extremist content in cyberspace. The anonymity and reach afforded by the Web have made the expression of hate effortless in an abstract landscape beyond the realm of traditional law enforcement.
In response to the rising trend of hate crimes, states have sought to regulate the Internet through the conventional strategy of national legal frameworks. Laws such as those recently enacted in Australia and Germany are misguided and short-sighted.
The multi-jurisdictional nature of the Internet undermines efforts to impose virtual borders on cyberspace. Other limitations include the subjectivity of digital content, censorship concerns and over-regulation to the detriment of the market. The unilateral legislative approach is thus inherently flawed and has adverse implications for democratic free speech.
A future course of action should explore alternatives such as a multilateral approach aimed at creating a globally harmonised framework for regulating online extremist content, or technological innovations such as user-end screening software and Artificial Intelligence-enabled content filtration.
The author is a first-year B.A., LL.B. (Hons.) student at the National Law School of India University, Bengaluru.