Law and Technology

The Elephant Not in the Room: The DPDPA’s failure to regulate behavioural tracking


Sriya Sridhar*


While India’s Digital Personal Data Protection Act, 2023 (‘DPDPA’) adopts a consent-based framework for data collection and processing, much like the GDPR and laws in other jurisdictions, it fails to address the harms that arise from the collection of certain forms of data. In particular, the use of behavioural data for tracking, the creation of addictive online experiences, and algorithmic recommender systems have led to calls for regulation around the world. However, provisions which could have effectively regulated the use of behavioural data in the DPDPA have been entirely diluted in the final version of the Act. This article traces this dilution and explores the resulting consequences.

Introduction

On 22nd April, 2024, the European Commission (‘the Commission’) initiated action against TikTok under the Digital Services Act, citing concerns about the mechanism for implementing ‘TikTok Lite’ – a new application in which users are given ‘rewards’ for performing various tasks, such as liking videos or engaging with other content. These rewards can then be exchanged for real-world benefits, such as gift cards and vouchers.

Among the Commission’s major concerns is the potential for this type of platform to push users towards addictive tendencies, especially minors who are able to bypass age verification. Such a reward-driven social media platform would also involve processing large amounts of behavioural data about users’ traits, attributes, and characteristics, including precise data points on the personality drivers behind their engagement.

In another recent regulatory development, the European Data Protection Board (‘the Board’) issued an opinion on Meta’s ‘consent or pay’ model, wherein the platform asks users to pay to use a version of the platform without behavioural advertising.

What these actions have in common is an underlying concern about the use of behavioural data – processing that goes beyond personal data generally to collect data points on a user’s attributes, with the purpose of driving attention towards a particular platform or advertisement tailored to those traits.

Studies have raised concerns regarding the potential for consumer manipulation through online behavioural advertising. As the use cases for behavioural advertising and this data move beyond driving consumer purchase intention to broader ones such as delivering political ads, the effects of such manipulation become more pernicious. These effects range from amplifying the spread of misinformation to increasing the potential for social surveillance. In addition, there is the potential for bias against marginalised communities and the reinforcement of discriminatory patterns.

While regulators around the world are increasingly becoming more vigilant against the dangers of mass usage and sale of behavioural data and profiling, I argue in this piece that India’s Digital Personal Data Protection Act, 2023 (the ‘Act’), takes a step back.

In making this argument, I will firstly compare the scope of previous drafts of the Act and argue that these were more effective in regulating the usage of behavioural data through (i) robust definitions incorporating behavioural data within personal data, (ii) extending the applicability of the Act to ‘profiling’ and defining profiling, and (iii) including mental harms and behavioural manipulation within the scope of ‘harm’ which could be caused to a Data Principal (as defined therein) under the Act. Secondly, I will highlight how the dilution of these provisions in the final version of the Act leaves it inadequate to address the increasing usage of behavioural data or to effectively compensate Data Principals for harms which may be caused to them through such usage. Finally, I will conclude by arguing that the impairment of individual autonomy must be a regulatory priority and must be upheld through data protection regulation.

I. Tracing the journey of behavioural data regulation

The previous draft versions of the Act were far more effective in potentially regulating the usage of behavioural data and use cases such as targeted advertising. This section will trace the evolution of these provisions throughout the drafts of the Act from 2018 to 2022.

a. The PDP Bill, 2018

The first draft of the Act in 2018 contained three crucial components – firstly, Section 2(2) extended the applicability of the Bill to ‘profiling’ of data principals within the territory of India, with ‘profiling’ defined as ‘any form of processing of personal data that analyses or predicts aspects concerning the behaviour, attributes or interest of a data principal’. Secondly, Section 3(29) included ‘any characteristic, trait, attribute or any other feature of the identity of such natural person’ within the scope of what constitutes ‘personal data’, which would bring behavioural data within this scope. Thirdly, the definition of ‘harm’ under Section 3(21) included ‘mental injury’, ‘discriminatory treatment’, and ‘any observation or surveillance that is not reasonably expected by the data principal’.

The combination of these three provisions, had they made it into the Act, would have been a positive step towards addressing behavioural data usage and potentially guarding against its discriminatory effects when user attributes are used to train advertising algorithms. A platform like TikTok Lite would be squarely covered within the scope of the Act, and addictive behaviour guarded against: the Act would apply to such a platform, the platform would be considered to be processing personal data, and it would be liable for harms caused by biased algorithmic recommendation systems or by the potentially addictive behaviour it encourages.

b. The PDP Bill, 2019

The 2019 draft of the Act contained the same three components as the 2018 draft. In addition, it added to the definition of ‘personal data’ ‘any inference drawn from such data for the purpose of profiling’. This extension of the definition would have gone even further to squarely cover not only behavioural data, but also the patterns inferred at scale for big data analytics – what Professor Helen Nissenbaum argues is at the core of the information asymmetry between individuals and the companies processing their data. Combined, these provisions would have had the same effect on a platform like TikTok Lite as the 2018 draft, had such a platform operated in India.

c. Recommendations from the Joint Parliamentary Committee

In addition to the provisions from the 2018 and 2019 drafts, the Joint Parliamentary Committee, which was constituted to review the Bill, proposed further modifications. In the version of the draft it suggested, the Committee (i) proposed extending the applicability of the Act to non-personal data, including anonymised data, (ii) endorsed the extended definition of personal data under the 2019 draft, and (iii) importantly, proposed that the definition of ‘harm’ include psychological manipulation which impairs the autonomy of an individual, drawing from Recital 75 of the GDPR.

The extension of the Act to anonymised data could have been a key regulatory avenue through which to specifically address the issue of algorithmic bias arising from the usage of behavioural data, since these data points, once aggregated, might be considered anonymised, as they would not directly identify the individuals involved. Nevertheless, this anonymised version of the data is key to operationalising behavioural advertising as a model.

The extension of ‘harm’ to psychological manipulation would have squarely addressed harms such as the encouragement of addictive behaviour through platforms, positioned India as a jurisdiction which acknowledges the current realities of online advertising models, and echoed the concerns that regulators such as the Commission are raising.

d. Dilution of previous provisions in the DPDP Bill, 2022

The final draft before the Act (i.e., the 2022 draft) came with a significant dilution of all of the above-discussed provisions which could have targeted online behavioural advertising and manipulation. The applicability of the 2022 draft extended to the profiling of individuals and included the same definition of profiling as in the previous drafts. However, this applicability provision does not have teeth, due to the dilution of the definition of personal data to ‘any data about an individual who is identifiable by or in relation to such data’. The absence of an explicit inclusion of characteristics, traits, attributes, or inferences leaves ample leeway to remove behavioural data from the ambit of personal data, since it can be argued that behavioural data does not meet the threshold of ‘identifiability’ as clearly as a data point such as an individual’s name.

Further, the definition of harm was also diluted and restricted to bodily harm, distortion or theft of identity, harassment, or prevention of lawful gain or causation of significant loss. The definitions of ‘gain’ and ‘loss’ are purely limited to the supply of services or remuneration – this commercialised paradigm is outdated in its conceptualisation of privacy-based harms, which have moved beyond commercial harm to the impairment of autonomy and individual decision-making. Moreover, many privacy harms may not be strictly quantifiable, reducing the avenues for Data Principals to bring action before the relevant authorities. Finally, the harms caused by the misuse of behavioural data are most often not ‘bodily’ harms, leaving the issue of mental harms entirely unaddressed.

II. Further dilution of provisions in the DPDPA, 2023

The culmination of all the above drafts came with the enactment of the Act in 2023. In relation to behavioural data, the Act as it stands does away with all the provisions from the previous drafts, including those in 2022. The applicability of the Act no longer extends to the profiling of Indian Data Principals, and extends only to ‘any activity related to offering goods or services to Data Principals within the territory of India’. While profiling might fit within this definition, the absence of the term provides an avenue to argue that profiling is not related to offering goods and services and is more of an ancillary activity.

The Act in its final form does not include a definition of profiling, thereby taking behavioural monitoring and tracking out of its scope, since these are largely agreed to fall within the ambit of profiling. While consent must still be obtained from Data Principals, the additional layer of accountability required for behavioural tracking, beyond merely obtaining consent, is missing.

Finally, and perhaps most telling of the extent of dilution, the definition of ‘harm’ is entirely absent from the Act. Data Principals will therefore only be able to bring action when there is a commercial loss or an interruption in the supply of services – neither of which is the form of harm caused by the usage of behavioural data for targeting, profiling, or algorithmic recommendation systems, and both of which are, for most privacy-related issues, incredibly difficult to prove before an authority.

The effect of all these changes leaves Data Principals in India vulnerable to the adverse effects of the usage of their behavioural data and exacerbates the lack of transparency around how these models work. Further, it leaves the current state of data protection law in India outdated, even as other jurisdictions move beyond consent-based data collection to specifically address newer forms of technology causing privacy-based harms. An example of this, as mentioned above, is the EU’s move to specifically regulate the usage of behavioural data for algorithmic recommender systems through the Digital Services Act, as well as to address market concentration and tech monopolies which limit the choice available to users across the internet. These laws have been brought in to supplement the GDPR while acknowledging the newer ways in which data usage can harm users.

Conclusion: the dire need to protect individual autonomy online

It is important to acknowledge that data protection regulation is one among several methods of regulating behavioural advertising. Regulatory pluralism has been suggested as a way of addressing privacy-related harms which may have multiple facets – for instance, strong consumer protection laws are crucial to prevent deceptive design. However, I argue that data protection law, which in India is a direct result of the Supreme Court’s ruling on a fundamental right to privacy, must provide a strong regulatory foundation.

Regulators must acknowledge that privacy-based harms and data usage have moved beyond a notice-and-consent framework. In the Supreme Court’s recent ruling in Association for Democratic Reforms v. Union of India (more commonly referred to as the Electoral Bonds case), the majority opinion acknowledged that the right to privacy must necessarily include the protection of an individual’s autonomy to think and develop thoughts freely. In the context of political surveillance, the Court observed that algorithmic capabilities have made it possible to track an individual’s activity, such as their purchases and other behaviour, to reveal their political affiliation, among other information.

Beyond external surveillance, the journalist Kyle Chayka (author of the book Filterworld) describes the phenomenon of ‘algorithmic anxiety’, the result of an asymmetrical relationship between users and algorithms driven by their own behavioural data – leaving users with no choice but to change their behaviour and the way they conduct themselves online in order to participate. Users are also prompted to engage only with content and ideas delivered to them through patterns they are not aware of. India’s data protection framework must account for this phenomenon and address it through regulation which moves beyond a purely consent-focused approach and instead addresses the adverse ways in which data can be used after consent is obtained. Aside from the issue of behavioural tracking, the dilution of the scope of the Act, and of the harms for which Data Principals can be compensated, is a net negative for the right to privacy.


*Sriya is an academic at a law school in Chennai, and is currently pursuing her LLM (Master of Laws) in Innovation, Technology and the Law from the University of Edinburgh. Her research interests include data protection and privacy legal theory and compliance, examining regulation and innovation as co-dependent processes, the dynamics between information, power, and society, as well as legal education and pedagogy. She also consults on matters relating to technology and data protection law, regulatory compliance and policy advisory.