Legislation and Government Policy

Countering non-consensual deepfakes: proposing a legal solution


Saumya Ranjan Dixit*


This article examines the legal inadequacies in India in addressing non-consensual deepfakes that cause non-economic harm to ordinary individuals. Current laws, including the IT Act, the DPDPA and the IPC, focus on explicit harm or offences, leaving victims of non-consensual deepfakes without effective remedies. The article argues for a broader interpretation of Section 66C of the IT Act, suggesting that a person’s facial and vocal characteristics be treated as “unique identification features”, thus enabling legal action against the unauthorized creation and dissemination of deepfakes even in the absence of explicit harm.

I.        Introduction

Artificial Intelligence (AI) generated deepfakes are not only disrupting people’s social identities but also deceiving and terrorising them. Deepfakes are being used for numerous crimes such as virtual forgery, hate speech, fake pornographic videos and AI voice fraud. This shows that deepfakes victimize not only celebrities but also ordinary civilians. However, the law is not well equipped to provide substantial remedies to ordinary people. Even the remedies granted in cases involving celebrities like Anil Kapoor and Amitabh Bachchan cannot be availed by ordinary people. These celebrities succeeded in obtaining injunctions against the unauthorized use of their personality rights and preserved their individual persona by invoking the right to publicity. Ordinary victims of deepfakes, however, cannot rely on this right because it requires proof of likeness, identifiability and commercial gain by the perpetrators, which may be non-existent in their cases.

Therefore, this article considers a scenario where a person’s face is copied and pasted onto another person in a video or an image in which the latter is engaged in something inoffensive. In this scenario, deepfake content is created by manipulating the image of the former person, which can cause annoyance, vexation or some other inconvenience to that person. Assuming the affected person wants damages from the creator of the deepfake, which legal provisions can the person invoke to successfully prosecute the creator through a convenient legal route? It is to be noted here that the deepfake content is not offensive, and the intention of the creator is not to cause any wrongful loss or make any wrongful gain but mere self-gratification by causing the affected person inconvenience or annoyance that is non-economic in nature.

In such a scenario, can the victim avail any remedy seeking damages solely on the ground that the image or video was manipulated and published without consent, even though there is no other explicit harm? It is argued that, irrespective of any offences ensuing from such publication, the very act of generating and publishing it without consent is illegal. In this light, Part II of the article elucidates the legal vacuum prevailing in India in the aforementioned scenario. Part III explains exactly what kind of injury, both moral and legal, a person suffers from the creation and publication of a non-consensual deepfake per se. Lastly, Part IV proposes a wider interpretation of Section 66C of the Information Technology Act, 2000 (IT Act) to help ordinary people seek damages in a hassle-free manner in the aforementioned scenario.

II.         Perusing the Legal Vacuum in India

In India, the aforementioned scenario can generally be dealt with under the provisions of the IT Act, the Digital Personal Data Protection Act, 2023 (DPDPA), the Indian Penal Code, 1860 (IPC) and the right to privacy. However, these provisions are insufficient to provide an effective remedy in this scenario for the following reasons.

Section 66D of the IT Act punishes cheating any person by personation using any computer resource or communication device. Section 66E punishes publishing or transmitting any “image of a private area of any person without his or her consent”. Here, “private area” is defined as the area showing the genitals or the breast area. Sections 67, 67A and 67B punish the publication of any material which is obscene or contains sexually explicit content. These provisions cannot be invoked in the aforementioned scenario, as the deepfake material contains nothing obscene, no private part is exposed and no cheating is intended by its publication.

Turning to the DPDPA, the preamble of the Act speaks of protecting the “digital personal data” of a person and ensuring the lawful processing of such data for lawful purposes. Section 2(t) of the Act defines “personal data” as any data by which an individual can be identified. However, under Section 3(c)(ii) of the DPDPA, the Act does not apply to personal data that is made publicly available. This creates a problem because people use social media to make themselves publicly known by sharing their images and videos. Consequently, such shared data no longer remain personal and cannot be afforded protection under the Act, as is evident from the illustration appended to Section 3 of the Act itself.

The term “personal data” could also be interpreted to include the deepfake generated from the publicly available data, since an individual can be identified through that deepfake. Hence, it can be argued that if the deepfake is itself “personal data”, the protections of the DPDPA can be made applicable to it. There seems to be no prima facie lacuna in such an argument; however, treating non-consensual deepfakes as “personal data” qualifies them for the exception under Section 3(c)(i) of the DPDPA, which exempts “personal data processed by an individual for any personal or domestic purpose”. This exemption is problematic for two reasons. Firstly, there is no definition of “personal or domestic purpose” under the DPDPA, so any future interpretation of this phrase made with the actual owner of the publicly available personal data in mind could still provide an unfair advantage to deepfake creators. Secondly, there is no reason why non-consensual deepfakes should be allowed even for the domestic or personal use of their creators. On the internet, there is always a possibility that a deepfake used for a domestic purpose crosses domestic boundaries and enters public spaces, which could be detrimental to the victim. It is therefore desirable to completely prohibit the generation of non-consensual deepfakes from the very beginning, which would eventually close every gap for their dissemination. Hence, it is better not to rest on an interpretation of deepfakes as “personal data” under the DPDPA.

Similarly, the IPC provisions (now the Bharatiya Nyaya Sanhita, 2023) punishing defamation, forgery, criminal intimidation, etc. can be used against the effects of deepfakes. However, all these provisions require proof of some actual harm ensuing from the publication of the deepfake, such as loss of reputation, monetary loss or a threat to life or property. In the above scenario, there is neither an intention to commit any specific offence nor has any specific offence ensued from the publication of the deepfake content. Therefore, the inconvenience caused to the affected person by the non-consensual tampering with their image cannot be effectively prosecuted under the provisions of the IPC.

Moreover, relying on the right to privacy is not easy in the above scenario, because pasting a person’s face into deepfake material does not amount to an infringement of privacy if the source was made publicly available. Further, social media giants have contended that there cannot be a reasonable expectation of privacy on social media. This claim is also difficult to maintain in cases of deepfake pornography, because merely the face of another person is pasted while the other intimate details are not true of that person. Privacy is infringed only when true private information that a person intends to keep secret is divulged without consent. Hence, when the content of the deepfake is untrue of the person whose characteristics are affixed, there cannot be any infringement of privacy. In the above scenario, the details other than the person’s face are not true of the person concerned, so there is nothing private that needs to be protected. Similarly, if the facial image of the person is taken from a publicly available source, there is no infringement of privacy.

Hence, a new weapon should be added to the legal armoury to fill this vacuum, as discussed in the following part.

III.         Moral and Legal Injury Ensuing from Non-Consensual Deepfakes

The prevailing legal framework fails to address the above scenario because the laws focus on the harm or annoyance ensuing from the publication of deepfakes and not on the malicious publication itself. But before moving on to the solution, it is pertinent to understand why the generation and distribution of deepfakes are in themselves inherently problematic. It needs to be identified what kind of moral and legal injury a person suffers when a deepfake is created and circulated using the facial characteristics of that person, irrespective of any specific offence like defamation or forgery ensuing from such publication.

With respect to moral injury, Professor Adrienne de Ruiter opines that deepfake technology can be seen as morally suspect, if not intrinsically morally wrong. She states that deepfake technology is not inherently morally wrong because the technology itself is not harmful; rather, its secondary effects resulting in fakeries are undesirable. The underlying technology can also be used for beneficial purposes, like recreating a deceased artist in films or the voice of famous actors who are no more, which is welcome. However, deepfake technology is morally suspect because it is utilized to deceive the public in a manner that violates fundamental moral norms.

Professor de Ruiter tests deepfake technology against four fundamental moral frameworks, namely deontology, consequentialism, virtue ethics and care ethics. For our discussion, however, the test on the anvil of deontology is sufficient. The deontological approach is drawn from Kant’s philosophy, which holds that “we should never treat people merely as a means, but always as ends in themselves”. This signifies that people should be treated the way they wish and should retain agency over the manner in which they are treated. It is undesirable to achieve one’s goals by using people as instruments, disregarding their agency over themselves. This moral principle is directly violated by the very use of deepfakes without consent, as it pursues only the desires of the creator while obliterating the will of the person whose image or video is manipulated and displayed in the published material.

With respect to the legal injury, Diakopoulos and Johnson state that non-consensual deepfakes can cause a harm of misattribution which is independent of reputational harm, as the former violates the ownership rights of the subjects without their consent. They introduce the concept of “persona plagiarism”, an inversion of plagiarism: plagiarism involves taking positive credit for someone else’s work, whereas in “persona plagiarism” negative credit is falsely given to someone else. They state that even where there is no reputational harm, such “persona plagiarism” causes a harm of misattribution by disregarding the consent of the victim. Therefore, Professor de Ruiter rightly states: “What needs to be protected is therefore not our image and voice as such, but these digital representations, which function as identity markers for our digital persona”.

This analysis shows that the legal injury suffered by people stems from the non-consensual nature of a deepfake, which infringes their agency and control over the manner in which they want to present themselves. Although there is no legal protection over personal data voluntarily made publicly available, the dissemination of deepfakes made out of it vitiates the consent of the person and presents the person’s characteristics in a manner for which there was no consent. So, in the above scenario, the legal injury is the misattribution caused by the non-consensual representation of the person through a deepfake.

It is to be noted that the author does not propose a blanket ban on deepfakes per se, owing to their beneficial uses in different fields. It is the generation and dissemination of non-consensual deepfakes which must be completely blocked.

IV.         Tapping the Potential of Section 66C of the IT Act: Recommendation and Concluding Remarks

Having established the kind of legal injury inflicted by deepfakes, it is necessary to carve out a legal solution to cure the legal vacuum. It is proposed that a wider interpretation of Section 66C of the IT Act could meet the need of the hour. Section 66C of the IT Act punishes identity theft, which involves the dishonest or fraudulent use of passwords, electronic signatures or any person’s “unique identification feature”. Identity theft means stealing another person’s information that represents the person’s identity, like a full name, password or social security number. It generally involves fraudulent digital and financial transactions like using someone’s credit card fraudulently, phishing and ATM skimming, and such digital information is generally considered a “unique identification feature” of a person. Section 66C criminalises the very act of using the information dishonestly or fraudulently, which is beneficial as it does not require any further offence to ensue from such use of the information. It is also to be noted that the term “unique identification feature” is read ejusdem generis with the terms “electronic signature” and “password”. It is precisely for this reason that Section 66C cannot be used in its current form to remedy the injury inflicted by deepfakes, owing to the narrow scope of the term “unique identification feature”.

The term “unique identification feature” literally means a feature that is distinctive, unequalled and characteristic. So, the entire phrase “any other unique identification feature” implies a vast and comprehensive scope. It requires no great science to understand that every human face is unique, variable and identifiable. Every person’s face is distinct, unequalled and has a unique character of its own, so it can very well fit into the scope of the term. Moreover, a person’s face is used as a marker of that person’s identity. Additionally, the function carried out by a “password” can also be performed by a person’s face, as is evident from many day-to-day uses of electronic devices and apps. This strengthens the reasoning that, in its functional aspect, a human face performs in the same manner as any other “unique identification feature” such as a “password”, so a person’s face can be conveniently read into the scope of the term.

After reading the actus reus of non-consensual deepfakes into Section 66C, the mens rea element needs to be established. The term “fraudulently” in Section 66C is defined in Section 25 of the IPC as doing a thing with the intention to “defraud”. The Supreme Court of India in State of A.P. & Anr. v. T. Suryachandra Rao expounded that “defraud” involves two elements: deceit and injury. The Court further held that the element of “injury” may be non-economic in nature, including any harm caused to the body, mind, reputation or the like. This implies that even in the absence of an intention to cause any pecuniary loss, the intention to cause injury through annoyance or inconvenience to body or mind can fall within the term “fraudulently”.

Such an interpretation of Section 66C, especially of the terms “unique identification feature” and “fraudulently”, can provide an effective remedy in the scenario discussed initially. In that scenario, firstly, the non-consensual use of a person’s face to create deepfake content fulfils the actus reus element of using one’s unique identification feature, which in this case is the person’s face. Secondly, such generation with the intention of causing annoyance to the person can be considered a fraudulent intention, as it falls under the “injury” element described above. Hence, a slightly wider interpretation of “unique identification feature” in Section 66C can provide a remedy against the mere publication of non-consensual deepfakes, irrespective of any specific offence ensuing from it. Moreover, this proposal can be utilised when a person’s vocal characteristics are involved in deepfakes, on the similar reasoning that one’s voice is as unique as one’s DNA.


*Saumya is a 4th year BBA LLB student at National Law University Odisha.