
Anay Mehrotra and Samriddh Sharma *
This article examines the challenges AI poses within India’s legal framework, particularly under the Consumer Protection Act (CPA) 2019. It analyses how traditional product liability laws are inadequate for AI-related damages due to AI’s autonomous decision-making. The article explores whether AI should be classified as a product or a service, highlighting the lack of legal clarity in India. It also evaluates international approaches, such as those in the EU and the U.S., which distinguish between manufacturing defects and design flaws in AI systems. The article argues for a revised legal framework in India that incorporates concepts like digital harm and clear liability allocation. It suggests adopting a risk-utility test for AI defects and proposes a joint and several liability approach to ensure fair compensation for victims. The conclusion emphasises the urgent need for AI-specific legislation, as current laws fail to provide adequate consumer protection.
Introduction
Before the enactment of the Consumer Protection Act, 2019 [‘CPA’], India lacked a specific framework for product liability. However, landmark cases such as Anand Kumar Bansal, where “manufacturing defect” was defined using Black’s Law Dictionary, and C.N. Anantharam, where the Supreme Court awarded compensation for inherent defects, significantly shaped product liability law in the country. These judicial decisions were guided by the principles of equity, justice and good conscience.
The advent of new technologies, particularly Artificial Intelligence [‘AI’], and the expansion of e-commerce have introduced substantial challenges in the realm of product liability. AI, which automates tasks that typically require human-like intelligence, complicates liability determination due to its autonomous decision-making capabilities.
To address these emerging issues, NITI Aayog proposed principles for ‘Responsible AI’ development in a 2021 report, advocating for AI regulation. The Ministry of Electronics and Information Technology [‘MeitY’] has also suggested that AI may be regulated like other emerging technologies, although it downplays the immediate threat posed by AI.
In light of these developments, it is clear that consumers face significant challenges regarding product safety and quality, particularly in cross-border trade. Given the lack of clarity on AI regulation, the CPA needs to be adapted alongside other laws, such as the Indian Contract Act and the Sale of Goods Act, to effectively address AI-related product liability. This article proposes the necessary changes to the current product liability regime in India.
Section I will explore how India’s liability rules can be adapted for AI. Section II will propose a comprehensive framework for assigning liability in AI-related cases. Finally, Section III will conclude the article.
I. Adapting India’s liability rules for AI
“Product liability” refers to the accountability attributed to the manufacturer or seller when a product is defective. The core rationale is that if a consumer sustains injury due to a defective product, the responsibility lies with the manufacturer, service provider, or seller. Establishing product liability requires addressing three key components: first, that the item is a product; second, the occurrence of harm; and third, a defect in the product. These conditions must be met, along with a consideration of causation and the burden of proof. As there is no dedicated framework for AI in India, I will critique the CPA and suggest ways to incorporate AI liability into the Indian framework.
A. Assessing the Classification of AI: Should AI Be Treated as a Product or Service Under the CPA?
i. Is AI a product under Indian law?
The CPA under §2(33) broadly defines a product as “[…] or any extended cycle of such product, which may be in gaseous, liquid, or solid state possessing […]”. While this definition is comprehensive, it does not account for AI.
Some may argue that the phrase “any extended cycle of such product” could implicitly include AI systems, especially when these systems are integral to the functioning of tangible products. The inclusion of “gaseous, liquid, or solid state” might be interpreted expansively to cover digital entities like AI systems, akin to how electricity is classified as a product in the E.U. These digital products, while not consumed in the conventional manner, are frequently used and provide significant societal benefits, suggesting their inclusion under the product liability framework. However, this interpretation remains speculative and lacks a clear legal basis. Currently, no Indian case law explicitly addresses the classification of AI systems as a product under the CPA or other statutes. Similarly, courts in other jurisdictions have also not conclusively classified standalone AI systems as products.
Internationally, the classification of “thinking algorithms” as products has been debated. Some courts have viewed these systems as analogous to professional services, which are not typically classified as products. In certain instances, product liability laws have been invoked when errors in AI algorithms have led to damage, especially when these errors were linked to a physical object. Courts have considered factors like mass production and the potential danger posed by these products when they are embedded in physical objects that can cause significant harm if they malfunction. Internationally, therefore, the prevailing position is that AI embedded in a tangible, physical form is treated as a product.
One issue with interpreting AI as a product arises when considering whether AI that is not stored in tangible hardware, essentially standalone software, can be classified as a product. AI is increasingly deployed in intangible forms, such as cloud-based services or software-as-a-service (SaaS) models. While AI ultimately relies on hardware for storage, processing, and execution, the separation of software from specific physical devices challenges traditional definitions of a “product.” For instance, cloud-based AI systems operate on distributed infrastructure, where users access AI functionality remotely without owning or interacting with specific hardware. Additionally, AI software is often licensed rather than sold, meaning there is no transfer of tangible property, further complicating its categorization.
In Europe, Art. 2(3) of Directive 2019/771 on the sale of goods and Art. 2(3) of Directive 2019/770 on digital content and services classify AI systems as “products,” but they do not treat AI software itself as a product. The distinction between AI systems and AI software lies in their structure and functionality. AI systems integrate hardware and software components, making them tangible and easily classified as “products.” In contrast, AI software, being intangible and hardware-independent, is often treated as a service, falling outside traditional product liability frameworks.
Different Member States have interpreted and applied the concept of software in varying ways within their national implementations of the “Product Liability Directive” [‘PLD’]. In India, too, AI could be brought within the framework by treating it as movable property and, therefore, as a “good” under the Sale of Goods Act [‘SOGA’].
Considering the different approaches taken by countries and courts, AI in both its tangible and intangible forms should be treated as a product. To incorporate AI as a “product” under the CPA, the definition of “product” should be amended to explicitly include intangible digital entities like AI software, alongside tangible goods. This could include language recognizing “products in any form, including digital content or services embedded in physical or virtual environments.” Alternatively, courts could adopt a progressive interpretation, classifying AI software as a “good” based on its functional dependence on physical infrastructure, such as hardware. While judicial interpretation provides a short-term solution, legislative amendment would ensure clarity, consistency, and adaptability to technological advancements.
ii. Arguments against treating AI as a service
The CPA under §85 describes a “product service provider” as an entity that offers services related to a product. Courts often apply a negligence standard to determine whether such entities are responsible for any harm. When it comes to technologies like AI, however, responsibility usually lies with the creator or developer, which aligns more closely with product liability rules, whereas negligence typically targets the user, whether that is the creator itself or another company deploying the technology. While negligence standards are typically applied to service providers, such as doctors or engineers, the dynamic nature of AI complicates this application. Unlike traditional services, AI often operates autonomously or semi-autonomously, reducing the control that service providers (users) have over its actions. For example, a user relying on AI for predictive analytics might inadvertently breach the standard of care by trusting the system’s outputs without fully understanding the underlying risks, even if the AI itself was flawed due to a developer’s error. Therefore, if AI is defined as a service, the user would bear more responsibility for any harm caused by the technology. There are four key issues with using the negligence standard.
First, users often do not fully understand the potential problems or failures, making it hard to set a fair standard of care. This is especially true for predictive AI applications, where even experts might struggle to foresee how the system could misinterpret data. Second, the expected standard of care in human-AI interactions might be set too high, requiring constant monitoring for mistakes, which defeats the purpose of using technology to reduce human work. Third, there is uncertainty about whether users should protect the AI from manipulation, especially when they have limited control over the system. Negligence standards assume users can oversee and mitigate risks, but given AI’s complexity, this shifts an unreasonable burden onto the user rather than addressing the systemic issues created by developers or manufacturers. Lastly, AI might complicate negligence claims related to discrimination, as the technology could improve overall results while doing so inconsistently across different groups, making it difficult to assign responsibility for discriminatory outcomes. Therefore, AI should be seen as a product and not merely as a service.
B. Defining AI Harm under Indian Law
“Harm” under §3 of the CPA may be understood as any damage to property, physical injury, or mental pain. However, this definition predominantly addresses physical injury caused by a product and does not extend to intangible harms, leading to ambiguity in certain cases. This is particularly concerning because AI systems can cause both physical and non-physical damage.
For instance, an autonomous car inflicting physical injury would clearly fall within traditional harm, whereas an AI tool that disrupts a company’s logistical data would not fall under this definition. Similarly, AI applications vulnerable to cyberattacks, resulting in financial and privacy breaches, also fall outside the current scope of “harm.”
Globally, courts have started addressing this issue. For instance, in the European Union, proposed updates to the PLD seek to include digital damages such as data breaches caused by defective digital products, including AI systems. In the US case of Patchett v. O-Two Medical Technologies, the court recognized harm caused by software malfunctions, highlighting that digital systems could inflict real-world consequences. Similarly, in Oberdorf v. Amazon, a US court discussed the liability of online platforms for defective products, indirectly addressing harm arising within digital ecosystems.
Therefore, to ensure comprehensive consumer protection in the rapidly evolving technological landscape, it is essential for the CPA to expand its definition of “harm” to include “digital and data-related damages.”
C. AI defects as design defects
§2(10) of the CPA defines a “defect” as “any fault, imperfection, or shortcoming in the quality, quantity, potency, purity, or standard […]”. The Act further stipulates that a product manufacturer can be held liable in a product liability action under §84 if the product contains “a manufacturing defect, is defective in design, deviates from manufacturing specifications […]”. SOGA defines the quality of goods as their state or condition. From these provisions it can be inferred that, under Indian law, defects may be either manufacturing or design defects, and the standard applicable to both is the same.
AI systems, designed using machine learning techniques, rely on algorithms and training data. If these elements deviate from the intended specifications, a manufacturing defect could occur. Manufacturing defects usually arise from quality control or problems in the assembly of components. These malfunctions are usually related to the physical aspects of the product and are therefore relatively less common in AI systems.
On the other hand, design flaws are more commonly found in the code of AI systems. These flaws are more likely to originate from choices made during the AI system’s design and development stages, rather than from the manufacturing process itself. Therefore, the design defect standard is the appropriate one to apply when assessing defects in AI.
Internationally, two major approaches are used to assess defects. In the EU, the test of consumer expectation assesses whether a product aligns with the safety standards expected by the public. Liability may be waived under the “development risk defense” if the defect was unavoidable given the scientific knowledge at the time, with a focus on whether safer alternatives were possible. Conversely, the U.S. applies strict liability for manufacturing defects, treating any unintended flaws as defects, while design defects are assessed using the risk-utility test, a negligence-based approach. This test considers whether a product could have mitigated foreseeable risks through reasonable alternative designs, weighing factors such as the product’s utility and safety.
Indian law should take a similar approach to U.S. product liability law by distinguishing between different types of defects and adjusting the liability standards accordingly, based on the level of control the manufacturer and user have in preventing accidents. This differentiation is necessary because the nature and origin of defects vary, requiring tailored standards to ensure fairness and efficiency. Manufacturing defects are unintended errors that occur during production and are entirely within the manufacturer’s control, justifying strict liability. In contrast, design defects result from deliberate choices during development, making negligence-based approaches like the risk-utility test more suitable for assessing responsibility.
AI systems depend on intricate algorithms, with factors like training data, model structure, and decision-making protocols being key in shaping their performance. Flaws in these areas can lead to substantial defects, often resulting from choices like biased training data or imperfect algorithms.
Thus, using varied standards, particularly the U.S. “risk-utility test” for design defects, aligns with the “least cost avoider” rule, ensuring that responsibility is placed on the party most capable of minimizing risks. By tailoring liability standards to specific defect types, whether manufacturing, design, or information-related, Indian law can allocate liability more effectively.
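To make this balancing concrete, law and economics scholarship often illustrates the risk-utility test with the Learned Hand formula; the notation below is offered only as an illustrative sketch and is not drawn from the CPA or the instruments discussed above. On this view, a design choice is defective when

B < P × L

where B is the burden (cost) of adopting a reasonable alternative design, P is the probability that the unmodified design causes harm, and L is the magnitude of the resulting loss. The party able to satisfy this condition at the lowest cost is precisely the “least cost avoider” on whom the rule above seeks to place responsibility.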
D. Challenges in Discharging the Burden of Proof and Establishing Causation in AI-Related Product Liability Cases
Even if AI systems meet the product liability standards under the CPA, the law requires victims to prove a causal link between the defect and the harm suffered. Additionally, the complexity and intangible nature of AI often create overwhelming evidentiary challenges for those who have been harmed.
Expert groups in Europe have recognised this information asymmetry. To ease the burden of proof, they have introduced measures such as requiring manufacturers to disclose relevant evidence and shifting the cost of expert opinions onto the defendant. Indian courts have hinted at a similar approach.
To address these issues in India, I suggest revising the product liability standards by adopting an approach similar to the European Union’s PLD. The PLD introduces a rebuttable presumption of causation, acknowledging the challenges in proving defects in AI systems. This presumption is designed to be fair, ensuring it does not impose excessive liability, and its reasoning resembles the doctrine of res ipsa loquitur. Combined with AI’s ability to log and record events, this approach would give victims better access to crucial evidence, making it easier to establish a direct link between the AI’s actions and the resulting harm. Therefore, the Indian government should adopt these changes.
II. Liability in AI-Related Cases: Addressing Accountability and Proposing a Comprehensive Framework
A. Assigning Liability in AI-Related Product Liability Cases
Where a consumer has successfully established the elements of product liability, the onus is on the court to determine the liability of the producer. Because numerous stakeholders participate throughout an AI system’s lifecycle, from development to deployment, it becomes difficult to pinpoint the primary responsible party. Identifying the key liable entity requires clear legislative direction on which parties will bear this responsibility.
In my opinion, liability should be attributed to the party that benefits the most from the AI system’s operation and has the technical capacity to control and minimize risks at the lowest cost. This recommendation is in line with Prof. Karampatzos’s economic analysis of law, in which he asserts that the principle of minimizing risk at the lowest cost, along with information asymmetry, places liability on the party with the most knowledge. Similarly, Prof. Calabresi suggests that when responsibility is uncertain, the loss should be assigned to the party or activity best equipped to address the issue, thereby encouraging the entity most capable of reducing social costs to take action.
In most cases, the manufacturer of products or digital content involving emerging technologies should be held liable for damages caused by defects, particularly if they maintain control over the product after its release. As the party best situated to assess and mitigate risks, the manufacturer is often considered the “cheapest cost avoider.” However, liability may not reside solely with the manufacturer; other parties, such as AI system owners, developers, engineers, or system operators, might also share this burden. Accurately determining each party’s contribution to a malfunction can be a significant challenge.
To address these uncertainties, I suggest two possible liability frameworks: individual liability and collective liability. These can be implemented either as strict liability, akin to the responsibility placed on animal owners, or based on fault linked to a duty of care.
B. Addressing AI Liability: A Case for Joint and Several Liability
One suggested approach is to focus liability on a single individual or entity, allowing them to pursue compensation from third parties. From the consumer’s perspective, it is reasonable to hold the company that sells the product accountable for any defects, even if those defects are caused by a supplier rather than the company itself. This approach allows the company to pursue claims against the supplier based on their contractual agreement. In the case of automated vehicles, it might be fitting to hold the driver accountable, with their insurer handling claims from third parties. The insurer could then pursue compensation from other responsible parties. This approach seems fair, particularly given the varying levels of vehicle automation, as it simplifies the process for third parties to file claims by directing them to a single insurer rather than requiring them to identify all possible liable parties.
However, complications arise if the liable party, particularly one without insurance, cannot bear the financial burden of compensation. In such cases, joint and several liability might offer a more effective alternative. Under this framework, the responsibility for compensating the victim is shared among multiple parties. This approach is supported by the Principles of European Tort Law, which allow victims to sue any party within the commercial chain, increasing the likelihood of identifying multiple defendants. It has been successfully applied in cases involving major corporations responsible for health or environmental crises, and it could be argued that damages resulting from AI systems may be equally complex.
In certain cases, it may be justifiable to allocate only a portion of the damages to each responsible party, allowing for several liability. For instance, if an AI system operator is held strictly liable, they should have the right to seek compensation from other parties, such as the producer, if the wrongful conduct can be linked to them. However, the complexity of AI systems complicates both obtaining compensation for the victim and determining the specific responsibilities of the parties involved.
Thus, holding all involved parties jointly and severally liable appears to be a just, sensible, and efficient approach. This approach enhances consumer protection by broadening the scope of potentially liable parties and achieving greater harmonization in the allocation of responsibilities within the supply chain. As highlighted in the White Paper on Artificial Intelligence, establishing trust in the market requires resolving the uncertainty around the distribution of responsibilities among different economic actors within the supply chain.
III. Conclusion
AI poses significant challenges to legally recognized rights due to its widespread use, despite offering substantial benefits. It is crucial to develop a strong legal framework to ensure that victims of defective AI systems receive fair compensation. They should be afforded the same level of protection as those affected by traditional products; otherwise, public confidence in AI technologies could be undermined. Companies need to proactively identify and address foreseeable risks associated with AI to meet consumer expectations and reduce liability.
However, the current legal framework, particularly the CPA, is inadequate for addressing AI-related product liability in India. A review by NITI Aayog highlights the need for significant revisions to effectively incorporate AI into existing legal structures. Key concepts such as ‘product,’ ‘producer,’ ‘defect,’ and ‘harm’ require precise definitions, which should ultimately be enshrined in new legislation. As primary beneficiaries of AI, manufacturers bear the responsibility to ensure that AI systems do not cause harm. Until comprehensive legislative measures are implemented, case law will be instrumental in addressing these issues, particularly as many cases involving defective AI systems are resolved out of court, leaving critical legal questions unanswered.
* Anay Mehrotra is a fourth-year B.A. LL.B. (Hons.) student at the West Bengal National University of Juridical Sciences (NUJS), India. Samriddh Sharma is a fourth-year B.A. LL.B. (Hons.) student at the West Bengal National University of Juridical Sciences (NUJS), India.
