Mustafa Rajkotwala & S.V. Ghopesh

Abstract: This article examines the application of intermediary liability to artificial intelligence under India’s IT Rules, as amended by the Intermediary Guidelines and Digital Media Ethics Code Amendment Rules, 2026. It analyses Section 2(1)(w) of the IT Act as a function-specific definition anchored to activities performed in relation to pre-existing electronic records. While certain AI services may align with this framework, generative systems such as chatbots present interpretive challenges. By extending the intermediary framework to AI-enabled activities, the Amendment Rules raise questions regarding the scope of safe harbour protections. The article adopts a functional approach to distinguish between different modes of AI integration within existing legal structures.
Introduction
The rapid integration of artificial intelligence (“AI”) into digital platforms is reshaping how content is created, disseminated, and consumed. Unlike traditional intermediary models that were built around the passive transmission of third-party information, AI systems now actively generate and transform content, raising new questions about the adequacy of existing regulatory frameworks.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (the “Amendment Rules”; the consolidated IT Rules can be read here) reflect the Government’s attempt to address this shift by introducing a regulatory framework for synthetically generated information (“SGI”). By imposing due diligence obligations on intermediaries offering AI systems capable of producing auditory, visual, and audio-visual outputs, and by mandating labelling requirements for such content, the Amendment Rules seek to extend the intermediary framework to AI-enabled activities.
This article examines the limits of this approach. It argues that the extension of intermediary liability to AI systems rests on an assumption that such systems can be accommodated within the existing statutory framework. However, even under a functional interpretation of Section 2(1)(w) of the Information Technology Act, 2000 (the “IT Act”), only certain forms of AI integration can plausibly satisfy the definition of an intermediary. The Amendment Rules, by contrast, adopt a uniform approach that does not distinguish between different modes of AI deployment.
The consequence is not merely doctrinal. While the Amendment Rules may enable regulatory control over AI-generated content, they risk creating uncertainty in the application of safe harbour protections, particularly in private disputes where intermediary status is directly contested. The issue, therefore, is not whether AI should be regulated, but whether the intermediary framework is the appropriate vehicle for such regulation in its current form.
Section 2(1)(w), Section 79, and the Functional Structure of Intermediary Liability
The exhaustive portion of Section 2(1)(w), that is, the “means” clause as distinct from the inclusive list of named services, provides that an intermediary is any person who, “with respect to any particular electronic records… on behalf of another person, receives, stores or transmits that record or provides any service with respect to that record”.
Two features of this definition are critical. First, intermediary status is not a constant attribute of an entity but arises from the functions it performs in relation to a particular electronic record. This understanding is reflected in the UNCITRAL Model Law on Electronic Commerce (the “Model Law”), on which the IT Act is based. Vasudev Devadasan similarly notes that Section 79 reinforces this structure by limiting safe harbour to “third-party information” dealt with by an intermediary in its capacity as such, and he has observed that the Government has often overlooked this distinction by treating intermediary status as entity-based rather than function-specific. This functional structure is significant: safe harbour operates as a conditional immunity tied to specific activities, not as a blanket protection. It preserves the distinction between intermediaries, who facilitate third-party content, and primary actors, who create or originate it. Once this distinction is diluted, safe harbour risks extending beyond its intended scope.
Second, the definition is double-limbed. The first limb covers entities that receive, store, or transmit records on behalf of another person. The second covers entities that provide “any service” in respect of “that record”. The phrase “that record” is crucial. It presupposes the existence of a determinate record capable of existing independently of the service performed. It links the second limb back to the first, ensuring that services remain tethered to pre-existing third-party content rather than constituting a free-standing category. Judicial interpretation supports this reading. In Snapdeal v. GoDaddy, the Delhi High Court held that “with respect to that record” must be interpreted broadly, recognising services such as domain name brokerage and registration. However, these services remained anchored to pre-existing records. Similarly, in Amazon v. Amway, warehousing and logistical services were treated as consistent with intermediary status, despite their active nature, because they were performed in relation to products listed by third parties.
These cases demonstrate that while intermediary functions may be expansive, they are not unbounded. A structural constraint remains: the existence of a pre-existing electronic record dealt with on behalf of another person. This requirement operates as a threshold for intermediary status and ensures that liability remains tied to the mediation of third-party information. Once this anchor is removed, the distinction between intermediary activity and primary content creation begins to collapse, disrupting the statutory allocation of liability under the IT Act.
Before examining how the Amendment Rules depart from this structure, and arguably perpetuate the same misunderstanding noted by Devadasan, it is necessary to consider the regulatory context in which they emerged.
The Amendment Rules and the Problem of Application
In Structuring Techlaw, Rebecca Crootof and BJ Ard identify two forms of uncertainty in regulating emerging technologies: application uncertainty and normative uncertainty. Application uncertainty concerns whether existing legal frameworks apply to new technologies, while normative uncertainty concerns whether those frameworks, even if applicable, produce desirable outcomes.
The Amendment Rules proceed on the assumption that the intermediary framework applies to AI-enabled services and prescribe obligations within that framework, particularly in the form of due diligence and labelling requirements. This assumption, apart from being implicit in the structure of the Amendment Rules, has been made explicit by the Government. For instance, in submissions before the Upper House, the Government stated that India remains “well-equipped” to address the threats posed by AI deepfakes, and specifically identified provisions relating to intermediaries, including Section 79, as part of that legal arsenal. The Government also emphasised the importance of compliance with its advisories, which treated platforms offering AI services as intermediaries and advised them to adopt certain due diligence measures in relation to AI-generated content in order to retain safe harbour. From the Government’s perspective, therefore, there is no application uncertainty in transposing the intermediary framework to AI-enabled activities.
That said, the Government’s concern appears to have been predominantly with normative uncertainty. Notwithstanding its preference for a relatively laissez-faire approach to AI and its stated belief that the existing framework can promote innovation, the Government also wishes to ensure that providers of deepfake-enabled services comply with specific due diligence obligations. Those obligations could not be adequately imposed through the existing law alone: the pre-Amendment Rules did not explicitly refer to AI-generated content and could not readily be used to enforce measures specifically directed at the harms posed by AI deepfakes, such as labelling requirements intended to counter their apparent authenticity. The advisory route was similarly inadequate, since advisories lack binding force and could not, by themselves, secure the desired regulatory outcomes.
However, this approach assumes away the underlying application uncertainty: the Amendment Rules, as delegated legislation, must remain consistent with the parent Act. The following section therefore considers whether the Amendment’s conception of intermediaries offering AI models can, even on the most liberal construction and with a presumption in favour of validity, be reconciled with the statutory definition of an intermediary.
The “Tool Provider” Analogy and the Scope of “Any Service”
Rule 3(3), inserted by the Amendment Rules, imposes due diligence obligations on intermediaries offering AI models capable of producing SGI. The Rule provides that “[w]here an intermediary offers a computer resource which may enable, permit, or facilitate the creation, generation, modification, alteration, publication, transmission, sharing, or dissemination of information as synthetically generated information”, it must comply with obligations relating to unlawful SGI, labelling, and provenance mechanisms.
This reflects what may be dubbed a “tool provider” analogy. Under this analogy, where an entity is already an intermediary and offers an AI system on its platform, it may bear responsibility in relation to that system. There are, in substance, two possible constructions of this relationship.
Under the first construction, an entity that qualifies as an intermediary in respect of some functions is treated as an intermediary for all platform activities, including AI services. Intermediary status thus operates as a general attribute of the platform. Under the second construction, intermediary status remains function-specific. AI services must independently satisfy Section 2(1)(w) of the IT Act to fall within the framework. On this view, the Amendment Rules do not expand intermediary status, but apply only where the underlying activity already qualifies.
The first construction is doctrinally unsound. As Devadasan notes, intermediary status is function-specific, not entity-based. The concern is now more acute because the misconception is no longer confined to interpretation but is reflected in delegated legislation itself. The question of ultra vires therefore arises directly. Even as lex specialis, rules framed under Section 87(2)(zg) of the IT Act, which empowers the executive to prescribe guidelines under Section 79, cannot operate contrary to the scheme of Section 2(1)(w). To the extent that they do so, they are liable to be held ultra vires.
The second construction is more defensible. Intermediary status depends on functions performed in relation to particular records, and due diligence obligations under Section 79 of the IT Act apply only when acting in that capacity. Courts have also relied on regulatory materials in determining intermediary functions. In Amazon v. Amway, for instance, the Delhi High Court referred to a government press note recognising logistical services by e-marketplaces. A similar interpretive approach may be adopted here.
This analysis proceeds on the second construction. However, this carries an implication: if AI services are to be treated as intermediary functions, they must fall within one of the two limbs of Section 2(1)(w) of the IT Act. The first limb is difficult to reconcile with AI systems, which are characterised by creation and generation rather than the handling of pre-existing records. AI outputs are therefore more closely aligned with originator activity than intermediary conduct.
The second limb is more promising. It allows “any service” to be provided in respect of a pre-existing record. This article advances the claim that AI services may, in limited cases, fall within this limb, provided they remain tethered to such records.
A common objection is that intermediaries must remain neutral conduits and cannot engage in content production. This, however, overlooks the textual distinction between the limbs. The first limb is narrow, while the second permits “any service”. The Model Law, in its definition of an intermediary, uses the phrase “other services”, which the Guide to Enactment explains as covering services in the nature of formatting, translation, and the like (¶ 39). The IT Act not only departs from “other services” but also contains no comparable limiting explanation, suggesting that the breadth of “any service” is deliberate.
The judiciary, too, has not meaningfully examined whether particular services fall outside the scheme of Section 2(1)(w), and it has never limited the kinds of services that may be offered. In Snapdeal v. GoDaddy, the Delhi High Court recognised that domain name registrars could provide services such as domain brokerage and alternate registrations. In Amazon v. Amway, it accepted that intermediaries may provide warehousing, packaging, and logistical services. These cases demonstrate that intermediary functions may be active and value-added. In each instance, however, the services remained anchored to pre-existing records created by third parties.
This confirms the structural constraint identified earlier: while intermediary functions may be expansive, they are not unbounded. The existence of a pre-existing electronic record dealt with on behalf of another person operates as a doctrinal threshold. Without this anchor, the distinction between intermediary activity and primary content creation collapses.
A further objection arises from the concept of “originator”. Section 11(c) of the IT Act provides that an electronic record produced by an information system programmed to operate automatically is attributed to the person by or on whose behalf the system was programmed, raising the possibility that AI outputs should be attributed to the platform itself. In the absence of judicial interpretation, the Model Law and its Guide are useful here. The Guide clarifies that automatically generated messages are attributable to the offering entity only where there is no direct human involvement, and that attribution ultimately depends on applicable legal rules (¶ 35).
The Amendment Rules are arguably in alignment with this position, as they state that the AI model merely “enable[s], permit[s], or facilitate[s]” the outputs. This signals that the Amendment views AI outputs not as purely automated messages, but as outputs generated in response to human intervention. The placement of “may” before “enable” reinforces this reading. If the originators of AI content are indeed the users, and not the platform, then such content is “third-party” information, qualifying platforms for safe harbour protection under Section 79.
This conclusion, however, is conditional. Human input does not eliminate platform responsibility, and safe harbour depends on whether the underlying activity qualifies as an intermediary function.
AI Services, “That Record”, and the Limits of Intermediary Status
Even if AI outputs are new, this does not necessarily exclude them from the second limb. The relevant inquiry is whether the service is provided “with respect to that record”. The phrase “that record” imposes a structural limitation. It requires the existence of a determinate record that exists independently of the service. Where the record exists solely to trigger the service, this requirement is not satisfied, as there is no stable referential object to which the service can relate. In record-linked AI services, the underlying content exists independently. The AI operates upon it and generates a derivative output without altering the original. The service is therefore being provided “with respect to that record”.
This is best illustrated by the integration of Grok as a social media handle on Twitter, where users can prompt it to respond to a tweet by tagging it beneath that tweet. Here, Grok’s output responds to a tweet which Twitter, as an intermediary, has already received, stored, and transmitted on behalf of another person. When AI services are offered in this manner, they can fit within the second limb’s scheme of a service provided in respect of “that record”. Three essentials should be noted: (i) Grok’s output does not alter the original tweet; (ii) the tweet exists independently of the AI service; and (iii) the response is generated pursuant to user instruction. The position is more difficult where the AI generates hallucinated or defamatory outputs not reasonably traceable to the underlying record or user instructions. In such cases, the output begins to resemble independently generated content rather than a service performed in relation to “that record”, strengthening the argument that the platform is acting as a primary publisher rather than an intermediary. Similarly, where an AI system responds to an existing product listing, as Amazon’s Rufus does by answering questions about the particular product under whose listing it appears, the service remains anchored to a pre-existing record.
By contrast, chatbot-style systems sit less comfortably within this framework. Although prompts may qualify as electronic records under the IT Act, they function primarily as generative inputs rather than independently circulating third-party records of the kind contemplated by Section 2(1)(w). Unlike tweets, listings, or hosted content, prompts do not ordinarily operate as stable referential objects transmitted through the platform on behalf of another person. The chatbot’s output is instead generated through probabilistic synthesis rather than the facilitation or servicing of a pre-existing record. Treating prompts themselves as “that record” would therefore considerably expand intermediary status, potentially allowing any system that processes user inputs to claim protection under the intermediary framework.
Thus, even on a charitable reading, only a limited, functionally defined class of AI services can be accommodated within the intermediary framework. If even this limited justification is rejected, the consequence is more far-reaching: platforms offering AI services may not qualify as intermediaries at all.
Practical Implications and Doctrinal Instability
The practical significance of the foregoing analysis depends on the nature of the dispute. In disputes between the Government and an intermediary concerning SGI, the preceding doctrinal questions are unlikely to arise in any meaningful way. Neither party has an incentive to contest intermediary status or the validity of the Amendment Rules. The Government would undermine its own regulatory framework by doing so, while intermediaries risk jeopardising their safe harbour protections if they challenge the applicability of the intermediary regime. As a result, such disputes are likely to focus on compliance with the obligations imposed by the Amendment Rules.
Even this limited contestation may rarely arise in practice. Platforms are likely to comply with regulatory directions (¶ 81), given the risks associated with the loss of safe harbour. As noted in Amazon v. Amway, intermediaries have strong incentives to remove or restrict content when faced with regulatory pressure. The rational response is therefore compliance rather than resistance. In this respect, the Amendment Rules are likely to achieve their immediate objective of regulating SGI with minimal resistance.
The position differs in disputes involving private parties. In such cases, the intermediary’s primary interest lies in preserving safe harbour, while the opposing party’s interest lies in defeating it to establish liability. Historically, one of the most common strategies adopted by plaintiffs has been to challenge intermediary status itself. There is little reason to believe that disputes involving AI-generated content will depart from this pattern, particularly given the persistent application uncertainty surrounding such technologies.
In these disputes, intermediaries may rely on the Amendment Rules to argue that the provision of AI services forms part of their intermediary functions. However, opposing parties can invoke the doctrinal concerns outlined earlier to challenge both the applicability of the intermediary framework and, where necessary, the validity of the Amendment Rules themselves. Courts are thus placed in the position of resolving the very application uncertainty that the Amendment Rules assume.
This creates two possible outcomes. The first is that the Amendment Rules are read down or held ultra vires to the extent that they extend intermediary status to activities falling outside Section 2(1)(w). The second is that intermediary status is denied in specific cases, resulting in the loss of safe harbour. In either scenario, the stability of the intermediary framework is undermined.
This problem reflects a broader concern identified by Rebecca Crootof and BJ Ard, who caution that analogies used to extend existing legal frameworks to new technologies must be carefully constructed. Poorly aligned analogies risk producing outcomes that are either ineffective or counterproductive. The “tool provider” analogy underlying the Amendment Rules illustrates this difficulty. In pursuing immediate regulatory objectives, it overlooks the structural foundations of intermediary liability and the distinctions between different modes of AI integration.
The consequences of this misalignment are significant. If the analysis advanced in the previous section is accepted, safe harbour may be available for outputs generated through record-linked AI services, but not for outputs generated through chatbot-style systems. If it is rejected, the implication is more far-reaching: the Amendment Rules themselves may be rendered ultra vires in their application to AI services.
This outcome runs contrary to the purpose of Section 79, which is to ensure that intermediaries are protected in respect of third-party content. Where delegated legislation under Section 79 is used as the primary regulatory mechanism, the initial inquiry must be whether the framework preserves the availability of safe harbour. Only thereafter should questions of regulatory control be addressed. If this order is reversed, intermediaries may be subjected to extensive obligations without a corresponding assurance of legal protection.
The uncertainty is amplified by a further proposed amendment within the broader package of changes to the Rules, which would incorporate compliance with Ministry-issued “clarification(s), advisory(ies), order(s), direction(s), standard operating procedure(s), code(s) of practice or guideline(s)” into the intermediary’s due diligence obligations under Section 79 itself. This would effectively allow executive guidance relating to the “implementation, interpretation or operationalisation” of the Rules to become part of the safe harbour framework. Although the proposed provision requires such directions to remain consistent with the Act and the Rules, the formulation introduces a dynamic and potentially unpredictable layer of compliance. It increases the burden on intermediaries while doing little to resolve the underlying uncertainty regarding their entitlement to safe harbour in private disputes.
While the Government’s objective of regulating SGI is legitimate, it cannot be pursued at the cost of the structural coherence of the intermediary framework. A regulatory approach that remains attentive to the statutory architecture of Section 2(1)(w) and Section 79 of the IT Act would better serve both intermediary protection and the broader objective of fostering responsible AI development.
Conclusion
The Amendment Rules represent a significant development in the regulation of AI-generated content. However, the current approach raises fundamental doctrinal concerns. By extending the intermediary framework to AI-enabled activities without adequately engaging with the structural requirements of Section 2(1)(w), the Amendment Rules risk exceeding the statutory scheme of the IT Act.
This tension is not merely theoretical. While the Amendment Rules may function effectively in government-facing regulatory contexts, their stability is uncertain in private disputes, where intermediary status and safe harbour are directly contested. Courts may therefore be required to resolve the very application uncertainty that the Amendment Rules assume.
The analysis in this article demonstrates that only a limited category of AI services, namely those that remain functionally tethered to pre-existing electronic records, may plausibly fall within the intermediary framework. More general forms of AI integration, particularly chatbot-style systems, fall outside this structure. Treating such systems as intermediary functions risks collapsing the statutory distinction between intermediaries and originators.
The issue, therefore, is not whether AI should be regulated, but whether the intermediary framework is the appropriate vehicle for such regulation in its current form. Without engaging with its structural limitations, the Amendment Rules risk achieving regulatory control at the cost of doctrinal coherence and the stability of intermediary protections.
Mustafa Rajkotwala works on AI, Strategy and Legal Engineering at NYAI. He is a commercial and technology lawyer based in Mumbai, India.
S.V. Ghopesh is a third-year student pursuing B.A. LL.B. (Hons.) at Tamil Nadu National Law University (TNNLU), Tiruchirappalli, India.
