Rishi Anand, Dhruv Bhatnagar and Riddhi Alok Puranik

This comment examines MeitY’s Draft Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which propose India’s first regulatory framework for synthetically generated information (“SGI”), including deepfakes and other AI-generated content. While intended to address harms such as non-consensual imagery, impersonation, misinformation, and fraud, the Draft Amendments adopt an overbroad and ambiguous definition of SGI and impose technically and constitutionally problematic obligations. Proposed Rules 3(3) and 4(1A) create burdensome labelling duties, conflate the roles of intermediaries and content originators, and risk compelled speech, proactive monitoring, and privacy intrusions inconsistent with Shreya Singhal. The comment argues that SGI regulation should be anchored to clearly defined harms, incorporate proportionate and technologically feasible intermediary obligations, and focus on user empowerment and capacity building. It recommends adopting harm-linked thresholds, strengthening media and information literacy, and promoting provenance-based verification to balance innovation, accountability, and fundamental rights.
Introduction
On 22 October 2025, the Ministry of Electronics and Information Technology (“MeitY”) released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Draft Amendments”) for stakeholder consultations. Anchored in MeitY’s vision of an ‘Open, Safe, Trusted, and Accountable Internet,’ the Draft Amendments mark India’s first attempt to establish a regulatory framework for “synthetically generated information” (SGI) encompassing deepfakes and other AI-generated or modified text, images, audio, and video content. Building on MeitY’s 2024 advisory directing intermediaries to prevent unlawful or discriminatory AI use and to label synthetic outputs, the Draft Amendments seek to formalise platform duties for detecting, disclosing, and moderating such content, particularly when it may mislead users.
According to MeitY’s explanatory note accompanying the Draft Amendments, the amendments respond to the growing misuse of generative AI. Such misuse includes production of non-consensual intimate imagery (“NCII”) or obscene imagery, fabricated political or news content, impersonation and financial fraud, and other forms of misinformation capable of eroding trust in information ecosystems. Shortly after their release, the Election Commission of India issued an advisory requiring political parties and candidates to identify and disclose synthetically generated campaign material. More recently, following directions of the Madras High Court in an NCII matter, MeitY issued a Standard Operating Procedure aimed at guiding individuals on the procedure for curbing the dissemination of their NCII in cyberspace, and at ensuring consistent and effective implementation of intermediaries’ due diligence obligations with respect to such content under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“2021 IT Rules”). This parallel regulatory action reflects an emerging consensus among Indian institutions on the need for transparency and accountability in AI-generated content.
Situated within a wider global regulatory shift, including the European Union’s AI Act, U.S. provenance initiatives, and China’s AI labelling rules, the Draft Amendments offer an opportunity to shape a governance model for synthetic media in India. This comment examines the Draft Amendments in four parts: (i) their scope and applicability; (ii) a critical evaluation of the proposed obligations and key definitions; (iii) reform recommendations; and (iv) concluding reflections on aligning innovation with accountability.
Scope and Applicability
The Draft Amendments extend the due diligence framework under the 2021 IT Rules to cover SGI, creating new compliance obligations for all intermediaries that enable the creation, alteration or modification of SGI, as well as enhanced responsibilities for ‘significant social media intermediaries’ (“SSMIs”) – intermediaries with over 50 lakh registered users in India – that host or display such content. An overview of key definitions and obligations under the Draft Amendments is as under:
- Definition of SGI: The Draft Amendments define SGI as “information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information reasonably appears to be authentic or true” (emphasis supplied). This definition is broad, as it can cover AI-generated or digitally altered text, images, audio, and video, including through VFX, colour grading, filters, or comparable techniques. Its reliance on whether content “reasonably appears” authentic introduces subjectivity into the assessment and could lead to over-classification of legitimate material such as satire, minor edits, or AI-assisted writing. Importantly, the Draft Amendments clarify that all references to “information” under the 2021 IT Rules, including Rules 3(1)(b), 3(1)(d), 4(2) and 4(4), will now include SGI when used to commit an unlawful act, bringing SGI squarely within the existing content takedown framework under the 2021 IT Rules.
- Obligations on all intermediaries (Proposed Rule 3(3)): Intermediaries that enable, permit, or facilitate the creation, generation, alteration, or modification of SGI through their computer resources must ensure such material carries a permanent, tamper-proof label or embedded metadata identifier. The label or identifier must cover at least 10% of a visual surface or be audible for the first 10% of an audio clip, and intermediaries must prevent its removal or alteration (an illustrative sketch of such a visible label appears at the end of this section). Importantly, these obligations apply only to SGI created, altered, or modified using the intermediary’s own systems, not to synthetic material merely hosted on their platforms. This distinction is significant; for instance, a platform such as Instagram that hosts AI-generated Ghibli-style portraits created on a separate application may not fall within the scope of proposed Rule 3(3), as its role in this scenario is limited to hosting, rather than creating or generating SGI.
- Additional obligations on SSMIs (Proposed Rule 4(1A)): SSMIs must require users to declare whether uploaded content constitutes SGI, verify such declarations through reasonable technical measures, and ensure all verified SGI is clearly labelled. Knowingly permitting or failing to act upon undeclared or mislabelled SGI constitutes a breach of due diligence, exposing the SSMI to loss of safe-harbour protection under Section 79 of the Information Technology Act, 2000 (“IT Act”). By contrast, under the existing framework, even after the recent amendment to Rule 3(1)(d) of the 2021 IT Rules, intermediary liability is triggered only upon receipt of actual knowledge of unlawful content through a court order or reasoned intimation from an authorised government agency. The model remains fundamentally notice-and-takedown, with no expectation of proactive monitoring by intermediaries.
Although MeitY’s explanatory note indicates that the above obligations apply only to “public-facing” content, the text of the Draft Amendments contains no such limitation, potentially extending such regulation to private or encrypted communications. Moreover, the absence of clear penalties for users places the primary compliance burden on intermediaries and SSMIs.
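To make the visible-label requirement under proposed Rule 3(3) concrete, the following is a minimal, purely illustrative sketch of how a platform might overlay a label covering roughly 10% of an image’s surface area using the Python Pillow library. The Draft Amendments do not prescribe any implementation; the banner placement, label wording, function name, and file names below are assumptions made only for illustration.

```python
# Illustrative sketch only: one possible way a platform might overlay a visible
# SGI label covering roughly 10% of an image's surface area, using Pillow.
# The Draft Amendments do not prescribe any implementation; the banner
# placement, label wording, and file names here are assumptions.
from PIL import Image, ImageDraw, ImageFont

def apply_sgi_label(input_path: str, output_path: str,
                    label_text: str = "Synthetically Generated Information") -> None:
    img = Image.open(input_path).convert("RGB")
    width, height = img.size

    # A full-width banner spanning 10% of the image height covers 10% of the
    # total pixel area: (width * 0.10 * height) / (width * height) = 0.10.
    banner_height = max(1, int(height * 0.10))
    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, height - banner_height), (width, height)], fill="black")
    draw.text((10, height - banner_height + 5), label_text,
              fill="white", font=ImageFont.load_default())
    img.save(output_path)

# Hypothetical usage:
# apply_sgi_label("generated.png", "generated_labelled.png")
```

A full-width banner spanning one-tenth of the image’s height is only one possible reading of the “visual surface” requirement; the Draft Amendments do not specify how the percentage is to be measured, which itself illustrates the implementation ambiguity discussed in the next section.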
Analysis of the Draft Amendments
This section outlines concerns with the Draft Amendments, with respect to their breadth, internal coherence, and practical viability. It is argued that, collectively, these issues may give rise to regulatory and constitutional challenges.
1. Overbroad and ambiguous definition of SGI: The definition of SGI under the Draft Amendments is framed so expansively that it risks encompassing content far beyond the harms that MeitY seeks to address. By covering any algorithmically created or altered material that “reasonably appears” authentic, the provision could capture satire, memes, creative adaptations, or other forms of expressive or artistic content. More importantly, the definition creates a loophole – if disclosure is triggered only when content appears authentic, then algorithmically generated or manipulated material that does not ‘reasonably appear’ authentic may escape the requirement altogether, leaving a regulatory blind spot around precisely such content.
Further, the broad sweep of this definition bears little connection to the harms identified in MeitY’s explanatory note, including NCII, misinformation, impersonation, and fraud, and it could lead to over-classification and unnecessary compliance burdens. Clear thresholds or harm-based qualifiers are absent, and the definition risks chilling legitimate expression. It may also fail the test of proportionality for speech regulation recognised by the Supreme Court in Anuradha Bhasin (¶¶ 70, 152(b)), which, inter alia, requires restrictions on speech to be narrowly tailored and to bear a rational nexus to the harms they seek to address.
2. Blurring the boundaries between ‘originators’ and ‘intermediaries’: The proposed Rule 3(3) in the Draft Amendments conflates two distinct legal categories – ‘intermediaries’ and ‘originators’. Under Section 2(1)(w) of the IT Act, an intermediary is one who “receives, stores or transmits” third-party information, serving essentially as a conduit for user-generated speech. By contrast, an AI system that generates or alters content functions more like an originator under Section 2(1)(za) of the IT Act, which defines an originator as any person who “sends, generates, stores or transmits” an electronic message while expressly excluding intermediaries. Thus, the proposed rule imposes obligations meant for content originators on intermediaries that function differently. This structural mismatch means that the proposed Rule 3(3) effectively regulates intermediaries through a framework designed for publishers/generators/originators of SGI, exceeding both the definitional and functional limits of an intermediary as envisaged under the IT Act. Indeed, the Supreme Court has drawn a clear functional line between intermediaries, viewed as neutral conduits without significant editorial control over user-generated speech, and originators, who originate or curate content and bear direct liability. As affirmed in Shreya Singhal (¶¶ 110, 116-119) and Google India (¶¶ 52-54), safe harbour hinges on this divide – a platform acting as a conduit is protected, but one behaving like a publisher is not.
3. Limits of labelling as a policy solution: The Draft Amendments place emphasis on labelling as a safeguard against the misuse of SGI. However, this approach has several limitations:
a. First, mandatory labels are a weak deterrent against misinformation. Research indicates that users continue to believe and share labelled AI-generated messages at rates similar to those of unlabelled ones.
b. Second, the requirement of embedding ‘permanent metadata’ is technically unworkable, as neither visible labels nor invisible watermarks are tamper-proof. Research suggests that visible marks can be removed with simple editing, while invisible ones, though more durable, remain vulnerable to advanced manipulation or re-encoding (see the illustrative sketch following this list). The lack of standardisation compounds the problem, since each platform would likely employ proprietary watermarking methods tied to its own model, making cross-platform verification tedious for users.
c. Third, proposed Rule 3(3) of the Draft Amendments appears to conflate two distinct mechanisms in service of MeitY’s transparency goals – metadata and visible labelling. However, metadata is inherently non-visible and functions as a technical marker for authentication or traceability, while visible or audible labels serve a transparency function for users. By treating the two as interchangeable, the draft obscures their distinct roles.
d. Fourth, as pointed out by a commentator, the 10% labelling requirement under the proposed Rule 3(3) for visual and audio SGI is excessively prescriptive, disregards contextual differences across media, and risks disrupting creative or commercial content. For example, an advertisement or film trailer could lose narrative coherence if a compulsory disclaimer dominates its opening seconds, and a stylised music clip or artistic reel could become visually cluttered or aesthetically distorted by a label occupying a tenth of the screen. Moreover, while the definition of SGI covers text, the Draft Amendments do not specify how textual AI outputs should be marked, leaving compliance for written or hybrid content uncertain. This omission could lead to uneven enforcement and ambiguous compliance for chatbots, summarisers, and AI-enabled writing/editing tools.
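As a simple illustration of the fragility noted in point (b) above, the following sketch, which assumes the Python Pillow library and hypothetical file names, shows how an ordinary re-encode discards embedded metadata such as an EXIF identifier without changing the visible content; it is not tied to any particular watermarking scheme an intermediary might actually deploy.

```python
# Minimal demonstration of why metadata-based identifiers are fragile:
# re-encoding an image with Pillow drops EXIF metadata unless it is explicitly
# carried over. File names are hypothetical.
from PIL import Image

original = Image.open("labelled_sgi.jpg")
print("EXIF bytes in original:", len(original.info.get("exif", b"")))

# A trivial "edit": re-save the pixels to a new file without passing the exif
# argument. The visible content is unchanged, but the embedded identifier is gone.
original.save("re_encoded.jpg", quality=95)

re_encoded = Image.open("re_encoded.jpg")
print("EXIF bytes after re-encoding:", len(re_encoded.info.get("exif", b"")))  # typically 0
```

Invisible watermarks embedded in the pixel data are somewhat more resilient than metadata of this kind, but, as noted above, they too can be degraded by cropping, compression, or adversarial manipulation.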
4. De facto proactive monitoring by SSMIs: Proposed Rule 4(1A) requires SSMIs to take “reasonable and proportionate measures” to verify user declarations on AI-generated content, effectively imposing a proactive monitoring duty. Though the obligation is framed as user-led, liability arises if platforms knowingly permit unlabelled SGI, compelling them to pre-screen uploads to avoid risk. This shifts the regime from actual-knowledge-based moderation to surveillance-based moderation by SSMIs, running contrary to the Supreme Court’s rulings in Shreya Singhal and Google India, where it was held that content liability for intermediaries arises only upon receipt of ‘actual knowledge’ and that intermediaries are not legally obligated to proactively monitor their platforms for unlawful user-generated content.
5. Compelled speech, compliance fatigue, and risk of inadvertent non-compliance: Proposed Rule 4(1A) compels every user to declare whether uploaded content is synthetically generated, amounting to State-mandated speech. Further, the proposed rule offers no guidance on the form this declaration must take. The Draft Amendments’ user declaration requirement is best understood as a form of compelled speech under Article 19(1)(a), which is not unconstitutional per se. In Union of India v. Motion Picture Association (¶ 15) and in Justice Nagarathna’s opinion in Kaushal Kishor (¶¶ 202-204), the Supreme Court explained that certain compelled disclosures, or ‘must carry’ provisions in statutes, can be permissible if they facilitate informed decision-making. The Court illustrated this with examples such as mandatory ingredient and weight disclosures on food products and statutory health warnings on cigarette cartons, noting that these measures help consumers make informed choices. But the Court also warned that if a ‘must carry’ provision compels a person to carry out propaganda or project a partisan or distorted point of view, contrary to their wishes, it may amount to a restraint on free speech.
By analogy, SGI disclosures may be defensible in principle as an informational safeguard. However, Article 19(2) requires that such compelled speech also be proportionate, by actually advancing its stated purpose and being no more restrictive than necessary. If SGI labels are easily stripped, do not persist through editing or re-uploads, or fail to affect user understanding or behaviour, the rational-connection and necessity requirements weaken, especially where less intrusive alternatives such as provenance signals (discussed below) exist. The constitutional defensibility of compelled SGI disclosures will therefore depend on their demonstrated effectiveness and practical durability.
The obligation imposed on users under Rule 4(1A) is also problematic for two additional reasons. First, in cases where users themselves are unaware whether the content they upload is AI-modified, they risk inadvertent non-compliance. Second, as pointed out by a commentator, the user declaration requirement could quickly lead to compliance fatigue as constant self-certification may burden users and discourage spontaneous participation online.
Recommendations
To ensure that the regulation of SGI is both effective and rights-consistent, the Draft Amendments would benefit from targeted refinements. The recommendations below propose clearer definitional boundaries, more proportionate compliance tools, and supportive systemic measures that address the risks posed by synthetic media.
- Providing clarity regarding the definition of SGI:
The Draft Amendments should link the definition of SGI to the specific harms the framework seeks to address, such as impersonation, deception, electoral interference, non-consensual explicit imagery, or financial fraud. Merely defining it as algorithmically generated content that “appears authentic” risks sweeping in benign and artistic uses. In this regard, the EU AI Act may be considered a comparative reference, since it provides a more calibrated model under Article 50, requiring disclosure only for AI-generated or manipulated content that is likely to mislead viewers, while expressly exempting “evidently artistic, creative, satirical, fictional or analogous works” from strict labelling obligations. This ties regulation to the risk of deception rather than the mere use of AI tools, ensuring that creative expression is not unnecessarily burdened. MeitY may consider adopting a harm-linked definition, coupled with an indicative list of what qualifies as synthetic content and explicit exclusions for innocuous editing, meme creation, use of assistive tools like grammar correctors or AI-enhanced filters, and other artistic and satirical content.
- Focusing on ‘media and information literacy’:
The United Nations has urged countries to adopt comprehensive media and information literacy (“MIL”) frameworks to strengthen democratic resilience against misinformation. The goal of MIL is to cultivate critical thinking and verification skills so that users can evaluate credibility, detect manipulation, and engage responsibly online. Several jurisdictions have begun institutionalising MIL through formal education. For instance, in the U.S., Illinois requires media literacy instruction at the high-school level, and New Jersey mandates K-12 information literacy education to help students identify misinformation and assess digital sources. India could consider similar approaches across school curricula, teacher training, and public awareness campaigns. Inculcating verification habits within civic life would address the root problem – information vulnerability – more effectively than relying solely on reactive measures.

While MIL offers a compelling long-term strategy, its limitations must also be acknowledged, particularly in the Indian context. Implementing MIL at scale entails sustained curricular reform, systematic teacher capacity building, and coordination across multiple education boards and state governments – each posing significant institutional and resource challenges. Even in jurisdictions where MIL has been introduced, evidence of real-world impact is mixed: gains in critical-thinking or evaluative skills do not reliably translate into durable behaviour change within fast-paced digital environments. In India, heterogeneity in digital access, connectivity, and baseline literacy further complicates equitable uptake and consistent delivery. MIL should therefore be treated as a complementary intervention rather than a singular remedy – useful for strengthening societal resilience and improving individual judgment, but insufficient on its own to eliminate vulnerabilities to synthetic or manipulated content.
- Content provenance as an alternative policy approach:
Rather than relying solely on reactive takedowns or permanent watermarks, an alternative approach is to focus on content provenance, which seeks to build trust into digital content from the point of creation. This means embedding verifiable metadata and cryptographic signatures at creation, ensuring the “content credential” stays attached across platforms. For example, the Coalition for Content Provenance and Authenticity (C2PA) has developed an open standard called “Content Credentials,” described as a digital “nutrition label” for media, recording who created a file, when, how, and what edits occurred. These provenance tools build a transparent infrastructure that travels with the content, offering a stronger foundation for trust and accountability compared to visible labels or watermarks.
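For illustration, the following is a highly simplified sketch of the provenance idea in Python: assertions about a file (creator, tool, time) are bound to a hash of its bytes and signed, and any recipient holding the corresponding key material can later verify both the assertions and the content. Real C2PA Content Credentials rely on standardised, certificate-based asymmetric signatures embedded alongside the asset itself; the HMAC-based signing, field names, and key handling below are assumptions made only to keep the example self-contained.

```python
# Simplified provenance sketch: bind creator/tool assertions to a content hash
# and sign them. This is NOT the C2PA format; real Content Credentials use
# standardised manifests and certificate-based asymmetric signatures.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-not-for-production"  # stand-in for real signing credentials

def create_manifest(path: str, creator: str, tool: str) -> dict:
    """Build and sign a minimal provenance manifest for the file at `path`."""
    with open(path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    assertions = {
        "creator": creator,
        "tool": tool,
        "created_at": int(time.time()),
        "content_sha256": content_hash,
    }
    payload = json.dumps(assertions, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"assertions": assertions, "signature": signature}

def verify_manifest(path: str, manifest: dict) -> bool:
    """Check that the manifest is untampered and still matches the file's bytes."""
    payload = json.dumps(manifest["assertions"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    with open(path, "rb") as f:
        current_hash = hashlib.sha256(f.read()).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and current_hash == manifest["assertions"]["content_sha256"])

# Hypothetical usage:
# m = create_manifest("generated.png", creator="Example Studio", tool="image-model-x")
# print(verify_manifest("generated.png", m))  # True until the file is modified
```

Unlike a visible banner, a signed manifest of this kind does not depend on the viewer noticing a label; the policy challenge is instead ensuring that the credential survives transformations and re-uploads across platforms, which is why cross-industry standards such as C2PA matter.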
Conclusion
The Draft Amendments mark a consequential turn in India’s regulatory posture, from a relatively hands-off model premised on platform neutrality to a more interventionist approach to algorithmic governance in which the State more actively shapes how digital content is produced, signposted, and consumed. Although motivated by legitimate aims, including curbing deception, addressing NCII, and safeguarding electoral integrity, the proposed obligations raise unresolved tensions familiar from global debates on synthetic media. In particular, by requiring intermediaries to verify, label, and police synthetic content at scale, the Draft Amendments risk expanding intermediary responsibility in ways that could blur the line between conduit and originator. A further concern is an implicit tilt toward AI exceptionalism, treating AI-generated content as uniquely suspect rather than calibrating obligations to concrete, demonstrable harms. Comparative experience, including the EU AI Act’s transparency provisions in Article 50 and the surrounding policy debates, increasingly favours harm-based, context-sensitive guardrails over tool-specific burdens, precisely to avoid chilling beneficial uses of generative tools in journalism, education, accessibility, and creative practice.
A future-ready framework for SGI governance in India should therefore rest on clear normative anchors: targeting deception rather than digitisation; preserving the conceptual identity of intermediaries as conduits with duties tailored to notice, process, and due diligence rather than general monitoring; strengthening transparency through provenance-based measures and content authenticity infrastructure; and embedding user literacy, procedural safeguards, and institutional accountability into the architecture of enforcement.
Mr. Rishi Anand is a Partner at DSK Legal and leads the Firm’s Technology Law Practice, spanning corporate and transactional advisory, dispute resolution, public policy, and government affairs.
Mr. Dhruv Bhatnagar is a Principal Associate in the Technology Law and Dispute Resolution Practice Group at DSK Legal, New Delhi.
Ms. Riddhi Alok Puranik is a student at NLSIU and was an intern in the Technology Law and Dispute Resolution Practice Group at DSK Legal in October 2025.
