Legislation and Government Policy

India’s Soft Law Approach Towards AI Governance: Strategic Choice or Potential Oversight?

Vrinda Pandey


IndiaAI Platform


Abstract: India’s soft-law approach to AI governance places flexibility and innovation above binding regulation. This article critically examines whether such an approach is strategic or shortsighted in a high-risk digital ecosystem, highlighting gaps in accountability, enforcement, and rights protection, and drawing comparative lessons from the UK and Singapore to show how responsible AI governance can be strengthened without abandoning the intent of fostering AI innovation and development.

Introduction

The Ministry of Electronics and Information Technology (MeitY) constituted a drafting committee in July 2025 to develop an AI governance framework for India. The framework is guided by two objectives: first, to harness the transformative potential of artificial intelligence for inclusive development and global competitiveness, and second, to address the risks AI may pose to individuals and society. The framework was officially released in November 2025 and is structured into four parts. Part I outlines seven key principles, or sutras, intended to guide India’s overall approach to AI governance. Part II sets out key recommendations across six governance pillars, followed by Part III, which identifies short-, medium-, and long-term action plans along with their respective timelines. Part IV provides practical guidance for industry actors and regulators. Notably, the Indian government has chosen to govern AI rather than strictly regulate it, adopting a “third way” approach through soft law instruments. At this stage, MeitY has decided not to introduce an AI-specific law, opting instead to amend existing laws where necessary. The stated intent behind this approach is to avoid premature regulation and instead adopt a phased response to an emerging and rapidly evolving technology. This sets India’s framework apart from both the European Union’s stringent, risk-based regulatory model and the United States’ largely laissez-faire approach to AI governance.

This piece argues that while India’s soft-law approach towards governing AI represents a strategic choice for innovation and adaptability, it constitutes a potential oversight given the country’s high-risk and high-impact digital ecosystem, weak enforcement history, and absence of binding accountability mechanisms. The next section sets out the government’s justification for the approach, emphasising flexibility over premature regulation. The sections that follow question its suitability for India’s digital ecosystem, dissect operational gaps in accountability, liability, and rights protections, explore missed opportunities with lessons from the UK and Singapore, and conclude with recommendations for enforceable safeguards.

JUSTIFICATION BEHIND A SOFT LAW APPROACH

India’s decision to adopt a soft-law approach to AI governance has been framed as a strategic, intentional policy choice rather than a regulatory oversight. The government has repeatedly emphasised that introducing an AI-specific statute at this stage could result in a premature or “half-baked” law, particularly in a domain that is evolving rapidly and remains difficult to define precisely. From this perspective, hard regulation is seen as potentially stifling innovation, discouraging experimentation, and limiting India’s ability to build globally competitive AI capabilities or to position itself as a pro-innovation jurisdiction. Instead, a light-touch framework is seen as enabling regulatory learning, allowing policymakers to observe the real-world deployment of AI systems before imposing binding legal obligations. This approach reflects a preference for adaptability and innovation over early regulatory rigidity.

Alongside these concerns, India has justified its soft law approach on the grounds of flexibility and regulatory sufficiency. The government has indicated that existing legal frameworks, including the Information Technology Act, 2000, and the Digital Personal Data Protection Act, 2023, can be amended to address emerging AI-related risks as they arise. Sector-specific regulators such as the Reserve Bank of India, the Securities and Exchange Board of India, and the Telecom Regulatory Authority of India are expected to manage domain-specific AI harms in their respective jurisdictions. In addition, the framework proposes a three-tier institutional model, namely the AI Governance Group (AIGG), the Technology & Policy Expert Committee (TPEC), and the AI Safety Institute (AISI), to coordinate policy formulation, implementation, and oversight. Taken together, these measures reflect a phased approach to AI governance, in which incremental adjustments to existing law are preferred over the immediate introduction of an AI-specific statute.

Overall, India’s soft law framework places emphasis on responsible innovation, industry cooperation, and self-regulation, prioritising trust-building over precautionary restrictions and sanctions.

QUESTIONING THE APPROACH AND SUITABILITY IN LIGHT OF INDIA’S DIGITAL ECOSYSTEM

India’s digital ecosystem presents a paradox that complicates its approach to AI governance. On one hand, the country has witnessed rapid technological adoption, the creation of new governance bodies, and global recognition for its digital public infrastructure, including Aadhaar, UPI, and DigiLocker. Technology is deeply embedded in India’s civic, economic, and social life, from cloud-first enterprises to the growing use of AI in healthcare and public services. On the other hand, this scale of digitisation has also produced a high-impact risk environment. In 2025, Indian organisations reportedly faced over 2,000 cyberattacks per week on average, while AI-enabled harms such as identity theft, deepfake abuse, political manipulation, and online safety threats, especially targeting women and children, have increased sharply. This duality raises serious concerns about whether a non-binding, guideline-based approach is adequate for such a high-risk digital context.

The effectiveness of soft law frameworks depends heavily on strong enforcement mechanisms, a high compliance culture, and institutional capacity for monitoring and redressal. However, India has struggled with weak enforcement, delayed regulatory action, and low levels of compliance when obligations remain voluntary or non-binding. In such a context, governance through non-binding guidelines risks becoming aspirational rather than effective. The assumption that industry actors will internalise ethical obligations without enforceable consequences may not align with India’s regulatory realities. These gaps raise a critical question: whether soft law, in the absence of clear enforcement structures, can meaningfully mitigate AI-related harms in India’s digital ecosystem.

The framework’s emphasis on trust and self-regulation further intensifies these concerns. While the principle that “trust is foundational” reflects an intent to promote responsible innovation, it places a heavy reliance on corporations to govern themselves. In practice, however, companies remain profit-driven actors operating under intense commercial pressure to deploy systems quickly and at scale. A “trust first, verify later” model risks turning citizens into inadvertent test subjects for high-risk AI deployments. Without clearly defined accountability mechanisms, penalties, or sanctions, self-regulation may remain minimal or symbolic, undermining the very objectives the guidelines seek to achieve. Whether responsible innovation can be operationalised without precautionary safeguards is, therefore, a question that warrants serious reconsideration.

These concerns are further amplified by India’s weak enforcement, reporting, and monitoring capacity in the digital governance space. While the framework incorporates high-level principles such as trust and responsible innovation, the absence of clear operational mechanisms risks reducing these guidelines to aspirational statements rather than enforceable norms. Delays in accountability, coupled with the lack of well-defined liability and sanctions, may lead to diluted responsibility when AI systems cause harm. In a governance environment where verification follows deployment, the burden of risk is implicitly shifted onto citizens rather than regulated entities. Without robust monitoring structures and timely enforcement, a trust-based, self-regulatory model risks normalising harm before corrective action is taken.

ACCOUNTABILITY AND LIABILITY: OPERATIONAL DEFICIENCIES IN THE GUIDELINES

The guidelines repeatedly emphasise accountability, both as a governing principle and across multiple pillars, stating that AI developers and deployers must remain visible and accountable for their systems. They suggest that accountability should be ensured through a mix of policy, technological, and market-led mechanisms, with firms facing meaningful pressure to comply with liability obligations. However, the framework stops short of explaining how accountability is to be assigned, enforced, or monitored in practice. Instead, it relies heavily on voluntary moral commitments, despite acknowledging that self-regulatory frameworks lack legal enforceability. The proposed alternatives, such as transparency reports, internal policies, peer monitoring, and audits, remain recommendatory in nature and fall short of providing concrete, enforceable accountability mechanisms.

Although the guidelines acknowledge the need for liability and recommend a graded system of liability, they simultaneously caution against mechanisms that may stifle innovation. This balancing act results in diluted clarity on liability standards, as no clear thresholds, consequences, or strict liability regimes are articulated. In prioritising innovation protection, the framework appears reluctant to impose firm obligations on developers and deployers, even in high-risk contexts. As a result, accountability is framed more as an ethical rather than a binding responsibility. This raises concerns about whether meaningful deterrence can exist in the absence of clearly defined penalties or enforceable liability structures.

To address governance and implementation, the guidelines propose a three-tier institutional structure comprising the AI Governance Group, the Technology and Policy Expert Committee, and the AI Safety Institute. While this architecture signals an intent to coordinate AI governance, it remains largely concentrated within the institutional capacity of MeitY. The framework does not meaningfully incorporate multi-stakeholder participation, including civil society organisations, marginalised communities, or independent public-interest representatives. Nor does it clarify how sector-specific technical questions and liability disputes will be resolved across domains. In the absence of statutory backing, mandatory oversight powers, or an independent regulator, these institutions risk functioning as advisory bodies rather than enforceable accountability mechanisms. Consequently, the core question of liability (who is responsible, when, and with what consequences) remains unresolved.

By remaining consciously light on accountability and liability, the guidelines risk reducing governance to ethical discourse rather than applied regulation. Without statutory force, enforceable tools, and independent oversight bodies, the framework may struggle to move beyond aspirational intent. In a high-risk digital ecosystem such as India’s, the absence of binding accountability mechanisms raises serious concerns about whether harm can be prevented rather than merely acknowledged after the fact.

MISSED OPPORTUNITIES IN OPERATIONALISING SOFT LAW: LESSONS FROM THE UK AND SINGAPORE

Soft laws are not weak laws by default; their effectiveness depends largely on how they are operationally designed and implemented. India has chosen the right principles, and the framework clearly acknowledges both the benefits and risks associated with AI systems. The intent to secure citizens from harm while simultaneously avoiding barriers to innovation is evident throughout the guidelines. However, this intent has translated into an over-reliance on high-level principles and symbolic ethical commitments rather than concrete enforcement tools. As a result, the framework prioritises articulation of values over the creation of mechanisms capable of translating those values into practice.

Even within a non-binding framework, the guidelines could have incorporated conditional and enforceable safeguards to better balance innovation with harm prevention. In a digital ecosystem as complex and high-impact as India’s, context-based and continuous risk categorisation could have helped operationalise the framework’s stated objectives. While the guidelines do gesture towards a graduated liability system, they stop at a high-level recommendation and do not spell out enforceable accountability mechanisms: which actors become liable for which categories of harm, the thresholds at which different tiers of liability are triggered, and the corresponding range of penalties. As used in this piece, a graded penalty structure envisions clear ex-ante criteria specifying when accountability attaches (for instance, at the design, development, deployment, or post-deployment stage), for what types of failure (inadequate risk assessment, non-compliance with audit requirements, or repeated violations), and with what concrete consequences (ranging from warnings to monetary penalties and operational restrictions).

While concerns about rapidly evolving technology justify caution against rushed AI-specific legislation, high-impact and high-risk use cases could still have been addressed through targeted regulatory interventions. The guidelines rightly prioritise adaptability, but even an AI-specific law could have incorporated mechanisms to avoid prematurity and rigidity. Sunrise clauses could delay enforcement until governance prerequisites are met, such as staffing the AI Safety Institute (AISI), funding oversight bodies, consulting stakeholders on compliance burdens, and clarifying technical requirements for regulated entities. This would do the dual work of placing AI-specific legislation on the books while building legitimacy before activating it, rather than merely passing laws. Complementing this, India’s stated goal of adaptability could be served by sunset clauses mandating expiry after two to three years unless provisions are renewed following a mandatory reassessment of their effectiveness amid AI’s rapid evolution, preventing obsolete rules from persisting while allowing proven measures to continue. Together, these tools would have enabled targeted, high-risk interventions (for example, graded audits for critical deployments) that remain agile: sunrise clauses for competent rollout, sunset clauses for disciplined review. Such provisions would address India’s valid concerns while delivering enforceable safeguards within an AI-specific law.

The United Kingdom offers a useful illustration of how soft law can function alongside empowered regulators and central oversight mechanisms. Rather than relying solely on voluntary compliance, the UK’s approach is backed by regulator-led monitoring, sectoral oversight, and coordinated risk assessment. Regulatory sandboxes and testbeds allow innovation to proceed under supervision, ensuring that risks are identified before large-scale deployment. This design ensures that soft law principles are supported by institutional capacity, giving them practical force. As a result, innovation is encouraged without abandoning accountability or enforcement.

Singapore’s AI governance framework further demonstrates how ethical principles can be translated into concrete and testable standards. In the Singaporean model, voluntary or self-regulatory approaches do not imply the absence of measurement or scrutiny. Ethical principles are operationalised into checklists and assessment criteria that must be satisfied before deployment. This approach ensures that responsibility is not left to interpretation but evaluated against predefined benchmarks. In contrast, India’s principle-heavy and tool-light framework lacks similar mechanisms.

LACK OF A RIGHTS-BASED APPROACH AND CONSTITUTIONAL GROUNDING

The Indian AI governance guidelines display a clear inclination towards a techno-legal approach, privileging governance mechanisms and innovation facilitation over a rights-based framework. While the guidelines acknowledge that AI systems are agentic and probabilistic, and therefore capable of causing harm and posing innumerable risks to people, this recognition does not translate into adequate protection of citizens’ rights. The framework relies heavily on principles, or sutras, and self-regulatory recommendations, without centring the right of individuals to be digitally safe and free from unjust technological harm. As a result, citizens are positioned more as subjects of innovation than as rights-bearing individuals requiring protection. This imbalance raises concerns about whether governance focused primarily on technological management can sufficiently address AI’s social and constitutional implications.

Although the guidelines employ value-laden terms such as responsibility, fairness, non-discrimination, and equity, these remain largely aspirational in the absence of legal and constitutional grounding. AI-related harms are framed as failures of responsible innovation rather than as potential violations of fundamental rights. Notably, the framework does not meaningfully engage with constitutional guarantees under Articles 14, 19, and 21, nor does it anchor AI governance in principles of constitutional morality. This omission, whether intentional or not, weakens the force of the guidelines, as rights protection is reduced to ethical discourse rather than an enforceable obligation. By treating AI-enabled harm as a by-product of irresponsible innovation rather than as a potential infringement of fundamental rights, the framework risks normalising harm instead of preventing it.

CONCLUSION

India’s decision to govern AI through soft law reflects a conscious attempt to balance innovation with caution in the face of a rapidly evolving technology. The framework adopts sound principles and recognises the risks posed by AI, yet its heavy reliance on aspirational ethics, voluntary compliance, and institutional intent raises concerns about real-world effectiveness. In a high-impact digital ecosystem like India’s, where AI harms are no longer speculative, governance without clear accountability, enforceability, and constitutional grounding risks remaining symbolic. Soft law, by itself, is not inherently weak; its strength lies in operational design, oversight, and safeguards. However, by prioritising innovation over restraint without adequate rights-based protection, the guidelines leave critical gaps unaddressed. Moving forward, India must complement principled governance with enforceable mechanisms that place citizen safety and constitutional values at the centre of AI regulation. The key test will be how the framework’s recommendations are enforced and how the proposed institutions function in practice: if they deliver, the governance scaffolding holds; if they do not, the gaps will become apparent. A related worry is that voluntary commitments tend to go unfulfilled, with industry waiting to see whether the government is sincere and the government waiting to see whether industry will self-regulate. Citizens are the ones who will ultimately bear the consequences.


Vrinda Pandey is a third-year law student at the National Forensic Sciences University (NFSU), Gandhinagar. Her academic interests include constitutional law, technology law, and artificial intelligence and cyber policy, with a focus on their evolving relevance and intersection with multiple disciplines.