
Code and Counsel #1: Navigating AI Regulation in India with Nikhil Narendran

Keshav Soni and Sriram Adithya

In this episode, Keshav Soni and Sriram Adithya sit down with Nikhil Narendran (TMT Partner at Trilegal, Bangalore) to discuss the future of AI regulation in India. The conversation covers AI copyright challenges, liability frameworks across developers, deployers, and users, and India’s infrastructure gaps including data centers and compliance burdens.

Mr. Narendran shares insights from Trilegal’s AI tool usage, emphasizing treating AI like supervised junior lawyers. He supports AI assisting courts while maintaining human decision-making and explores “rules as code” concepts with human oversight for fundamental rights protection.



Sriram: Welcome to the Code and Counsel podcast by the Law School Policy Review. I am Sriram Adithya, and I have Keshav with me. We also have Shauryaveer and Arya assisting us today. Our guest in this episode is Mr. Nikhil Narendran, a partner in the technology, media and telecommunications practice at Trilegal, Bangalore. He focuses on the interplay of technology and commerce, advising new-age e-commerce, fintech and tech companies in India on their business models and regulatory issues, including data protection. Thank you for coming on this episode, sir.

Keshav: My first question to you would be: we have seen a lot of rules and laws being made in the space of technology. We have seen the recent DPDP Act, and also the SEBI rules which may come up on the use of AI by intermediaries. My question is: how do you think such laws should work? Should there be a central-level law, or should it be state-specific? And should there be a separate forum like the CCI, or should it be left to self-regulation, as in the case of ASCI?

Nikhil: It is a very complicated question to answer because it has multiple facets that we need to examine. Take the regulation of technology itself: most of the time, states do not have the mandate under the Constitution to regulate technology. In most instances, communication, for example, is a subject that must be governed by the Union government and not the state governments. So if regulations need to come, they naturally need to come from the central government.

At the same time, there are several things to keep in mind when we talk about regulating technology. One is that the focus should be on regulating the harms caused by the technology, not the technology itself.

Regarding the Internet, one of the reasons for its growth is that we focused on regulating the harms caused by the Internet, not the Internet itself. Because communication involves sovereign functions, Internet service providers were licensed; but essentially, the Internet was a free and fair world for a lot of us to access, play around with and tinker with. The enormous growth we have seen worldwide, the economic growth, inclusivity and access to knowledge, happened because the Internet is largely regulation-free. We only came in with cyber security or data protection regulations once we figured out that using the Internet could cause certain harms. That should be the ideal approach to regulating any technology, including AI.

So naturally, when that happens, it makes sense for the central government to regulate. I am not personally a fan of self-regulation, because when corporations answerable to shareholders for profits self-regulate, profits will naturally be on their mind. Self-regulation has not worked in most instances. But if the industry wants to ensure that the government does not regulate it, there are areas where companies could come to an understanding among themselves and adopt a standard.

For instance, consider what happened with cameras. Any camera leaves a digital fingerprint that tells you which camera a particular photo was taken with. That is not enforced by any regulation. It is a framework that Sony, Nikon and the like created among themselves, out of the responsible understanding that a camera could potentially be put to negative use. With it, if somebody uses a camera to commit a crime, we can identify the perpetrator.
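
One common form of the "digital fingerprint" described here is the EXIF metadata cameras embed in each image file. A minimal sketch of reading it, assuming the Pillow library and an illustrative file name (neither is from the interview):

```python
# Read the camera-identifying EXIF metadata embedded in a photo.
# "photo.jpg" is an illustrative placeholder file name.
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("photo.jpg").getexif()
for tag_id, value in exif.items():
    name = TAGS.get(tag_id, str(tag_id))  # map numeric tag to its name
    if name in ("Make", "Model"):         # the camera's fingerprint fields
        print(f"{name}: {value}")
```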

Similarly, AI companies, or technology companies generally, can come together and figure out standards that work for them. It is always better when they come out with the standards than when the government does. But pure self-regulation: I have not seen any instance where it has worked.

You asked about a regulator—whether we need to have a SEBI-like regulator for technology. I think we are going to have a data protection regulator because it is a specific subject matter within technology itself, which is probably the right approach. So if you ask me, do we need an AI regulator? Too early. We will see maybe in years to come.

And mind you, one thing to keep in mind: while the European Union talks about AI in a way that includes all sorts of AI, when we talk about AI generally, whether in India or the USA, we are largely examining it from the perspective of generative AI. We are not necessarily examining the impacts of predictive AI, or of agentic or kinetic AI and the real-world implications of those models. These kinds of AI are very different from generative AI.

So we may need to examine all these things differently. With generative AI, we are largely dealing with content-related harm, because it is essentially a content generation tool. With predictive or probabilistic AI, we are dealing with selection and deselection, where aspects relating to bias come into play much more than content generation does.

So we need to keep a narrow focus and examine what harms are caused by AI. In a couple of months, we will see many more agentic AI use cases in the market, and with them more real-world implications. So we need to remain nimble about regulation. At this point, I would say: let us examine the existing laws we have and see how best to apply them to the harms caused by generative AI.

Sriram: On the note of content-based harm, especially with generative AI, what sort of litigation do you think will arise in the TMT space, particularly in this new age of technology? And do you think new opportunities will arise along with it?

Nikhil: I think the opportunities will really depend on how creative we as a civilization are. We are really known to create new mischiefs and harms when we play with new technology.

From that frame, it is very difficult to even predict where this is going. I mean, if I try and predict, unfortunately I will be wrong. But having said that, at least in the short term, I could say that there will be cases relating to copyright infringement. There could be cases relating to disinformation. There will be cases relating to fake news, deepfakes causing all sorts of harms. We are seeing some different forms of this happening right now, and we will see more cyber criminals taking advantage.

There is actually another interesting phenomenon that could happen with generative AI, which is the fact that you could flood misinformation in databases and drown out the actual information. So there are all sorts of social harms which could come out of misuse of this technology. When there is a misuse of technology, there will be enforcement, there will be disputes, there will be litigation, and that is the sort of future I see for a lawyer who wants to focus on the tech space.

Sriram: Adding to that point on regulation, my question is: could there also be promotion of the industry? For example, China has DeepSeek, and AI is a hardware-intensive industry. Could that promotion take the form of subsidies and grants to data centers and graphics card producers? How do you feel the government could aid the industry, instead of focusing only on regulation?

Nikhil: So far the government has not really focused on regulation, which is actually a good thing. It has largely played a role of supporting innovation rather than just regulating for the sake of regulating. The government has been very smart about how to deal with AI so far. I hope that continues.

At the same time, there is a lot more that we could do with respect to promotion of AI. There is a whole range of things. For instance, there is this conversation about sovereign AI, which is an important conversation that is happening. So we are examining having models in India, having our own models, having existing models being deployed on Indian servers. That actually requires a lot of capacity, a lot of compute, a lot of telecom networks. We do not have that kind of telecom infrastructure. We do not have enough submarine cable landing stations in India.

Bombay is a fairly crowded subsea cable landing station, whereas others, in Cochin, Hyderabad, Tuticorin, Chennai and other areas, are underutilized. We need to find new landing stations and bring more fibre to land in India if we are to host AI here and do large amounts of processing.

We need a much more reliable approach to data centers, and we need to unlock more renewable energy sources. Understand this: if you speak to a domestic worker in India, they will still tell you there was no power for four hours last night. And we are drawing on that power to run AI. That means power generation really needs to go up, and we need to focus more on renewables. The technology companies also need to come up with better technology suited to a market like India. The kind of tokens we burn, even when you just say "hi" to ChatGPT, is incredible. That is a waste of resources and power in a place facing such drastic effects of global warming.

So the tech companies also need to innovate to ensure AI is suited for India. Are we looking at more caching, so that we do not have to burn tokens when somebody says "thank you"? Is that the model to examine? These are some of the things the government and the tech industry need to do, and the government needs to do a lot more; just giving subsidies does not really work.
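
As a toy illustration of the caching idea he raises, here is a minimal sketch; the trigger list, normalization rules and function names are invented for this sketch, not any provider's actual design:

```python
# Serve canned replies to trivial prompts instead of spending
# inference compute. CANNED and respond() are illustrative only.
CANNED = {
    "hi": "Hello! How can I help?",
    "hello": "Hello! How can I help?",
    "thanks": "You're welcome!",
    "thank you": "You're welcome!",
}

def respond(prompt: str, call_model) -> str:
    key = prompt.strip().lower().rstrip("!.")  # crude normalization
    if key in CANNED:
        return CANNED[key]        # no tokens burned
    return call_model(prompt)     # fall through to real inference
```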

Removing the blockers from business is probably the most important thing. It is not just with respect to AI: it is very difficult to do business in India. If you run a responsible business and want to achieve a decent level of compliance, you need to dedicate around 30% of your time, capital and effort to ensuring that compliances and regulations are followed. That is very high for a growing country like India.

Those kinds of blockers need to be removed, not just for the AI industry but for all industries, because ancillary industries are also fairly important. Power is important, telecom is important, data is important, cyber security is important. So AI is not just a standalone project. It requires all these allied industries also to function very well for the country to really take advantage of what AI could offer us.

Keshav: One point you mentioned was copyright. We have seen lots of cases come up on this, and the case against OpenAI in the Delhi High Court raises many questions. In this aspect, what do you think: can we consider an AI model an author, and can the fair use exception be brought in?

Nikhil: The copyright and AI discussion is very complicated, and I am going to comment on it only as an observer, for two reasons. One, it is sub judice right now. Two, one of the parties there is a counterparty in another litigation in which I am representing a party against them. So, from a purely academic perspective, the way I examine the copyright versus AI debate is slightly different.

One aspect we need to understand is that the training question is not an AI-invented problem. It is not a problem caused by AI itself; it is a step taken by the developer. The developer decides to take content, available in the public domain or otherwise (we do not know), to train the AI model so it can get better.

Similarly, deployers could take AI models and train them on this data to fine-tune the models they want to deploy. When they do that, the first question is: how did they get access to that copy, and did the people who put the data or information out there intend for that training to happen, for someone else's commercial benefit?

I think that question needs to be examined from a public-benefit point of view. The reason copyright exists is not just to incentivize the innovator to create more content; it is also about access to knowledge. The provisions we have, the limited term of copyright, fair dealing provisions (or fair use in the West), compulsory licensing, all point to the fact that copyright exists not just as a reward, so that Mickey Mouse can rent-seek for 90 years, but to ensure that large numbers of people get access to information and enjoy the fruits of the work, so that we as a civilization make progress.

I believe in everything being a copy of a copy of a copy. If it is good art, you will naturally be inspired by it, and if you are an artist yourself, every creation you make will also be inspired by someone else's art. It is not just me saying this; even Tarantino says he is inspired by every movie he has ever seen. The great artists are the ones who understand this and get inspired.

Similarly, with respect to knowledge, we want all the information out there to be made available to human beings in different forms, for the advancement of civilization. But there is another aspect: when that happens, should we let the labor put in by these creators go to waste, or should we give them an opportunity for a fair return on the revenue? That is another question.

So we need to strike a balance, and the balance I believe could largely work is what Singapore has done: if you access the copyrighted material in a lawful manner, then you can use it to train your generative model. That means if content is behind a paywall and you pay for it, or you enter into a license with somebody and obtain the content that way, you can use it to train AI.

At some level, they are viewing AI training as very similar to human learning, and as an act it is similar: machine cognition versus human cognition, and I do not think the law should differentiate between the two. At the same time, it is very important that our large number of creators are rewarded adequately, and that is a difficult question to answer.

We have answered these questions many centuries back. Let us not forget that copyright is a product of the industrial revolution. It is not something we inherently had and enjoyed throughout human civilization; it is as recent as 300 or 400 years old, at least in the form we see right now.

At all points of time there has been displacement of labor, and there has been unfair treatment of labor. We should not repeat it this time, because we have all the examples of history to show how labor has been treated over the years.

The reason it has become such a big issue is also that, at some level, it has touched the highest echelons of society: the knowledge society. Earlier, artisans were replaced by factory workers. Now it is the lawyers, consultants and engineers, the ones at the top of the ladder, who are being displaced. Their voice is louder, and that is why there is so much hue and cry now.

Of course, does that mean lawyers will die out? I do not think so. Even now we have excellent shoemakers, excellent chefs, great playwrights and so on who were not replaced by the industrial revolution. So different models can coexist. Our society is going through a transition, and we will get used to it. I believe finding a fair balance is the right way to do it.

Just to answer your last question: no, a generative AI model cannot be an author, because authorship is a human concept, and I do not think we should move in a direction where we grant legal standing to AI models. There should always be human or organizational accountability behind any AI model we are talking about.

Sriram: On the point of balance and coexistence, how can we build an equitable ecosystem for smaller startups that train on data? As you know, larger companies have access to better resources and infrastructure, which gives them an edge. From a corporate law perspective as well, how can we make this system benefit smaller entrepreneurs?

Nikhil: So I think for that to really happen, we should probably move away from capitalism as a system. That is how I would answer that question because this is the way markets work. The first person who moves takes all the advantage, and they reap the benefits of it. Now that may create some power centers. It may create roadblocks for newer entrants to play.

Then there are questions that we should ask. When there are AI models which have been trained on this data and there are large commercial benefits that some of these developers and deployers are taking, should they have another obligation with respect to the data that they have trained on? Should there be an obligation on them to actually give that data to someone else who is doing something? So that is a discussion around access to non-personal data per se, which is about how do we use data.

Largely the concept around data has been that there is no ownership in data. There is only a right to access the data or a right to use the data. Then should we say that data silos should be broken? Should the data come out? Should the data be allowed to be used? But at the same time, how do we know that that data will be used in a responsible manner? So, we need to think about frameworks.

India's non-personal data framework was a pre-generative-AI discussion that started thinking along these directions. Similarly, the EU's altruistic data-sharing project thinks along these directions. But it is time to start thinking about it in the context of generative AI and training data, and probably we will find answers along the way. I do not have all the answers right now.

Keshav: One point you discussed was harm. If we use such models in legal systems, we would also have to deal with the question of who should be held liable. There has been a lot of talk on this. Where do you think liability should lie: with the person who made the model, with the person who used it, or somewhere in between?

Nikhil: Again, this is a very case-by-case, fact-based determination. We need to understand that when the law examines a multi-party or multi-stakeholder liability situation, it essentially asks who is responsible for doing what and how, and apportions liability among the parties accordingly. Where the fault lies is a question of fact; there is no easy answer. One cannot simply say the developer should be responsible, or the deployer.

But there is always a sliding scale of liability depending on the role performed by each stakeholder. A developer can only do so much: develop the model, train it on responsible data, put in the right kind of filters. Once it goes to the deployer, the deployer is free to train the model the way they want, and if they have made calls that amount to training it irresponsibly, it should be the deployer who is responsible. At the same time, in many instances the deployer and the developer are the same, especially for foundational generative AI models.

As for users, a lot lies with the user. You can deceive an AI into producing illegal content; it is theoretically possible, and there are many instances where it has been done. You can trick it; you can play the emotional card to get things done. If you produce infringing content on that basis, why should the deployer and developer be liable? It should be the user.

Also, let us examine another aspect. Say a model produces copyright-infringing material. If you regulate that by saying the generation itself is infringement and the developer should be held liable, then we are regulating machine cognition, which works much like human cognition does.

For instance, we are all inspired by art. Let us say I draw my own version of the Mona Lisa and keep it at home. I am sure it will not be as good. Of course, the Mona Lisa is not the right example because its copyright has lapsed; but if I did the same with a recent painting, then published it and tried to make money, that is when the harm is really caused.

So then, who is responsible now? Is it at the generation stage, or is it at the dissemination stage? When is harm caused? I believe that it is at the dissemination stage, which means that the user should be held liable or accountable for the acts.

All AI discussions, even the responsible AI paper that we did in 2018, have at their center this core concept: there should be humans behind the machine who are accountable for its acts. We cannot just blame the machine and say the machine did everything wrong.

So from that perspective, in those instances, it is the user who should be held responsible. We need to examine it on a fact-specific, case-by-case basis and come to a determination on where liability lies.

Sriram: On that note, what is your opinion on the existing liability laws in India, especially Section 79 of the Information Technology Act? Of course, when the drafters made that provision, they could not have predicted this. How do you think such legislation can accommodate the new situations we are facing?

Nikhil: There are two aspects. One is the user prompt, which is an input: Section 79 should apply there. But for a response generated in reply to a user input, it is very clear that Section 79 does not contemplate those situations, because it deals with third-party content. The question then is whether an output, which results from an input as well as from the training, the changing of weights, and the various modifications the developer or deployer has made to the model, is third-party content or their own content.

To my mind, I would treat it as third-party content, but the law is currently unclear on this. So we should probably examine expanding or clarifying Section 79 to state that if a deployer has put out a model and a user provides inputs, and certain due diligence or duty of care (an established principle we have always followed) has been observed, then the developer should not be held liable.

For a couple of reasons. First, the developer has exercised the duty of care. Second, the developer cannot always predict how the model will respond. This is not a deterministic technology: in a Word document, if you keyword-search for a string, you are sure the search will find it if it is there. That is not the case with AI, which could give a completely different answer to a slightly different prompt. It is a probabilistic technology, so it can give different outputs based on the approximations it makes over the data it works on. Given that, if due care and standards have been observed, the developer should not be held liable.
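
To make the deterministic-versus-probabilistic contrast concrete, here is a toy sketch; the miniature "next word" table is an invented stand-in for an LLM, not how any real model works:

```python
import random

DOCUMENT = "The deployer shall exercise due diligence."

def keyword_search(text: str, query: str) -> bool:
    # Deterministic: identical inputs always produce the identical answer.
    return query in text

# A toy weighted "next word" table standing in for an LLM's distribution.
NEXT_WORD = {"due": [("diligence", 0.7), ("care", 0.3)]}

def sample_next(word: str) -> str:
    # Probabilistic: the output can vary from run to run.
    options, weights = zip(*NEXT_WORD[word])
    return random.choices(options, weights=weights)[0]

print(keyword_search(DOCUMENT, "due diligence"))  # always True
print(sample_next("due"))  # sometimes "diligence", sometimes "care"
```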

But having said that, if it is a recurring harm and they have been put on notice that this sort of harm is being caused, then the developer should carry out remediation. This is the notice-and-remediation measure we proposed in the Trilegal white paper on regulating AI. And of course, there should be a higher standard of care for extreme content, such as child abuse material, which causes extreme harm; there could be a different form of liability for those sorts of models. At some level, if the AI companies come together and implement these standards of duty of care in the way they think appropriate, that itself will form the basis of regulation going forward. So this is the framework I think is right for generative AI. I will wait before discussing a similar liability framework for agentic or kinetic models; let us see real-world examples of those play out.

Keshav: As you know, a lot of good has come out of tech in law especially. We no longer have to go through case law in printed journals; we now have legal databases. Your firm has recently started using an AI agent called Lucio. How has your experience with it been?

Nikhil: The firm has been using several AI tools. Even before Lucio, even before generative-AI LLMs, we had tried and tested tools. We are a fairly innovative firm, not just in technology but in our whole equity model, which you know about; our management structure is really different from many of our peers. So we are a truly innovative firm. We use leading technology and have often been pioneers in doing so.

And Lucio is only the latest example of that. It is not just Lucio; we in fact have at least four other AI tools in the stack, but Lucio is the most used within the firm. The advantage we find with Lucio is that its team has spent a significant amount of time inside our system to understand the requirements and needs of Trilegal's lawyers, and many aspects have been customized for specific practice areas depending on their needs and demands. So it has been a great journey.

At the same time, some folks approach AI tools the way I described the Word document: they want certainty from the tool, and it surprises them that a computer, which has almost always given them a yes-or-no answer that is either right or absent, can now give an answer like a human being, one that could even be wrong. When you work with a lawyer, you do not always get the right answer. You sometimes get wrong answers from a trainee, a junior associate, or an intern. But how do you figure it out? If you are a good lawyer, you can examine the work and see where the thinking went wrong, and that is how you catch these mistakes.

Similarly, you should approach AI with the same level of intuition. Those who approach AI with that kind of intuition have been using it very successfully, whereas others are still learning how. That has been the experience of using AI so far.

And we have found significant gains in time when AI is used in the right way. We use it for creating lists of dates in disputes, for quick summaries, for drafting, and for searching larger databases; we find these of incredible value. These are some of the use cases where Lucio has been incredibly helpful. Research is the other area where a lot of time is saved.

Sriram: On the point of judicial application, one big issue at the district court level is that most district courts are not staffed with enough registry workers or non-judicial staff. Do you believe AI can be used to alleviate those pressures, especially manpower shortages? And what do you think would be a responsible way to apply it?

Nikhil: It is a larger project, and I do not think it is just at the lower judiciary level. It is something we need to do throughout the judiciary, and indeed throughout the executive itself. But it is a longer-term project that requires a lot of work, and when you use AI tools that reach conclusions on an approximate, probabilistic basis, there are only so many things we can do at the current stage of the technology's development. At the same time, it is going to be an ongoing process.

At some level, the judiciary's main problems are not solvable just with AI. For instance, many courts in the country do not accept UPI as payment; you still need to pay filing fees in cash. We need to start by fixing those things. We have e-filing systems where, after e-filing, we still have to print out and submit the documents in court. We need to get rid of such redundancies, have better filing systems, and have intuitive design for them. It is a huge problem. We have made progress in the last decade or so in using technology within the judiciary, but there is a long way to go. AI has a significant role to play, but AI is not where we need to start.

And at some level, what we also need to understand is that, as I said, even in the best law firms, not just in India but across the globe, there is a problem with AI adoption because of mindset.

So how are we going to empower lower-level judicial officers and bureaucrats to leapfrog from using no tech at all to using AI? If we try to do that, it is going to be quite disruptive and shake up the system in probably not so nice a manner. So we need to start slowly, find champions of the use of technology, and then find use cases that work.

For instance, the younger generation of officials who are joining will probably be more inclined when it comes to learning technology as opposed to the older generation. Is there a point in enforcing technology on the older generation when they have found their own ways of achieving efficiency? So we need to have a slightly different strategy when it comes to this, and that is a big challenge that we need to solve.

Keshav: We talked about the good that could come of using AI, but let us also discuss the bad side of things. One thing you just mentioned is that in future we may see younger judges using these models. A concern a lot of people might share is: to what extent should we give away the process of judicial reasoning? To what extent should we, in some sense, automate it? Do you think that poses a concern?

Nikhil: Just to correct: I did not mean only younger judicial officers when I referred to lower-level officers in the judiciary. There are some judges, including very senior judges, who already use AI tools for their research and other aspects of their daily functions. What I was saying is that we should never see ultimate judicial decision-making delegated to technology; at least I hope I never see that in my lifetime. Instead, technology should aid judicial decision-making.

In fact, we could examine training models that aid judges, but the judges should be in control, the human in the loop, to ensure that an appropriate, judicious decision is made. So I do not see us, at least in the near future, moving in a direction where AI makes automated decisions, whether in the judiciary or in lawyering. I hope I will not see it.

Sriram: In your experience, as things stand today, how reliable are the current legal AI models for research or drafting? How far can you properly rely on them? Where do they fall short, and how can they be improved?

Nikhil: The tools we are using today are largely only as good as the foundation model underneath and how it behaves at a point in time. From that perspective, there is a possibility of error, and that puts people off using the technology; it is a scary scenario if the tech goes wrong.

So, the way I would approach the technology is the way you approach another lawyer, another human being. When you work with them, what level of supervision do you need? If a lawyer is exceptionally good, your supervision will be lighter than with a lawyer who is just learning the ropes. You need to treat AI like a lawyer who is learning the ropes, and question every answer it gives in your mind.

Most of the legal tools we see nowadays have multiple layers of fact-checking; they have multiple levels in which different models call each other to perform particular functions. Most of the products on the market, at least most of those we use, are built on multiple models. For instance, Gemini has a large context window, so it is used for processing that requires one; GPT has better reasoning, so it is used where reasoning matters; Claude's Sonnet 4.5 is better at writing, so Sonnet is used for writing. There are ways to fine-tune and augment the capacity of the tool depending on the actual function it needs to perform.

And multiple parallel calls happen at the same time, smoothly, so that you get a near-instant result. When you work with providers who understand these nuances, you get a reasonably good answer. But at the same time, you need to use your intuition, your gut feeling as a lawyer, while reviewing the work to ensure that it is accurate. That is the only way to go about using this.
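
A minimal sketch of the routing and parallel-call pattern described above; the model names in the table, the TASK_MODEL mapping and the call_model stub are illustrative assumptions, not the actual architecture of Lucio or any other product:

```python
import asyncio

# Route each task type to the model assumed best suited for it.
TASK_MODEL = {
    "long_document_review": "gemini",   # large context window
    "legal_reasoning": "gpt",           # stronger reasoning
    "drafting": "claude-sonnet",        # stronger writing
}

async def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real API call to the chosen provider.
    await asyncio.sleep(0)
    return f"[{model}] response to: {prompt[:40]}"

async def run_tasks(tasks: list[tuple[str, str]]) -> list[str]:
    # Fire the calls in parallel so the combined result is near-instant.
    coros = [call_model(TASK_MODEL[kind], prompt) for kind, prompt in tasks]
    return await asyncio.gather(*coros)

results = asyncio.run(run_tasks([
    ("long_document_review", "Summarise this 300-page agreement ..."),
    ("drafting", "Draft a notice-and-remediation clause ..."),
]))
print(results)
```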

As for the level of accuracy, it is very difficult to put a number to it. We have seen very silly errors with numbers: ask an LLM to do even a basic calculation and it can sometimes be completely thrown off. I personally have not seen great results from most tools on case-law research; legal research generally is probably better. And sometimes the tools do not hallucinate a case at all, they find the right case, but come up with a completely different interpretation, one that arguably is not even a subsidiary holding in the judgment. We have seen instances of that.

So the legal reasoning that is often required is also not coming through in most instances. But I think that will improve. We need to understand that this technology is developing, and we need to give it the right kind of guidance to grow; that, I hope, is what we are all doing.

Keshav: One of the things we can use tech for is the good of the larger masses, as we have seen with UPI. There is a concept of "rules as code" that has gained traction worldwide, which involves translating law into executable code to help build infrastructure. One major issue I see in India is that we have no authoritative, consolidated online source for laws; even India Code carries a disclaimer that its content might not be fully accurate. How do you think we can reach the point where we have a wholly consolidated source for acts and rules?

Nikhil: You have hit the nail on the head; that is a very significant problem we face. In fact, one of my partners, Jaideep Reddy, was involved in an effort seeking the various versions of amendments, to ensure that updated versions of laws are put out. So this is a major problem. But let me start with rules as code.

In India, we do not have the textbook example of rules as code; what has happened here is something different, and that is the beauty of the UPI infrastructure we have. In a typical model, we would have come out with rules-based or legislation-based governance of a particular system and imposed it as a license condition on a player. We have done that in the past with some payment service providers and with telecom service providers: we tell them there are rules under the act, a license is issued, you play by those rules, we enforce on that basis, and you build the systems and networks.

Instead, with UPI and models like ONDC, which we have worked on a lot, the interesting thing is that rather than legislation constraining the system, the legislative requirements are built into the code itself. That is essentially a new model of rule by code, not exactly converting existing rules into code and enforcing them.

Converting existing rules into code and enforcing them as code is also a very interesting project. We do not have good examples of that yet. I am sure we will see them in future.

The simplest example of enforcing it would be to have an oracle record the relevant event and then impose fines or penalties based on certain acts. Traffic violations are a classic example, along with parking fines and the like. Traffic is a good area where we can actually do rule by code.

But even when we do rule by code, we need to think through a couple of aspects. One is that rule by code should currently deal only with implementation where there is no immediate threat to human dignity, human freedom, or fundamental rights. It is easy to enforce for traffic violations. But when we try to control somebody's entry into and exit from a public place, as we tried to do with Aarogya Setu, the consequences could be disastrous. Luckily, the government pulled Aarogya Setu back. One of the things they realized is that while in theory it works fine, giving an approximation of whether somebody could have had COVID, it was based on Bluetooth technology and approximations from cell phone data, and all sorts of errors could arise from that. To deny people entry into airports, railway stations and workplaces on that basis would have cut off their fundamental rights. So despite the circulars and rules, the government did not enforce it that way; they understood that enforcing it could have caused disastrous consequences.

So when code curtails our freedom of movement, we have a problem. When we impose rule by code, one thing to keep in mind is this: jumping a signal is an offence whether you did it because you were rushing to meet your girlfriend or because you were rushing to buy medicine for a family member in critical care. It is still a violation. But if it goes before a judge, the judge may view the latter more favourably than the former. That is where human discretion comes in.

Another interesting example of rule by code, this time using AI cameras, is Kerala's experiment with cameras enforcing speed limits and traffic violations. Some interesting errors happened. For instance, a bike was fined for crossing 1,000 kilometres per hour. A person was shown as not wearing a helmet because his hair was glaringly black on a very sunny afternoon in Kochi. He was wrongly fined, and there was no human in the loop to check; it was just the code imposing the fine. That led to criticism and a big public outcry, and they had to scale it back and put a human in the loop to ensure things were fine.
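
A toy sketch of the plausibility-plus-human-review check this example points to; the threshold, record format and function names are invented for illustration:

```python
from dataclasses import dataclass

# Readings above this are treated as sensor errors, not violations.
MAX_PLAUSIBLE_SPEED_KMPH = 250

@dataclass
class Violation:
    vehicle_id: str
    measured_speed_kmph: float
    speed_limit_kmph: float

def route_violation(v: Violation) -> str:
    if v.measured_speed_kmph > MAX_PLAUSIBLE_SPEED_KMPH:
        return "human_review"   # e.g. the 1,000 km/h bike
    if v.measured_speed_kmph > v.speed_limit_kmph:
        return "issue_fine"
    return "no_action"

print(route_violation(Violation("KL-07-XX-0000", 1000.0, 80.0)))  # human_review
print(route_violation(Violation("KL-07-XX-0001", 95.0, 80.0)))    # issue_fine
```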

These are all interesting questions. I am a fan of the concept of rule by code, but there are checks and balances we need to build in, including human agency. That is very important.

Even from an access perspective, regardless of rule by code, it is important that updated legal documents are available to the citizens of India not just in English or Hindi but in the languages they understand. Otherwise, how can we as a country say that ignorance of the law is no excuse? We see this problem even at the lower levels of the bureaucracy: a lower-level bureaucrat is often unable to read English, interpret it and apply it. He relies on a translation. Who produces that translation? Is it available to him? Is the latest amended copy available?

And in this country, remember Section 66A of the IT Act, which was struck down: for at least three to four years afterwards, people were still being booked under 66A. So access to law is a problem; access to the latest law is a problem. The government came out with a Repealing and Amending Act, but unfortunately missed the opportunity. They said they removed many laws, but what they removed were amendment laws; the Second Law Commission's Criminal Law Amendment was among those removed. If instead they had compiled the laws and brought out translations in various languages, that would have been of incredible help to our society.

Keshav: One point you mentioned is the lack of actual usage of this concept in India. Do you think that is because of a lack of initiative by the state? In France, for example, the government coded the whole tax code and made it open, so people can see how taxes would actually be imposed on them. But do we not already see such systems in the hands of state officials? Say, if we go to the RTO, there is a system for them to register you for a license. Do you think making such software open source and open to the masses is the way to go?

Nikhil: We definitely cannot compare the French government with the Indian government. We may boast that we are a bigger economy than France, but our per capita income is very low and we have a large number of people to lift out of poverty. So the state capacity is different, and we are such a huge country. This country has a history of colonization and plunder, so it is unfair to compare us with an erstwhile colonial state that extracted a lot of wealth, is sitting on that wealth, and can afford to do such things. If there were a way to bring a lot of that wealth back, I am sure the government would focus on this too. So I would not blame the government.

At some level, the government needs to start thinking about policy-level guidance on how access could be provided. The government has started doing many things; there are portals. But the problem is that now there is a portal for everything. We have reached a situation where everything is done by portal: one portal for death registration, one for birth registration, another for licenses, one for the RTO, another for something else. But are all these portals really working?

Every year, every company in India loses a significant amount of time filing its annual compliance forms, because the Ministry of Corporate Affairs website crashes around the annual filing date, typically October-November, which also coincides with the Diwali and Pooja holidays. It causes a lot of frustration. Things get delayed, companies end up paying fines, and then they have to go for compounding. And the interesting thing is that in India, after you go for compounding, you get an order and pay the fine, and then you have to file yet another form to tell the same ministry that you have compounded and paid the money, even though they gave you the receipt, just to ask them to remove your name from the list of defaulters. That is how complicated our systems are. These problems are extremely complicated.

So what the government should examine is removing those complexities from the portals to make life easy for our citizens. Tax laws are a good example. Another thing we could do is simplify those portals and focus more on their design, because some of them are designed so poorly that even the most tech-friendly person will struggle. The official staff struggle too, and sometimes there are loopholes in the design of these portals.

I will give you a good example. I got an electric car, thinking I was being a good citizen, and because it is an electric car, I did not have to pay road tax on it. I have since finished my car loan and obtained the hypothecation closure certificate from the bank; now I need to remove the hypothecation lien and get an RC book showing there is no charge by the bank. It has been three months and I am unable to do it, because the official cannot give me a clearance certificate: I had not paid road tax three years back when I got the car. He says he can only give the clearance against a tax-paid receipt, but there was no tax to be paid at that point of time.

So my RC book has been stuck at the Bangalore RTO for the last four months. Is that the official's problem? No, it is the portal: it does not let him move from one step to the next without that receipt. And these are the kinds of challenges an average Indian, and an average business, faces.

Open source is a completely different discussion. I know many government programs run on open source, and I am always in support of open source, but I do not think that is the right focus for the government. Even in this area, even for these portals, what is lacking is design thinking, so they only add complexity and make things cumbersome. The government should enable more private participation, because it obviously does not have the budget and capacity to solve all the problems around it, and encourage better designs to ensure that citizens' problems are solved.

Sriram: Thank you so much for your time.

Nikhil: Thank you very much.
