Amit Kumar*

Automated decision-making systems increasingly animate various facets of human activity. These systems, powered by artificial intelligence (AI), hold the promise of streamlining processes, enhancing efficiency, and driving innovation. However, they also raise significant ethical and legal concerns, particularly regarding data privacy, algorithmic bias, and accountability. In response to these challenges, regulatory frameworks such as the European Union’s General Data Protection Regulation (GDPR) and the EU Artificial Intelligence (AI) Act, both avant-garde and prescient in their own right, have emerged as key instruments for governing the ethical and responsible use of AI technologies. This article undertakes a comparative analysis of the GDPR and the EU AI Act, exploring their salient provisions, similarities, divergences and interoperability, as well as their capacity to inspire analogous legislation beyond the European Union. By examining the intersection of the data protection and AI regulation regimes, the analysis sheds light on a governance framework that, acting in tandem, seeks to regulate the shifting sands of ADM technologies, the context sensitivity of their myriad deployment scenarios, and their ramifications for societies worldwide.
Introduction
Even in the fast-paced world of technology and innovation, the vertiginous rise of Artificial Intelligence (AI) has been singularly dramatic. AI has ushered in transformative, often disruptive advancements across various sectors, promising unprecedented innovation and efficiency. However, as AI becomes increasingly integrated into our daily lives, concerns surrounding data privacy, ethical implications, and regulatory oversight have come to the fore. The all-pervasive nature and cross-border applicability of AI have opened a Pandora’s box of questions relating to ethical ambiguities, harm, bias, trust and accountability in the collection, processing and utilization of data.
Artificial intelligence depends heavily on extensive data sets, often obtained through data mining procedures. This data can contain sensitive information, creating the risk that individuals’ private data is exposed in the outputs generated by AI systems. Furthermore, the “black box” nature of data processing within AI operational frameworks has raised concerns of its own. A black box refers to a system or model whose internal workings are opaque or not easily understandable to humans. In other words, while the inputs and outputs of the system are observable, the process by which the system arrives at its outputs is neither transparent nor interpretable.
AI models, particularly deep learning models such as neural networks, can be highly complex, with millions of parameters and intricate mathematical operations. Understanding how these models arrive at their decisions or predictions can be challenging, even for experts in the field. The internal mechanisms of black box AI systems may be inscrutable, making it difficult to interpret why a particular decision was made or to trace the reasoning behind a prediction. This lack of interpretability can be a barrier to trust and accountability, especially in high-stakes applications like healthcare or finance. Black box AI systems can inadvertently perpetuate bias or discrimination present in the data used for training. Without visibility into how decisions are made, it can be challenging to detect and mitigate biases, leading to unfair or inequitable outcomes for certain groups of people.
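By way of illustration, the following minimal sketch, written in Python with the scikit-learn library on purely synthetic data (the loan-application framing and every parameter are invented for the example), trains a small neural network whose inputs and outputs are plainly observable while its internal reasoning is not:

```python
# Illustrative only: a small neural network trained on synthetic data.
# The model's inputs and outputs are fully observable, but the reasoning
# behind any single prediction is buried in thousands of learned weights.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for, say, loan-application data:
# 20 anonymous features, binary outcome (approve / reject).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)

applicant = X[:1]
print("decision:", model.predict(applicant))          # observable output
print("confidence:", model.predict_proba(applicant))  # observable output

# The only default "explanation" is the raw weight matrices: thousands
# of numbers with no human-readable meaning.
print("learned weights:", sum(w.size for w in model.coefs_))
```

Nothing in the model’s output tells the affected individual, or even the system’s operator, which features drove the decision or why.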
To pierce the opaqueness inherent in black box AI, explainable AI (xAI) has been projected as an alternative. While xAI holds promise, it grapples with numerous problems of its own, at least at the current state of the art. One significant challenge is navigating the balance between complexity and interpretability. This is particularly evident with complex AI models such as deep learning, where simplifying for interpretability risks compromising accuracy. There is, moreover, a persistent trade-off between accuracy and explainability: simplifying a model to make it more interpretable often degrades its performance, while highly accurate models tend to be less interpretable. Another obstacle is the context dependency of explanations. The validity of an explanation can vary greatly depending on the context, making it essential to represent contextual information accurately. User comprehension poses a further issue. Even when explanations are provided, users may struggle to understand them. xAI methods must therefore be designed in sync with users’ cognitive abilities and domain expertise so that explanations are meaningful and useful, something extremely difficult to achieve across heterogeneous user groups. These challenges become more acute when AI is deployed in macro-level public policy contexts to address wicked problems, which are often ambiguous, vexatious and multivariate, and hence difficult to circumscribe or define. They may engender distrust in the reliability, performance and outcomes of the decisions that even xAI arrives at.
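One common xAI technique, shown in the sketch below (again a hypothetical Python/scikit-learn example on the same synthetic setup), fits a small, human-readable “surrogate” decision tree to mimic the black box. It makes the accuracy/explainability trade-off tangible: the shallower and more readable the surrogate, the less faithfully it reproduces the black box’s behaviour:

```python
# Illustrative only: a global "surrogate" explanation. A shallow decision
# tree is trained to mimic the black box; its fidelity measures how well
# the readable model reproduces the opaque one.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)
targets = black_box.predict(X)  # the surrogate imitates the black box, not reality

# The accuracy/explainability trade-off in miniature: shallower (more
# readable) surrogates reproduce the black box less faithfully.
for depth in (2, 5, 10):
    surrogate = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X, targets)
    print(f"depth={depth:2d}  fidelity to black box={surrogate.score(X, targets):.2%}")

# A depth-2 tree can be printed and read in full, but it captures only
# part of what the black box actually does.
print(export_text(DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, targets)))
```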
In view of such issues and challenges, building effective and reliable regulatory regimes around AI has been the central concern of Automated Decision Making (ADM) jurisprudence, which is still nascent, dynamic and rapidly evolving. This throws up regulatory challenges at multiple levels. Alongside the AI legal regime, data protection and privacy laws have emerged as a key mechanism for addressing the risks inherent in AI’s involvement in decisions of legal and societal significance. These laws aim to safeguard individuals’ privacy rights and mitigate the potential negative impacts of AI on personal data protection.
Against this backdrop, two recent EU enactments are worthy of mention: the General Data Protection Regulation (GDPR) and the proposed EU Artificial Intelligence Act (AI Act), both avant-garde in their own right. The two legislations also share a common foundation: both have been set in motion in pursuance of Article 16 of the Treaty on the Functioning of the European Union (TFEU), which mandates the EU to lay down rules relating to the protection of individuals with regard to the processing of personal data. The GDPR and the proposed AI Act represent key pillars of digital regulation within the European Union (EU). While they address distinct facets, they share the common objectives of safeguarding individual rights and promoting accountability in the digital sphere. The GDPR stands as a comprehensive framework dedicated to preserving data protection and privacy rights. It mandates stringent protocols for the collection, processing, and storage of personal data, and its principles encompass consent, data minimization, purpose limitation, and robust security measures.
The AI Act targets the burgeoning realm of artificial intelligence (AI) and seeks to regulate its development and application. This legislation introduces requirements specific to AI systems, particularly those deemed high-risk. It mandates risk assessments to identify potential harms, such as discrimination or threats to human rights. Transparency emerges as a central tenet, compelling AI developers to provide clear information regarding system capabilities, limitations, and purposes.
Both GDPR and the AI Act emphasize transparency and accountability. GDPR mandates organizations to demonstrate compliance through transparency measures and accountability mechanisms. Similarly, the AI Act imposes transparency obligations on AI systems, necessitating clear communication with users and robust documentation practices, especially for high-risk applications. In terms of enforcement, both regulations empower supervisory authorities within the EU to oversee compliance and enforce penalties for violations. They exhibit extraterritorial reach, extending their jurisdiction beyond EU borders to entities offering services to EU residents or monitoring their behavior.
The implication, thus, is that the AI Act can also co-opt the protections afforded to data subjects under the GDPR. This provides a basis for complementarity between the two legislations, requiring AI systems governed by the AI Act to comply with and subscribe to the relevant provisions of the GDPR. The cumulative effect of juxtaposing the two acts can be a more secure and reliable AI deployment framework.
In the continuously shifting and evolving landscape of critical technologies, it is instructive to look at the intersections between the two laws. Pairing the two acts in the data privacy and security contexts of AI governance can go a long way towards allaying the fears and apprehensions surrounding AI deployments across multiple functional domains.
These are well-crafted legislations that envision strong protections and provide rights-based frameworks for data protection and AI regulation respectively. Operating in tandem, they can serve as benchmark laws in multiple AI deployment contexts.
The article attempts to circumscribe the opportunities, as well as the attendant challenges, thrown up by the proliferation of Automated Decision Making (ADM) technologies. It delves into two recently promulgated European Union instruments, the General Data Protection Regulation (GDPR) and the Artificial Intelligence (AI) Act, analysing their respective mandates and objectives and demonstrating how, acting in tandem, these enactments can pave the way for a more secure and reliable AI regulatory framework. In examining the anatomies of the two statutory frameworks in terms of their scope, operational field and stipulations, as well as their salient complementarities and distinctions, the article argues that the GDPR can effectively serve as a set of enabling rules and supply the AI Act with complementarities that facilitate more effective regulatory control over ADM systems, processes and deployments. Additionally, the twin enactments can potentially induce the proverbial “Brussels effect” and inspire the related legislation that many other jurisdictions, including India, are formulating or contemplating.
Synergies between GDPR and AI Act: Enhancing Data Protection in ADM Deployment
Automated Decision Making (ADM) involves the use of computer algorithms or systems to make decisions with minimal human involvement, relying on predefined rules, statistical models, or machine learning algorithms to process data and predict outcomes. While ADM offers efficiency, concerns arise regarding transparency, fairness, accountability, and potential biases in the data or algorithms used. ADM jurisprudence thus necessitates regulatory arrangements such as the EU GDPR and the EU AI Act to govern these processes and safeguards. The GDPR grants individuals the right to be informed when their data is being extracted and used, and extends data protection principles to ADM, ensuring the lawful and purposeful use of personal data. The AI Act categorizes AI systems by risk level and mandates transparency and traceability for high-risk ADM systems. Together, these regulations promote the responsible and ethical use of ADM technologies, emphasizing transparency, accountability, and fairness in automated decision-making so as to safeguard individuals’ rights within the EU.
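How little automation it takes to raise these questions can be seen in the following deliberately simple sketch (a hypothetical Python example; the credit rule, the 0.40 threshold and the field names are invented and prescribed by neither regulation). Even a one-rule system with no human in the loop implicates the core ADM concerns of both instruments: on what data was the decision based, is the logic traceable, and can the individual contest it?

```python
# Hypothetical only: the rule, threshold and field names are invented and
# are not drawn from the GDPR or the AI Act. A one-rule automated decision,
# logged so that the inputs, logic and time of the decision remain traceable
# and the individual can meaningfully contest the outcome.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    approved: bool
    reason: str      # human-readable account of the rule applied
    timestamp: str   # when the decision was taken

def automated_credit_decision(income: float, existing_debt: float) -> Decision:
    # Predefined rule: debt-to-income ratio must stay below 0.40.
    ratio = existing_debt / income if income > 0 else float("inf")
    return Decision(
        approved=ratio < 0.40,
        reason=f"debt-to-income ratio {ratio:.2f} against threshold 0.40",
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# No human in the loop: the applicant receives only the recorded decision.
print(automated_credit_decision(income=50_000, existing_debt=30_000))
```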
*Amit Kumar has been a fellow with the Max Planck Institute for Social Law and Social Policy, Munich. He holds postgraduate degrees in Law and French literature from the Indian Law Institute and Jawaharlal Nehru University, respectively. He currently teaches Public Policy, Human Rights and Jurisprudence at Maharashtra National Law University, Mumbai.
Categories: Law and Technology, Legislation and Government Policy
