Amit Kumar*

Automated decision-making (ADM) systems are increasingly animating various facets of human activity. These systems, powered by artificial intelligence (AI), hold the promise of streamlining processes, enhancing efficiency, and driving innovation. However, they also raise significant ethical and legal concerns, particularly regarding data privacy, algorithmic bias, and accountability. In response to these challenges, regulatory frameworks such as the European Union’s General Data Protection Regulation (GDPR) and the EU Artificial Intelligence (AI) Act, each avant-garde and prescient in its own right, have emerged as key instruments for governing the ethical and responsible use of AI technologies. This article undertakes a comparative analysis of the GDPR and the EU AI Act, exploring their salient provisions, similarities, divergences and interoperability, as well as their ability to inspire analogous legislation beyond the European Union. By examining the intersection of the data protection and AI regulation regimes, this analysis sheds light on a governance framework that, acting in tandem, seeks to regulate the shifting sands of ADM technologies, the context sensitivity of their myriad deployment scenarios, and their ramifications for societies worldwide.
GDPR Framework: Protecting Personal Data
The GDPR, which took effect in 2018, represents a landmark regulation designed to protect individuals’ personal data and privacy rights within the EU and the European Economic Area (EEA). It establishes strict guidelines for the collection, processing, and storage of data as it flows between end users and service providers, often through multiple intervening intermediaries. Its chapters and provisions set out clearly and exhaustively, inter alia, the scope, principles, rights, and conditions governing data, its usage, and its protection.
Article 4(1) of the GDPR, for instance, defines personal data as “any information relating to an identified or identifiable natural person.” If the person from whom the data originated can be identified either directly or indirectly, the data qualifies as personal and calls for adequate safeguard mechanisms. The GDPR specifically provides for data anonymization and pseudonymization to mask personal data and to ensure that it cannot be traced back to its source. Judicial interpretation has further broadened the definitional ambit and scope of personal data. In Peter Nowak v. Data Protection Commissioner, for instance, the Court of Justice of the EU highlighted the broad nature of this definition. ‘Personal data’ encompasses not only obvious identifiers like names and addresses but also less apparent data such as location data, online identifiers (e.g., IP addresses, cookies), and factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of individuals. Nor is the definition limited to sensitive or private information: it can extend to various other types of data, including subjective opinions and assessments, as long as they have a connection to the data subject.
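To make the mechanism concrete, the following minimal Python sketch (purely illustrative; the field names and secret key are hypothetical) pseudonymizes a direct identifier with a keyed hash, so that records can still be linked consistently while the identity cannot be recovered without the controller-held key:

```python
import hashlib
import hmac

# Hypothetical secret held only by the data controller; without it,
# the pseudonyms below cannot be traced back to the original identifiers.
SECRET_KEY = b"controller-held-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed hash (a pseudonym)."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "diagnosis": "flu"}

# Drop the direct identifiers and keep a stable pseudonym instead.
pseudonymized = {
    "subject_id": pseudonymize(record["email"]),
    "diagnosis": record["diagnosis"],
}
print(pseudonymized)
```

It bears noting that under the GDPR pseudonymized data remains personal data so long as the re-identification key exists; only when the link to the individual is irreversibly severed does the data count as anonymized and fall outside the Regulation’s scope.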
Additionally, Article 24 of the GDPR requires controllers, the entities responsible for determining how personal data is processed, to establish and demonstrate compliance with data processing rules through appropriate technical and organizational measures. This also imposes compliance requirements on AI developers. Such measures can include guaranteeing the sufficiency, comprehensiveness, and neutrality of training data, evaluating the validity of inferences made, and pinpointing sources of bias and inequity. Since AI systems often extrapolate from big data and process large amounts of personal information, organizations developing or deploying such systems have the responsibility to process the data lawfully and to ensure that it remains anonymized and is not reconnected or traced back to the individual from whom it was sourced. Analogous requirements are stipulated in the AI Act: provisions on training data, bias monitoring, and post-market monitoring are made obligatory under Articles 10 and 61 of that Act.
An example of such complementarity in the context of AI deployment would be the design of a healthcare AI system developed by a medical research institution. Such a system must be designed to comply with the GDPR’s requirements for processing patients’ health data and medical histories, with transparent methods and procedures for collecting and storing the data, as well as built-in safeguards and data security. Furthermore, if patient data is to be aggregated and processed to extrapolate patterns or trends (disease prevalence, for example), the system design must ensure adequate levels of anonymization or pseudonymization, as the case may be.
Another illustration is the process by which digital advertising platforms collect user data, which may include browsing history, search queries, app usage, and demographics. Personally identifiable information (PII) such as names and addresses contained within such datasets can then be either pseudonymized or removed to protect user identities, with each user’s data assigned a unique pseudonym or anonymized ID. Using the pseudonymized dataset, the platform trains machine learning models to predict user interests and preferences based on behavior. These models associate pseudonymized user profiles with specific product or service categories, enabling personalized ad targeting. When a user visits a website or app integrated with the platform, the platform receives anonymized signals indicating user interest or intent and, leveraging its trained models, selects and serves relevant ads based on the pseudonymized profile. Because raw identifiers never enter the targeting pipeline, privacy is maintained throughout. The platform respects user consent and adheres to data privacy regulations like the GDPR, providing users with options to opt out or manage their data preferences. This approach ensures personalized ad delivery while safeguarding user privacy and compliance with privacy laws.
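A compressed sketch of such a pipeline might look as follows; the identifiers, interest categories, and rudimentary scoring logic are all invented for illustration, and a production system would be far more elaborate:

```python
from collections import Counter

# Hypothetical opt-out register: users who withdrew consent are never profiled.
OPTED_OUT = {"user-42"}

def build_profile(pseudonym: str, events: list[str]) -> dict:
    """Aggregate behavioural events into a pseudonymized interest profile."""
    counts = Counter(events)
    return {"id": pseudonym, "top_interest": counts.most_common(1)[0][0]}

def select_ad(profile: dict, ads_by_category: dict[str, str]) -> str:
    """Pick an ad using only the pseudonymized profile, never raw PII."""
    return ads_by_category.get(profile["top_interest"], "generic-ad")

events = {"a1f3": ["sports", "sports", "travel"], "b7c9": ["cooking"]}
ads = {"sports": "running-shoes-ad", "cooking": "cookware-ad"}

for pseudonym, history in events.items():
    if pseudonym in OPTED_OUT:  # honour opt-out before any processing
        continue
    print(pseudonym, "->", select_ad(build_profile(pseudonym, history), ads))
```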
Another crucial area of interface between the two enactments is the stage of system design. Article 25 of the GDPR provides for data protection by design and by default. This entails an obligation to embed data privacy and data screening mechanisms at the process and structural design stage itself. Organizations must consider data protection and privacy concerns from the very inception of any new system, process, or technology that involves the processing of personal data. The article mandates that core data protection measures be integrated at the design stage rather than as a post-facto measure. This involves incorporating privacy-enhancing features such as data minimization, pseudonymization, encryption, access controls, and data retention limits into the very architecture and functionality of the system, preconfigured by default.
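One hedged illustration of “by default” is a processing configuration whose most protective settings apply automatically unless a documented decision relaxes them; the settings and values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessingConfig:
    """Hypothetical configuration with protective defaults, in the spirit of
    GDPR Article 25: the strictest settings apply unless explicitly relaxed."""
    pseudonymize_identifiers: bool = True      # on by default
    encrypt_at_rest: bool = True               # on by default
    retention_days: int = 30                   # short retention by default
    collected_fields: tuple = ("subject_id",)  # minimal field set

# A new component inherits the protective defaults without any extra work.
print(ProcessingConfig())
```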
The underlying objective is to limit the processing of personal data to the extent necessary to achieve the specific intended purpose. Compliance with this article is essential for organizations to demonstrate their commitment to data protection and fulfill their legal obligations under the GDPR. The same principles apply to AI systems, obligating their developers to embed privacy safeguards into design and default settings and to keep data privacy and protection at the core of their structural and processual design.
Furthermore, the GDPR specifies that organizations and systems should handle only the minimum amount of personal data required for each particular purpose, with consideration given to the volume of data, the scope of processing, the duration of storage, and the level of accessibility. This provision emphasizes the importance of carefully managing personal data in AI applications to ensure compliance with privacy regulations. Data minimization and purpose limitation, both key elements of the GDPR, thus fit neatly within AI deployment contexts. AI developers and systems are required to limit the collection and processing of personal data to the extent necessary to achieve their intended objectives, and the processing of personal data must be confined to essential purposes and occur only for legitimate, explicit, and clearly defined reasons [Article 10(2)].
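In code, purpose limitation and data minimization can be enforced together by a simple allowlist keyed to each declared purpose; the purposes and field names below are hypothetical:

```python
# Hypothetical purpose register: each declared purpose maps to the only
# fields that may be processed for it.
PURPOSE_ALLOWLIST = {
    "appointment_scheduling": {"subject_id", "preferred_time"},
    "disease_prevalence_stats": {"subject_id", "diagnosis", "region"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields permitted for the declared purpose."""
    allowed = PURPOSE_ALLOWLIST[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"subject_id": "a1f3", "diagnosis": "flu", "region": "EU-West",
       "preferred_time": "09:00", "salary": 50000}
print(minimize(raw, "disease_prevalence_stats"))
# salary and preferred_time never reach the analytics pipeline
```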
Equally central to both enactments is the emphasis accorded to transparency and informed consent. Where consent is the lawful basis for processing, the GDPR stipulates that data can be appropriated and utilized only with the explicit and deliberate consent of the data subjects. This is to be achieved through granular consent settings and dynamic consent management, so that individuals can exercise greater control over their data and how it is utilized. The more salient provisions include lawful, fair, and transparent processing of data [Article 5(1)(a)] and informed consent [Articles 6 and 7]. The GDPR also grants data subjects a suite of rights to exercise control over their personal data, including the rights to access, rectify, erase, restrict processing, object to processing, and data portability (Articles 15 to 22).
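A minimal sketch of granular, dynamic consent management, with all subject identifiers and purposes hypothetical, might record consent per purpose and check it before any processing:

```python
from datetime import datetime, timezone

# Hypothetical consent ledger: one entry per (subject, purpose) pair.
consents: dict[tuple[str, str], datetime] = {}

def grant(subject: str, purpose: str) -> None:
    consents[(subject, purpose)] = datetime.now(timezone.utc)

def withdraw(subject: str, purpose: str) -> None:
    consents.pop((subject, purpose), None)

def may_process(subject: str, purpose: str) -> bool:
    """Processing proceeds only if consent exists for this exact purpose."""
    return (subject, purpose) in consents

grant("a1f3", "personalized_ads")
print(may_process("a1f3", "personalized_ads"))  # True
print(may_process("a1f3", "model_training"))    # False: no blanket consent
withdraw("a1f3", "personalized_ads")
print(may_process("a1f3", "personalized_ads"))  # False after withdrawal
```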
Transparency and consent are also key to the Artificial Intelligence Act, which obligates providers to ensure that AI systems with a direct human interface clearly and distinctly inform users that they are interacting with an AI device or system (Article 52).
Another crucial aspect on which the two acts align is data protection impact assessment. Under Articles 35 and 36 of the GDPR, a data protection impact assessment (DPIA) is mandatory for processing that could jeopardize individuals’ rights and freedoms, especially processing involving systematic and extensive automated profiling. This concern becomes particularly pertinent when AI systems are involved in automated decision-making about individuals. The GDPR mandates that organizations conduct DPIAs for processing activities likely to result in a high risk to individuals’ rights and freedoms; such risk-based assessment enables identification and mitigation of the potential risks associated with data processing activities. Article 22 of the GDPR, which governs automated individual decision-making, including profiling, also becomes relevant in this context. The article applies when decisions that significantly affect individuals are made solely through automated processing, without human involvement. Under it, data subjects have the right to challenge such decisions: they can request human intervention, offer their perspective, and contest the outcome. This provision aims to safeguard individuals’ rights and freedoms in the context of automated decision-making and profiling under the GDPR.
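In system-design terms, Article 22 translates into a gate that never lets the model finalize a significant decision on its own; the decision types below are hypothetical:

```python
# Hypothetical gate in the spirit of GDPR Article 22: decisions with legal or
# similarly significant effects are routed to a human reviewer, never
# finalized by the model alone.
SIGNIFICANT_DECISIONS = {"loan_denial", "job_rejection"}

def finalize(decision_type: str, model_output: str) -> str:
    if decision_type in SIGNIFICANT_DECISIONS:
        return f"PENDING: '{model_output}' queued for human review"
    return f"AUTOMATED: {model_output}"

print(finalize("loan_denial", "reject"))      # human intervention required
print(finalize("ad_selection", "show_ad_7"))  # low impact, fully automated
```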
AI Act Compliance: Managing AI Risks
Analogous to the DPIA, the Artificial Intelligence Act also adopts a risk-based approach, classifying AI systems in proportion to the risk they might pose to users. The Act proposes a risk assessment framework that evaluates various factors, including the system’s intended purpose, the context in which it will be used, the potential impact on individuals and society, and the likelihood of harm occurring.
The AI Act categorizes AI systems into different risk levels based on their potential to cause harm (Article 6). These progressively increasing levels are minimal risk, limited risk, high risk, and unacceptable risk, with classification depending on factors such as the system’s intended purpose, its technical characteristics, and the potential consequences of failure or misuse. The unacceptable-risk category comprises prohibited artificial intelligence practices, including cognitive behavioral manipulation (particularly of specific groups), social scoring that classifies people on the basis of behavioral traits or socio-economic status, and biometric identification systems such as facial recognition. Practices in this category are largely prohibited. The minimal-risk and limited-risk categories are allowed, subject to minimal regulatory requirements or self-regulation.
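As a rough illustration only (the real classification follows the Act’s annexes, not a lookup table), the tiered scheme can be sketched as follows:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict compliance obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical mapping from example use cases to tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis_support": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    # Default conservatively to HIGH for unknown use cases.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in USE_CASE_TIERS:
    print(obligations(case))
```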
The Act, however, places particular emphasis on regulating high-risk AI systems, which have the potential to cause significant harm to individuals or society. Examples of high-risk applications include those used in critical infrastructure, healthcare, transportation, and law enforcement. These systems are permissible, albeit subject to strict compliance requirements, such as data quality, transparency, explainability, robustness, and human oversight, as well as pre-market conformity assessment (Article 43) and post-market monitoring. Pre-market conformity assessment often involves third-party assessment and certification to verify that AI systems meet the necessary standards and safeguards before being deployed or placed on the market. Once a system is on the market, the AI Act emphasizes ongoing monitoring and review of its compliance with regulatory requirements, including regular audits, evaluations, and updates to ensure that AI technologies continue to meet evolving standards and mitigate potential risks effectively.
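A toy example of what post-market monitoring could track, with the declared accuracy, the tolerance, and the outcome data all invented for illustration:

```python
import statistics

# Accuracy level hypothetically declared at conformity assessment.
DECLARED_ACCURACY = 0.92
TOLERANCE = 0.05

def monitor(outcomes: list[int]) -> str:
    """outcomes: 1 if a decision was later confirmed correct, else 0."""
    live = statistics.mean(outcomes)
    if live < DECLARED_ACCURACY - TOLERANCE:
        return f"ALERT: live accuracy {live:.2f} below declared level; audit"
    return f"OK: live accuracy {live:.2f}"

print(monitor([1, 1, 0, 1, 1, 1, 0, 1, 1, 1]))  # 0.80 -> triggers an alert
```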
Integration for Effective AI Deployment
A comparative, simultaneous reading of the two Acts makes it clear that although their lenses and focuses differ, there are parallels, equivalences, and consistencies between them, evident in the analogous and complementary provisions they contain. The AI Act is geared towards service providers and users, while the GDPR acts as a shield against data and privacy infringements by the entities controlling and processing data. A synergistic application of the two can thus go a long way towards balancing the hitherto often incompatible goals of service efficiency and data protection, especially in the current context of AI deployment. It could usher in an era of secure and robust AI-driven technologies in our daily lives while allaying concerns about the undesirable consequences of an unmitigated invasion of AI into human life.
Global Impact and Adoption
Although the GDPR and the AI Act both originated in the EU and respectively govern personal data and AI-driven technologies and systems within the European Union, both also have extraterritorial effect. Hence, alongside imposing compliance requirements beyond the boundaries of the European Union, these pioneering acts set benchmarks that other national jurisdictions can adopt and incorporate. India, for instance, has sought to establish a data protection regime through the Digital Personal Data Protection Act, 2023, while its AI regulation and governance ecosystem is still in its infancy. Both the data protection framework and any potential AI legislation will require robust formulation, revision, and strengthening. By aligning with key principles and provisions of the EU’s regulatory frameworks, India can strengthen its data protection regime, promote responsible AI innovation, and contribute to global efforts towards harmonized AI governance.
By adopting a framework modeled on the EU’s GDPR, India can benefit from a well-established legal architecture that provides clear guidelines for the collection, processing, and storage of personal data. This would fortify data protection standards within India and usher in transparency and accountability in data handling practices. Aligning with the GDPR would also facilitate cross-border data flows between India and EU member states, crucial for Indian businesses engaged in international trade and data exchange, ensuring compliance with EU data protection laws and interoperability in a global digital economy.
Likewise, incorporating the ethical AI practices emphasized by the EU AI Act, such as transparency, traceability, and human oversight for high-risk AI systems, into domestic AI legislation would enable India to effectuate responsible AI deployment. A risk-based approach to AI regulation, as outlined in the EU Act, would help India identify and mitigate the potential risks of AI applications in critical sectors such as healthcare, transportation, and law enforcement, to name a few.
Aligning with EU data protection and AI regulations would also elevate India’s compliance standards and bring them on par with evolving, state-of-the-art international standards. It would facilitate informed collaboration with the EU and other countries on data protection and AI governance initiatives on an equal footing. These two EU laws can in fact serve as the gold standard in ADM-related jurisprudence for other jurisdictions to follow, exhibiting what is arguably a regulatory diffusion beyond EU borders (the ‘Brussels effect’). The result could be greater alignment and uniformity of related laws, and cooperation in addressing the global challenges posed by digital technologies, facilitating the harmonization of data protection and AI governance practices worldwide.
Finally, adopting robust data protection laws akin to the GDPR would strengthen consumer trust in India’s digital economy. Clearly articulated rights for data subjects would empower individuals to exercise control over their personal data, promoting privacy rights and data sovereignty. The GDPR’s emphasis on data security measures and accountability principles would also bolster cybersecurity practices within India, enhancing resilience against data breaches and cyber threats.
Conclusion
As AI continues to relentlessly reshape the boundaries of technological innovation, the interplay between the EU AI Act and the GDPR brings to the fore the importance of balancing technological advancement with ethical considerations and data privacy protections. This can be achieved by aligning the twin regulatory frameworks to promote responsible AI, which embodies a set of principles that steer the design, development, deployment, and application of AI. Responsible AI is rooted in ethical considerations including transparency, fairness, non-discrimination, accountability, interpretability, explainability, and human-centric AI development.
The EU has shown the way towards fostering a digital ecosystem rooted in trust, innovation, and respect for individuals’ rights. However, charting the course ahead and strengthening this intricate regulatory landscape so that it keeps pace with the rapid advancement in this domain will require consistent and continuous collaboration between policymakers, industry stakeholders, and civil society. This would ensure that AI serves as a catalyst for good while upholding the fundamental principles of privacy, dignity, and responsible AI governance, and would in turn pave the way for a more inclusive and sustainable digital future.
However, while duly appreciating the synergies and interoperability between the two acts, which can be put to good use, it is also crucial to recognize a subtle but qualitative difference in their foundational premises and objectives. The GDPR primarily focuses on protecting data as it is collected, stored, or used. The AI Act, on the other hand, extends its regulatory scope to the very methodology, processes, and mechanisms of automated decision-making based on data and datasets. This broader purview encompasses more than the protection of the raw constituent data: ensuring data integrity is essential, but so is scrutinizing any manipulation during the decision-making process that may nudge or influence AI towards biased decisions. Transparency, ethics, and human oversight will therefore play a huge role in the fair application and deployment of AI. A preconfigured, standardized, and humane operating process, or, more aptly, a constitution for artificial intelligence, embedded in the very design and decisional structures of AI-driven systems and itself continuously updated and monitored through vigilant human oversight, will be the key to establishing a more secure and harmonized AI-data protection universe.
*Amit Kumar has been a fellow with the Max Planck Institute for Social Law and Social Policy, Munich. He has post graduate degrees in Law as well as French literature from the Indian Law Institute and Jawaharlal Nehru University respectively. He currently teaches Public Policy, Human Rights and Jurisprudence at Maharashtra National Law University, Mumbai.
Categories: Law and Technology, Legislation and Government Policy
