The convergence of the General Data Protection Regulation (GDPR, Regulation (EU) 2016/679) with the Artificial Intelligence Act (AI Act, Regulation (EU) 2024/1689) creates a dual-layer regulatory framework affecting every company that uses AI systems to process personal data. Understanding how these two regulations interact — where they reinforce each other and where they create tension — is the essential starting point for building a robust compliance programme in the AI era.
The dual regulatory base: GDPR and AI Act as complementary frameworks
The GDPR has been directly applicable across the EU since May 2018. As a regulation it required no transposition, but Spain adapted its national framework through the LOPDGDD (Organic Law 3/2018, of 5 December), which confirms the Spanish Data Protection Authority (AEPD) as the national supervisory authority and introduces specific provisions for the areas where the GDPR leaves discretion to Member States.
The AI Act was published in the Official Journal of the EU on 12 July 2024, entered into force on 1 August 2024, and applies in stages: the prohibitions on unacceptable-risk AI have applied since 2 February 2025, the obligations for general-purpose AI models apply from 2 August 2025, most obligations for high-risk AI systems apply from 2 August 2026, and the obligations for high-risk AI embedded in products covered by other EU harmonisation legislation apply from 2 August 2027.
The AI Act expressly addresses the concurrence of the two regulations: Article 2(7) provides that the AI Act is without prejudice to the GDPR, so existing data protection law continues to apply in full and the AI Act layers AI-specific requirements on top of it. In practice, the vast majority of high-risk AI systems process personal data at some stage of their lifecycle — training, validation, inference, or output generation — which means GDPR and AI Act obligations apply simultaneously in most relevant use cases.
Article 22 GDPR and automated decision-making
Article 22 of the GDPR grants individuals the right not to be subject to decisions based solely on automated processing — including profiling — that produce legal effects or significantly affect them in a similar way. This provision is directly relevant to AI systems used in credit scoring, candidate evaluation in recruitment, insurance premium calculation, and determination of social benefit eligibility — precisely the categories classified as high-risk in Annex III of the AI Act.
For a fully automated decision to be lawful under Article 22 GDPR, at least one of three conditions must be met: (i) the decision is necessary for the conclusion or performance of a contract; (ii) it is authorised by EU or Member State law; or (iii) it is based on the data subject’s explicit consent. In all cases, the controller must implement appropriate safeguards including the right to human intervention, the right for the data subject to express their point of view, and the right to contest the decision.
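The Article 22 test can usefully be built into a pre-deployment compliance gate. The sketch below is purely illustrative (the class, field, and function names are hypothetical, not taken from any regulation or library); it encodes the rule that a fully automated decision needs at least one of the three legal bases and all three safeguards:

```python
from dataclasses import dataclass

# Illustrative only: field names paraphrase Article 22 GDPR, they are not a legal standard.
@dataclass
class AutomatedDecisionContext:
    necessary_for_contract: bool        # legal basis, Art. 22(2)(a)
    authorised_by_law: bool             # legal basis, Art. 22(2)(b)
    explicit_consent: bool              # legal basis, Art. 22(2)(c)
    human_intervention_available: bool  # safeguard, Art. 22(3)
    can_express_view: bool              # safeguard, Art. 22(3)
    can_contest_decision: bool          # safeguard, Art. 22(3)

def article_22_gate(ctx: AutomatedDecisionContext) -> bool:
    """Return True only if a fully automated decision may proceed:
    at least one legal basis AND all three safeguards in place."""
    has_legal_basis = (ctx.necessary_for_contract
                       or ctx.authorised_by_law
                       or ctx.explicit_consent)
    safeguards_in_place = (ctx.human_intervention_available
                           and ctx.can_express_view
                           and ctx.can_contest_decision)
    return has_legal_basis and safeguards_in_place
```

Note that the safeguards are conjunctive: explicit consent alone does not make the decision lawful if, for example, no route to human intervention exists.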
The AI Act’s human oversight requirement (Article 14) operates as a complement to Article 22 GDPR: high-risk AI systems must be designed so that the persons responsible for oversight can understand the system’s capabilities and limitations, detect and address situations of malfunction or bias, and override or shut down the system when necessary. This means the technical architecture of the AI system must enable genuine human oversight — not merely nominal approval after the fact.
Data Protection Impact Assessments for AI systems
Article 35 of the GDPR requires controllers to carry out a Data Protection Impact Assessment (DPIA) before processing that is likely to result in a high risk to the rights and freedoms of individuals. Processing activities that require a mandatory DPIA include systematic and extensive evaluation of personal data based on automated processing (including profiling), large-scale processing of special categories of data, and systematic monitoring of publicly accessible areas.
The AEPD published its list of processing types requiring a DPIA in 2019. In the AI Act context, this list must be updated to include AI systems that carry out systematic profiling, process biometric data, take automated decisions in the domains of employment or financial services, and any high-risk AI system under Annex III that processes personal data.
A DPIA for an AI system must address, beyond the standard elements of Article 35 GDPR, the specific risks inherent to AI: algorithmic bias that generates indirect discrimination, model opacity (the black-box effect that makes it difficult or impossible for data subjects to understand the basis of decisions affecting them), re-identification and inference attacks against training data, and model drift over time that produces results less accurate or more biased than those evaluated at the time of deployment.
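Teams that run DPIAs repeatedly often encode such risk lists as a checklist so that no item can be silently skipped. The fragment below is an illustrative sketch only (the risk keys and structure are hypothetical, not an official AEPD or EDPB template):

```python
# Illustrative checklist of the AI-specific DPIA elements discussed above;
# hypothetical structure, not an official template.
AI_SPECIFIC_DPIA_RISKS = {
    "algorithmic_bias": "indirect discrimination in model outputs",
    "model_opacity": "black-box effect preventing meaningful explanation",
    "reidentification": "inference or membership attacks on training data",
    "model_drift": "accuracy or fairness degradation after deployment",
}

def unaddressed_risks(mitigations: dict) -> list:
    """Return the AI-specific risks for which no mitigation is documented,
    so the DPIA cannot be closed while any remain."""
    return [risk for risk in AI_SPECIFIC_DPIA_RISKS if not mitigations.get(risk)]
```

A DPIA workflow would then refuse sign-off until `unaddressed_risks` returns an empty list.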
Privacy by Design and Privacy by Default in AI systems
Article 25 of the GDPR establishes the principle of data protection by design and by default: the controller must implement appropriate technical and organisational measures, both at the time of determining the means of processing and at the time of processing itself. For AI systems, this means incorporating data protection safeguards into the system design process — not as a corrective measure after deployment.
Technical Privacy by Design measures for AI systems include: anonymisation or pseudonymisation of training data where possible without significant loss of model utility; implementation of federated learning techniques that avoid centralisation of personal data; use of differential privacy to limit what the model can reveal about specific individuals; and bias and fairness audits integrated into the development and validation pipeline rather than conducted as a one-off exercise before go-live.
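Two of these measures can be illustrated in a few lines. The sketch below (illustrative only: the key, epsilon value, and function names are placeholders, and a production system would use a vetted privacy library) shows keyed pseudonymisation of an identifier via HMAC-SHA256, and a differentially private count using the Laplace mechanism:

```python
import hashlib
import hmac
import math
import random

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Keyed hashing (HMAC-SHA256). Because re-identification remains possible
    for whoever holds the key, the output is pseudonymous, not anonymous,
    and therefore still personal data under the GDPR."""
    return hmac.new(secret_key, identifier.encode(), hashlib.sha256).hexdigest()

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-CDF from a uniform variate."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Differentially private count: a counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    return true_count + laplace_noise(1.0 / epsilon)
```

The smaller the epsilon, the stronger the privacy guarantee and the noisier the released count; choosing epsilon is a governance decision, not purely a technical one.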
The DPO in the AI Act era: new responsibilities
The Data Protection Officer (DPO), mandatory for certain categories of controllers and processors under Article 37 of the GDPR, acquires new responsibilities in the AI Act context. The DPO must be informed and consulted regarding AI systems that process personal data, must participate in the DPIA for high-risk AI systems, and must coordinate with the AI Act compliance officer (who may be the same person as the DPO or a different person, depending on the size and structure of the organisation).
The European Commission has indicated in its interpretive guidance that the DPO is the natural point of contact for DPIA processes for AI systems, but that the technical competencies required to evaluate AI-specific risks — bias, opacity, robustness, adversarial attacks — may require incorporating specialised technical profiles into the compliance team. Organisations relying on an external DPO should assess whether the external provider has the technical AI expertise to fulfil this expanded role, or whether supplementary specialist support is needed.
The record of processing activities and AI systems
Article 30 of the GDPR requires controllers and processors to maintain a record of processing activities. When an AI system processes personal data, that processing must be documented in the record with the required information: purpose, data categories, recipients, retention periods, and security measures. Additionally, if the system is high-risk under the AI Act, the provider must maintain a parallel log (Article 12 AI Act) that can complement the controller’s record but serves different regulatory obligations. Managing the interaction between these two documentation requirements without duplication or contradiction is one of the practical compliance challenges organisations face when deploying AI.
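One practical way to avoid duplication is to keep the Article 30 record as the single source of truth and cross-reference the AI Act documentation from it. The structure below is an illustrative sketch only (the field names informally paraphrase Article 30(1) GDPR and are not an official template):

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative structure: field names paraphrase Article 30(1) GDPR informally.
@dataclass
class ProcessingRecordEntry:
    purpose: str
    data_categories: list
    recipients: list
    retention_period: str
    security_measures: list
    # Cross-reference to the provider's Article 12 AI Act logging
    # when the processing is performed by a high-risk AI system.
    ai_system_id: Optional[str] = None
```

Example: an entry for AI-driven credit scoring would carry `ai_system_id` pointing at the provider's logging documentation, so an auditor can move from the GDPR record to the AI Act logs without a second, parallel inventory.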
Prohibited AI practices under the AI Act: the absolute prohibitions
From 2 February 2025, the AI Act prohibits a closed list of AI practices (Article 5). These include: AI systems that deploy subliminal or manipulative techniques to materially distort behaviour in ways that cause significant harm; systems that exploit vulnerabilities of specific groups (for example due to age or disability) to materially distort behaviour; social scoring by public or private actors, based on behaviour or personal characteristics, where it leads to detrimental or unfavourable treatment that is unjustified or disproportionate; real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject only to narrowly defined exceptions); and AI systems used to infer emotions in the workplace or in educational institutions, except for medical or safety reasons.
For organisations operating AI governance programmes under the GDPR, these prohibitions align with data minimisation and purpose limitation principles but go further by prohibiting entire categories of processing regardless of whether a legal basis exists under the GDPR.
Sector-specific implications
Human resources and talent management: AI-driven candidate screening and assessment tools fall under Annex III of the AI Act as high-risk systems in the employment domain. These tools require a DPIA under the GDPR and a conformity assessment under the AI Act before deployment, and must be registered in the EU database for high-risk AI systems prior to market placement.
Financial services: Credit scoring, creditworthiness assessment, and fraud detection systems using AI are high-risk under Annex III. The overlap with the GDPR’s restrictions on automated decision-making (Article 22) and the AI Act’s transparency requirements creates a significant documentation burden, but also an opportunity to build a unified AI governance framework that satisfies both regulators simultaneously.
Healthcare: AI systems used for diagnosis, prognosis, or treatment recommendations are high-risk under Annex III and process special category data under Article 9 GDPR. The requirement for explicit consent or another legal basis under Article 9(2) GDPR must be assessed alongside the AI Act conformity requirements for medical devices and healthcare systems.
At BMC our legal and data protection team advises companies across all sectors on GDPR compliance and AI Act readiness. See our data protection and AI compliance services.