
Data Protection and AI Act: Points of Intersection

Where GDPR and the EU AI Act intersect: high-risk AI system requirements, personal data use for AI training, AEPD enforcement positions and joint DPO/AI compliance obligations.


On 1 August 2024, Regulation (EU) 2024/1689 — the EU AI Act — entered into force, becoming the world's first horizontal regulatory framework for artificial intelligence. Its intersection with the General Data Protection Regulation (GDPR) creates a double compliance layer affecting any organisation that develops, deploys, or uses AI systems that process personal data. Simultaneous compliance with both frameworks is not discretionary: AI Act penalties reach €35 million or 7% of global annual turnover for the most serious infringements, whichever is higher, exceeding even the GDPR's top tier of €20 million or 4%.
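Both regulations cap fines as "whichever is higher" between a fixed amount and a share of global annual turnover. A minimal illustration of that mechanic in Python (the figures come from the two regulations; the function name is our own):

```python
def max_fine(turnover_eur: float, fixed_cap: float, pct: float) -> float:
    """Upper bound of an administrative fine: the higher of a fixed
    amount and a percentage of global annual turnover."""
    return max(fixed_cap, pct * turnover_eur)

# AI Act, most serious infringements: EUR 35m or 7% of turnover.
# For a EUR 1bn group, the 7% limb (EUR 70m) exceeds the fixed cap.
ai_act_cap = max_fine(1_000_000_000, 35_000_000, 0.07)

# GDPR, highest tier (Article 83(5)): EUR 20m or 4% of turnover.
gdpr_cap = max_fine(1_000_000_000, 20_000_000, 0.04)
```

For smaller undertakings the fixed amount dominates, which is why turnover alone does not bound exposure under either framework.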

The AI Act’s Phased Timeline: What Applies When

The AI Act does not apply all at once. Its staggered entry into force is the key to prioritising compliance efforts:

  • February 2025: Prohibition of unacceptable-risk AI systems (Article 5), including subliminal manipulation, social scoring, and emotion recognition in workplace and educational settings except in narrowly defined cases.
  • August 2025: Full application of obligations for General Purpose AI (GPAI) models, including technical documentation and copyright compliance regarding training data.
  • August 2026: Application of obligations for high-risk AI systems listed in Annex III (HR, credit, education, critical infrastructure, biometrics).
  • August 2027: Application of obligations for high-risk AI systems that are safety components of products covered by Annex I harmonisation legislation; high-risk systems already on the market before August 2026 are caught only if they undergo significant design changes.
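For compliance planning, the staggered dates above can be kept as simple machine-readable data. A sketch (the dates follow the Act's timeline; the structure and labels are our own shorthand):

```python
from datetime import date

# AI Act application milestones, as summarised in the timeline above
MILESTONES = {
    date(2025, 2, 2): "Prohibitions on unacceptable-risk systems (Article 5)",
    date(2025, 8, 2): "Obligations for General Purpose AI (GPAI) models",
    date(2026, 8, 2): "Obligations for Annex III high-risk systems",
    date(2027, 8, 2): "Final transitional deadline for remaining high-risk categories",
}

def applicable(as_of: date) -> list[str]:
    """Milestones already in application on a given date, oldest first."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= as_of]
```

A prioritisation exercise then reduces to asking which milestones bind today and which arrive before the next budget cycle.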

Critical Intersection Points Between the GDPR and the AI Act

The GDPR and the AI Act are not parallel rules that apply independently; in numerous scenarios their obligations overlap, reinforce each other, or create tensions requiring a coordinated resolution.

Impact assessments. The GDPR requires a Data Protection Impact Assessment (DPIA) for large-scale processing or profiling activities (Article 35 GDPR). The AI Act requires a conformity assessment for high-risk systems. In many cases, both assessments target the same AI system. Spain’s data protection authority (AEPD) has published guidance for integrating both assessments into a single procedure, reducing the compliance burden without sacrificing rigour.

Automated decisions and right to explanation. Article 22 GDPR grants data subjects the right not to be subject to decisions based solely on automated processing when those decisions produce significant effects. The AI Act adds transparency obligations for high-risk systems and a right to human oversight. Where an AI system makes automatic decisions in HR (candidate screening, performance evaluation) or credit granting, the data controller must document the system’s logic, inform data subjects, and ensure that decisions can be reviewed by a human.

Legal basis for AI training. Using personal data to train or fine-tune an AI model requires a valid legal basis under Article 6 GDPR. Consent appears to be the most intuitive option but carries the problem of revocability: if a data subject withdraws consent, the controller must be able to remove or anonymise their contribution to the model — which is technically complex or impossible in many cases. Legitimate interest is a more operationally robust alternative, but requires passing the proportionality balancing test, and for special category data (health, ethnic origin, sexual orientation) it is practically unavailable without explicit consent.

Specific Obligations for AI System Operators

Organisations deploying high-risk AI systems must:

Register the system. The EU database (set up and maintained by the European Commission under Article 71) requires registration of high-risk systems before market placement or putting into service. Registration details include the system’s intended purpose, the responsible provider or deployer, and key performance characteristics.

Maintain technical documentation. The AI Act requires comprehensive technical documentation covering the system description, training data used (including data sources and quality assurance measures), performance metrics, and results of robustness and accuracy testing. This documentation must be kept up to date and made available to market surveillance authorities on request.

Ensure human oversight. High-risk systems must be designed to allow natural persons to oversee their operation and intervene or stop them when necessary. This principle conflicts with the full-automation logic of many AI systems and requires rethinking workflow architectures where AI makes or recommends consequential decisions.

Disclose AI interaction. Where users interact with an AI system (chatbot, virtual assistant, emotion recognition), the AI Act requires that users be informed they are dealing with AI. Privacy policies and legal notices must be updated accordingly.

The DPO’s Evolving Role in the AI Act Era

The Data Protection Officer (DPO), a mandatory designation under Article 37 GDPR for certain controllers and processors, acquires a new dimension in the context of the AI Act. Their natural role in overseeing GDPR compliance now extends to coordinating AI Act conformity assessments, managing the AI systems register, and providing internal training on responsible AI use.

Organisations that were not previously required to appoint a DPO but now deploy high-risk AI systems should assess whether the scale of their compliance obligations justifies voluntary designation of this role or the engagement of an external DPO service.

Compliance Roadmap for 2025-2026

The starting point is an inventory of all AI systems used across the organisation, classified by risk level under the AI Act. For high-risk systems, the next step is conducting an integrated conformity assessment (combining the DPIA and AI Act evaluation), reviewing contracts with AI suppliers to incorporate the clauses required by both frameworks (including guarantees about training data and human oversight mechanisms), and updating GDPR data processing records to reflect new AI use cases.
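The first roadmap step, an inventory classified by risk level, can start as a structured record per system. A hypothetical sketch (field names and tier labels are our own shorthand, not terms defined in the Act):

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_tier: str              # e.g. "prohibited" | "high" | "limited" | "minimal"
    processes_personal_data: bool
    dpia_done: bool = False     # GDPR Article 35 impact assessment completed?

inventory = [
    AISystemRecord("cv-screener", "candidate shortlisting", "high", True),
    AISystemRecord("support-bot", "customer FAQ chatbot", "limited", True),
]

# High-risk systems that process personal data need the integrated
# DPIA-plus-conformity assessment first, so surface them for triage
priority = [s for s in inventory
            if s.risk_tier == "high" and s.processes_personal_data]
```

Even a spreadsheet with these columns is enough to drive the rest of the roadmap: contract review and record-of-processing updates can then be scoped system by system.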

At BMC, our specialist data protection and AI compliance team guides organisations through every stage of this dual regulatory framework. Explore our data protection services.

Want to learn more?

Let us discuss how to apply these ideas to your business.
