
Data Protection and AI: New Legal Challenges

New legal challenges at the intersection of data protection and AI: GDPR legal bases for training data, automated decisions under Article 22, DPIAs and EU AI Act obligations.


The convergence of personal data protection and the development of artificial intelligence systems poses major new legal challenges. The General Data Protection Regulation (GDPR) and Spain's Organic Law 3/2018 on the Protection of Personal Data and Guarantee of Digital Rights (LOPDGDD) fully apply to AI systems that process personal data, but the new AI Act introduces additional requirements that create a dual compliance framework.

Many AI systems are trained on large volumes of personal data. The most commonly invoked legal basis is legitimate interest (Article 6(1)(f) GDPR), although relying on it requires a balancing test that large-scale AI training may fail to pass. Informed consent requires the data subject to understand how their data will be used in an AI context, which can be difficult to achieve when the model is trained on historical data or is continuously updated.

For special category data — health data, biometric data, political opinions, religious or philosophical beliefs — the GDPR additionally requires a condition under Article 9(2), which in practice significantly limits training AI models on this type of data without explicit consent or a specific legal basis.

Automated Decisions and Profiling

Article 22 of the GDPR regulates the right not to be subject to decisions based solely on automated processing. Companies using AI to make decisions with significant effects on individuals (credit, employment, insurance, access to services) must guarantee meaningful human intervention, process transparency and the possibility of challenge. In practice, this means having a review mechanism by a natural person — not a merely formal one — who is capable of actually modifying the algorithmic outcome.

Spain’s data protection authority (AEPD) has published specific guidelines on AI and data protection, stating that human intervention must be real and not merely symbolic: an operator who simply validates the algorithm’s result without effective review capability does not satisfy the Article 22 requirement.

Data Protection Impact Assessments (DPIAs)

When data processing in an AI system is likely to result in high risks to the rights and freedoms of data subjects, a Data Protection Impact Assessment (DPIA) must be carried out before processing begins. The AI Act adds its own conformity assessments for high-risk systems, potentially creating a double compliance burden.

High-risk AI systems as defined by the AI Act include, among others: systems for managing critical infrastructure, AI in education and vocational training, personnel selection systems, creditworthiness assessment systems, systems used in the administration of justice, and biometric identification systems. For all of these, the company must document the system design, the datasets used, human oversight measures and the results of robustness testing.

The Role of the Data Protection Officer (DPO) in AI Environments

Companies required to appoint a DPO under Article 37 of the GDPR — including those conducting large-scale processing of special category data or systematic profiling — must involve their DPO in the design and auditing of AI systems from the outset. This privacy-by-design principle is particularly important when an AI system is acquired from an external vendor: the data processing agreement must reflect the obligations of Article 28 of the GDPR, and the client must verify that the supplier complies with the AI Act.

Transparency and User Information

The GDPR requires data subjects to be clearly informed about how their data is used, including whether it is used for automated decisions. The privacy notices of many companies remain inadequate in this regard. The information must include the logic applied, the significance of the processing and the envisaged consequences for the data subject. Recital 71 of the GDPR provides interpretive guidance on the level of detail required.

Penalties and Supervision

Serious GDPR infringements, including those involving AI systems, can attract fines of up to €20 million or 4% of annual global turnover, whichever is higher. The AI Act adds its own penalty regime, with fines of up to €35 million or 7% of global turnover, whichever is higher, for the most serious violations (use of prohibited AI practices). In Spain, the AEPD remains the supervisory authority for data protection matters, while supervision under the AI Act falls to the newly created Spanish AI Supervisory Agency (AESIA); the AEPD retains competence wherever AI systems process personal data.
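The "up to a fixed amount or a percentage of turnover, whichever is higher" structure of both penalty regimes can be illustrated with a short sketch. The function names and the example turnover figure are illustrative only, not drawn from any statute or case:

```python
def max_gdpr_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound for serious GDPR infringements (Art. 83(5)):
    the higher of EUR 20 million or 4% of annual global turnover."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

def max_ai_act_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound for prohibited-practice violations under the AI Act
    (Art. 99(3)): the higher of EUR 35 million or 7% of turnover."""
    return max(35_000_000, 0.07 * annual_global_turnover_eur)

# Hypothetical company with EUR 2 billion annual global turnover:
print(max_gdpr_fine(2e9))    # → 80000000.0  (4% exceeds the EUR 20M floor)
print(max_ai_act_fine(2e9))  # → 140000000.0 (7% exceeds the EUR 35M floor)
```

Note that for smaller companies the fixed amount becomes the operative cap, since the percentage of turnover falls below it.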

Practical Steps for Companies Deploying AI

For companies already using or planning to deploy AI systems, the recommended compliance roadmap includes four steps:

1. Map all AI tools in use (including third-party SaaS solutions that incorporate AI features) and classify them by risk level under the AI Act framework.
2. Review all data processing agreements with AI vendors to ensure they contain the Article 28 GDPR clauses and AI Act conformity obligations.
3. Update internal privacy notices to describe any automated decision-making.
4. Conduct a DPIA for each high-risk processing activity identified in the mapping exercise.

This process need not be burdensome if approached systematically. Many organisations discover that existing GDPR compliance infrastructure — Records of Processing Activities, DPIAs for legacy systems, vendor management procedures — provides a solid foundation that can be extended to cover AI-specific requirements with targeted additions rather than a complete rebuild.

At BMC we advise on data protection and artificial intelligence. See our data protection services.

Want to learn more?

Let us discuss how to apply these ideas to your business.
