

EU AI Act

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It classifies AI systems by risk level, imposes obligations on providers, deployers, importers, and distributors, and establishes penalties of up to €35 million or 7% of global annual turnover for the most serious violations. It entered into force in August 2024, with phased compliance deadlines running through 2027.


What Is the EU AI Act?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024, making the EU the first jurisdiction in the world to enact a comprehensive, horizontal legal framework for artificial intelligence. Unlike sector-specific digital regulation (such as DORA's operational-resilience rules for financial services), the AI Act applies across all industries and use cases.

The AI Act uses a risk-based classification system: the higher the potential harm of an AI system, the stricter the obligations. This creates four main risk categories.

The Four Risk Categories

1. Unacceptable Risk (Prohibited AI Practices)

These AI applications are banned outright from 2 February 2025:

  • Social scoring by public authorities
  • Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
  • AI that manipulates persons through subliminal techniques or exploits vulnerabilities
  • Emotion recognition in workplaces and educational institutions (with limited exceptions)
  • Biometric categorisation inferring sensitive characteristics (race, political opinion, religion, sexual orientation)
  • AI used to create or expand facial recognition databases through untargeted scraping
  • “Predictive policing” based on profiling

2. High-Risk AI Systems

These are permitted but subject to significant pre-market and ongoing obligations. High-risk AI systems include:

  • Biometric identification and categorisation
  • Critical infrastructure management (energy grids, water, transport)
  • Education and vocational training (access, assessment)
  • Employment, workers' management, and access to self-employment (CV screening, monitoring, task allocation)
  • Access to essential private and public services (credit scoring, insurance underwriting, social benefits)
  • Law enforcement (risk assessments, evidence reliability, crime prediction)
  • Migration, asylum, and border control
  • Administration of justice

Obligations for high-risk AI systems include: a risk management system, technical documentation, data governance, transparency and provision of information to deployers, human oversight mechanisms, accuracy, robustness, and cybersecurity, and registration in the EU AI database.

3. Limited Risk — Transparency Obligations

Systems such as chatbots, deepfakes, and AI-generated content must inform users that they are interacting with AI or that content is artificially generated.

4. Minimal Risk

Most current AI applications (spam filters, AI in video games, basic recommendation systems) fall here. No mandatory obligations apply, though voluntary codes of conduct are encouraged.
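The tiering logic above can be sketched as a simple lookup. This is an illustrative sketch only: the use-case keys and tier assignments below are hypothetical examples drawn from the lists above, not an official taxonomy, and real classification requires legal analysis against the Act's prohibited-practice list and annexes.

```python
# Illustrative sketch: a simplified mapping of example use cases to the
# AI Act's four risk tiers. The keys and assignments are hypothetical
# examples, not an official classification.

RISK_TIERS = {
    "social_scoring": "unacceptable",
    "untargeted_face_scraping": "unacceptable",
    "cv_screening": "high",
    "credit_scoring": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the illustrative risk tier for a known example use case."""
    try:
        return RISK_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case {use_case!r}: assess against Annex III")

print(classify("cv_screening"))  # high
print(classify("spam_filter"))   # minimal
```

In practice the same system can fall into different tiers depending on context (a chatbot used for credit decisions is high-risk, not limited-risk), which is why a per-deployment assessment is needed rather than a static lookup.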

General-Purpose AI (GPAI) Models

The AI Act introduces specific rules for General-Purpose AI (GPAI) models — large AI models trained on broad data that can perform a wide range of tasks (such as large language models). All GPAI model providers must:

  • Provide technical documentation and instructions for use
  • Comply with EU copyright law
  • Publish summaries of training data

GPAI models with systemic risk (trained using more than 10^25 FLOPs, or otherwise designated) face additional obligations: adversarial testing (red-teaming), serious incident reporting to the European AI Office, and cybersecurity measures.
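The 10^25 FLOP threshold can be checked with back-of-the-envelope arithmetic. The sketch below assumes the common 6 × parameters × training-tokens estimate for dense transformer training compute, which is a rule of thumb and not part of the Act; the model figures are hypothetical.

```python
# Back-of-the-envelope sketch: estimate training compute and compare it
# with the AI Act's 10^25 FLOP systemic-risk presumption. The
# 6 * parameters * tokens approximation is a common rule of thumb for
# dense transformer training, not a formula from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens
flops = estimated_training_flops(70e9, 2e12)
print(f"{flops:.2e}")                       # 8.40e+23
print(presumed_systemic_risk(70e9, 2e12))   # False: well below 1e25
```

Note that the Commission can also designate a model as posing systemic risk regardless of compute, so falling under the threshold is a presumption, not a safe harbour.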

Compliance Timeline

  • 1 August 2024: AI Act enters into force
  • 2 February 2025: Prohibited AI practices apply
  • 2 August 2025: GPAI model rules apply; governance provisions apply
  • 2 August 2026: Most remaining provisions apply, including Annex III high-risk systems
  • 2 August 2027: High-risk AI systems under Annex I (safety components of products covered by EU harmonisation legislation)
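Assuming the dates above are the relevant milestones, a minimal sketch for checking which obligations already apply on a given date:

```python
# Sketch: given a date, list which AI Act milestones already apply,
# based on the Act's phased application schedule.
from datetime import date

MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibited AI practices apply"),
    (date(2025, 8, 2), "GPAI model rules and governance provisions apply"),
    (date(2026, 8, 2), "Most remaining provisions, incl. Annex III high-risk systems"),
    (date(2027, 8, 2), "Annex I high-risk systems (safety components)"),
]

def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestones whose application date has passed."""
    return [label for deadline, label in MILESTONES if deadline <= as_of]

for label in obligations_in_force(date(2025, 9, 1)):
    print(label)
```

Run for 1 September 2025, this lists the first three milestones: the Act itself, the prohibitions, and the GPAI and governance rules.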

Obligations by Role

The AI Act distinguishes between providers (those who develop and place AI systems on the market or into service), deployers (those who use AI systems in a professional context), importers, and distributors. Obligations are heaviest for providers, but deployers of high-risk systems also face significant duties around use according to instructions, human oversight implementation, and employee training.

Interaction with GDPR

The AI Act does not replace the GDPR; both apply concurrently whenever an AI system processes personal data. A Data Protection Impact Assessment (DPIA) under GDPR Article 35 may be required alongside an AI Act risk assessment. The AEPD, Spain's data protection authority, is likely to play a role in AI Act enforcement given its existing data protection mandate and the overlap between AI risks and privacy risks.

Enforcement and Penalties

The European AI Office (created within the European Commission) oversees GPAI model compliance. National market surveillance authorities handle product-specific rules, and national competent authorities designated by the member states handle other obligations. In Spain, this role falls to the Spanish Agency for the Supervision of Artificial Intelligence (AESIA), established within the existing regulatory framework.

Penalties:

  • Prohibited-practice violations: up to €35 million or 7% of global annual turnover, whichever is higher
  • High-risk system violations: up to €15 million or 3% of global annual turnover, whichever is higher
  • Supplying incorrect information to authorities: up to €7.5 million or 1.5% of global annual turnover, whichever is higher
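For undertakings, the applicable maximum is the higher of the fixed amount and the turnover percentage (Article 99). A minimal sketch of that arithmetic, with a hypothetical turnover figure:

```python
# Sketch of the fine-ceiling mechanics: for undertakings, the applicable
# maximum is the HIGHER of the fixed cap and the turnover percentage.
# The turnover figure below is hypothetical.

def max_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Applicable maximum fine: the higher of the two prongs."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# Prohibited-practice violation by a company with €1bn global turnover:
print(round(max_fine(35e6, 0.07, 1e9)))  # 70000000: the 7% prong (€70m) exceeds €35m
```

For a smaller company with, say, €100 million in turnover, 7% is only €7 million, so the €35 million fixed cap is the applicable maximum instead.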

How BMC Can Help

We advise companies on AI Act risk classification of their current and planned AI systems, deployer obligation mapping, AI governance policy drafting, interaction between AI Act and GDPR compliance programmes, and regulatory affairs strategy for AI product launches in the EU.

Frequently asked questions

When does the EU AI Act apply to Spanish businesses?
The AI Act entered into force on 1 August 2024 with phased deadlines. Prohibited AI practices were banned from 2 February 2025, rules for general-purpose AI models apply from 2 August 2025, and most high-risk AI system obligations apply from 2 August 2026 or 2027. Spanish businesses must assess their AI tools against these dates immediately.
Which Spanish regulator enforces the EU AI Act?
Spain is establishing a national AI Supervisory Authority within the existing regulatory framework. The AEPD (Spain's data protection authority) is expected to play a significant enforcement role given the overlap between AI risks and privacy under GDPR, which it already enforces. The European AI Office oversees GPAI model compliance at EU level.
What are the penalties for AI Act violations in Spain?
Fines can reach €35 million or 7% of global annual turnover for prohibited AI practice violations, €15 million or 3% for high-risk system violations, and €7.5 million or 1.5% for providing incorrect information to authorities. These penalties apply per infringement and are among the highest in EU digital regulation.
Does the AI Act replace GDPR obligations in Spain?
No. The AI Act and GDPR apply concurrently. When an AI system processes personal data, both frameworks apply simultaneously. A DPIA under GDPR Article 35 may be required alongside an AI Act risk assessment, and Spanish businesses must maintain compliance with both regimes.
What is a high-risk AI system under the AI Act?
High-risk AI systems include those used for CV screening, credit scoring, biometric identification, and access to essential services. They require a risk management system, technical documentation, data governance measures, human oversight mechanisms, and registration in the EU AI database before deployment.

Request a personalized consultation

Our experts are ready to analyze your situation and provide tailored solutions.
