Artificial Intelligence: EU Advances in Regulation

An overview of the EU AI Act (Regulation (EU) 2024/1689): the risk classification system (prohibited, high-risk, limited, minimal), the prohibitions enforceable from February 2025, and the August 2026 obligations for high-risk systems that affect Spanish companies.

The European Union's Artificial Intelligence Regulation (AI Act), whose negotiation reached its critical point in 2023, is the first comprehensive global regulation of artificial intelligence systems. Its approval by the European Parliament in March 2024, publication in the Official Journal in July 2024 as Regulation (EU) 2024/1689, and entry into force on 1 August 2024 established a directly applicable regulatory framework across all member states, covering companies that develop, market or use AI systems in the EU. Spain, like other member states, must designate its National AI Supervisory Authority before the full application deadlines come into force.

Risk-Based Approach

The AI Act classifies AI systems into four categories according to their risk level, with proportionate obligations at each tier (a schematic encoding of the taxonomy follows the list):

  1. Unacceptable risk (prohibited): systems that pose an unacceptable threat to fundamental rights, such as real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions), social scoring systems, and manipulative or subliminal techniques that materially distort behaviour. Prohibitions on these categories applied six months after the Regulation entered into force, that is, from 2 February 2025.

  2. High risk: systems capable of causing serious harm to health, safety or fundamental rights. These are subject to strict pre-market requirements and must pass mandatory conformity assessment procedures before being placed on the market or put into service.

  3. Limited risk: systems subject primarily to transparency obligations towards users, such as chatbots and synthetic content generation tools (deepfakes). Users must be informed that they are interacting with an AI system.

  4. Minimal risk: the vast majority of current AI applications — spam filters, content recommendation engines, AI-powered video games. These are not subject to specific AI Act requirements, though companies may voluntarily adhere to codes of conduct.
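
By way of illustration only, this is one way a compliance team might encode these tiers in an internal classification tool. The enum values and example systems are our own hypothetical choices, not terminology or classifications taken from the Regulation's text:

```python
from enum import Enum

class RiskTier(Enum):
    """The four AI Act risk tiers, ordered from most to least regulated."""
    UNACCEPTABLE = "prohibited"   # banned outright from 2 February 2025
    HIGH = "high-risk"            # conformity assessment and Annex III obligations
    LIMITED = "limited-risk"      # transparency duties (e.g. chatbots, deepfakes)
    MINIMAL = "minimal-risk"      # no specific AI Act requirements

# Hypothetical in-house systems mapped to tiers; in practice the
# classification always depends on the concrete use case.
system_tiers = {
    "cv_screening_tool": RiskTier.HIGH,          # employment use -> Annex III
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

for name, tier in system_tiers.items():
    print(f"{name}: {tier.value}")
```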

High-Risk Systems

Annex III of the AI Act lists the high-risk AI systems. The most relevant categories for companies operating in Spain include:

  • Critical infrastructure: AI systems used in the management of energy, water, transport or financial infrastructure networks.
  • Education and training: systems that determine access to educational institutions or evaluate student performance.
  • Employment and worker management: systems used in recruitment (CV screening, automated interviews), performance evaluation or task allocation. This is one of the areas with the greatest practical adoption among medium-sized and large companies.
  • Access to essential services: credit scoring systems, solvency assessment tools, or systems that determine access to insurance, social benefits or public services.
  • Law enforcement and administration of justice: systems that assist in police investigations or judicial decisions.
  • Border management and immigration: biometric or risk assessment systems used in border controls.

Companies that use or market systems in any of these categories must implement, before market placement or deployment:

  • A documented risk management system, maintained and updated throughout the system’s lifecycle.
  • Data quality measures for training, validation and testing datasets, designed to prevent relevant biases.
  • Complete technical documentation enabling conformity assessment.
  • Automatic event logging capability (audit logs) for post-market monitoring; a minimal sketch of this requirement follows the list.
  • Sufficient transparency and information so that users can interpret the system’s outputs.
  • Effective human oversight, including the ability to intervene or shut down the system.
  • Adequate robustness, accuracy and cybersecurity.
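
To make the event-logging requirement concrete, here is a minimal sketch assuming a Python-based system and using only the standard library. The field names and log destination are illustrative assumptions, not a format prescribed by the Regulation:

```python
import json
import logging
from datetime import datetime, timezone

# Structured, append-only audit trail for each automated decision.
# The AI Act requires logging that supports post-market monitoring;
# the exact record schema below is an illustrative assumption.
audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ai_decisions.log"))

def log_decision(system_id: str, input_ref: str, output: str, operator: str) -> None:
    """Record one model decision with a timestamp and enough context to audit it later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,   # which high-risk system produced the output
        "input_ref": input_ref,   # reference to the input data, not the data itself
        "output": output,         # the system's decision or score
        "operator": operator,     # the human overseeing the decision
    }
    audit_logger.info(json.dumps(record))

log_decision("cv-screener-v2", "application-4711", "shortlisted", "hr-reviewer-07")
```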

Before market placement, high-risk systems covered by Annex III must be registered in the EU database established under Article 71 of the Regulation.

Application Timelines

Regulation (EU) 2024/1689 established a phased application schedule (a small date checker follows the list):

  • 6 months after entry into force (2 February 2025): absolute prohibitions (unacceptable risk category).
  • 12 months (2 August 2025): obligations for providers of general-purpose AI models (GPAI), including large language models.
  • 24 months (2 August 2026): full application for most high-risk systems listed in Annex III, including those used in employment decisions, credit assessment and access to services.
  • 36 months (2 August 2027): high-risk systems that are safety components of products covered by the pre-existing sectoral safety legislation listed in Annex I.
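
The application dates themselves are those fixed in Article 113; the checker below is a sketch of our own showing how a compliance calendar might flag which obligations already apply on a given date:

```python
from datetime import date

# Application dates fixed in Article 113 of Regulation (EU) 2024/1689.
MILESTONES = {
    "prohibitions (unacceptable risk)": date(2025, 2, 2),
    "GPAI model obligations": date(2025, 8, 2),
    "most Annex III high-risk systems": date(2026, 8, 2),
    "high-risk Annex I product systems": date(2027, 8, 2),
}

def applicable_obligations(today: date) -> list[str]:
    """Return the milestones already in application on the given date."""
    return [label for label, start in MILESTONES.items() if today >= start]

# Example: by September 2026 everything except the Annex I product
# systems phase is already applicable.
print(applicable_obligations(date(2026, 9, 1)))
```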

Implications for Businesses

Companies must begin mapping their AI systems now to determine which risk category applies. The inventory must be comprehensive: it should cover not only internally developed systems but also third-party tools acquired from vendors and AI-as-a-service models integrated via APIs. A sketch of a minimal inventory entry follows.
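
As an illustration, a minimal inventory record might capture the fields below. The schema is a hypothetical choice of our own, not one prescribed by the Regulation:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a company's AI system inventory (illustrative fields)."""
    name: str                 # internal identifier
    origin: str               # "in-house", "vendor", or "api-service"
    supplier: str             # vendor or API provider, if external
    purpose: str              # concrete use case, which drives the risk category
    risk_tier: str            # "prohibited" / "high" / "limited" / "minimal"
    annex_iii_category: str | None = None  # e.g. "employment", if high-risk

inventory = [
    AISystemRecord("cv-screener-v2", "vendor", "AcmeHR",
                   "CV pre-screening", "high", "employment"),
    AISystemRecord("spam-filter", "in-house", "",
                   "email filtering", "minimal"),
]
```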

For companies acting as providers of high-risk systems, the regulatory burden is greater: they must ensure conformity before commercialisation, manage registrations in the EU database, and maintain quality and risk management systems throughout the product lifecycle.

For companies acting as deployers — that is, organisations using high-risk AI systems developed by third parties in their own operations — obligations include ensuring effective human oversight, not modifying systems in ways that alter their conformity assessment, and reporting serious incidents to the supervisory authority.

The penalty regime is significant: fines for engaging in prohibited AI practices can reach €35 million or 7% of global annual turnover from the previous financial year, whichever is higher. Non-compliance with most other obligations, including the requirements for high-risk systems, can attract fines of up to €15 million or 3% of global turnover. For SMEs and start-ups, the Regulation caps each fine at the lower of the two amounts but does not eliminate liability.
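
Because each cap is "whichever is higher", the turnover-based figure dominates for large companies. A worked illustration with a hypothetical €2 billion global turnover:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """AI Act fines are capped at the higher of a fixed amount or a share of worldwide turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2 bn global annual turnover
print(max_fine(35_000_000, 0.07, turnover))  # prohibited practices: €140 m (7% > €35 m)
print(max_fine(15_000_000, 0.03, turnover))  # high-risk non-compliance: €60 m
```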

Regardless of company size, the first step is always inventory and risk classification. From there, companies with systems in high-risk categories or that provide general-purpose AI models should begin adapting their technical documentation, governance processes and contracts with suppliers and customers as soon as possible — updating liability clauses in light of the new regulatory framework.

At BMC we advise on adapting to the AI regulatory framework, including system classification, conformity documentation and the implementation of AI governance systems appropriate to the obligations of Regulation (EU) 2024/1689. See our compliance services.

Want to learn more?

Let us discuss how to apply these ideas to your business.
