Legal Regulatory Update

AI Act: High-Risk System Obligations

EU AI Act (Regulation 2024/1689) high-risk system obligations: Annex III categories, conformity assessment requirements, EU database registration, and the August 2026 enforcement deadline for Spanish companies.


Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence (the AI Act), was published in the Official Journal of the European Union on 12 July 2024 and entered into force on 1 August 2024. It is the world's first comprehensive regulatory framework for AI and establishes differentiated obligations according to the risk level of each system, with a phased implementation schedule culminating on 2 August 2027 for AI systems integrated into products covered by EU product safety legislation.

The AI Act Risk Classification System

The AI Act classifies AI systems into four categories, with increasing obligations as risk level rises:

Prohibited AI practices (Article 5): Enforceable since 2 February 2025. These include social scoring systems that lead to detrimental or unfavourable treatment in social contexts unrelated to those in which the data was originally generated or collected, biometric categorisation systems that infer protected characteristics (race, political opinions, religious beliefs, sexual orientation), real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with limited exceptions), subliminal manipulation techniques that materially distort behaviour, and systems that exploit the vulnerabilities of specific groups, for example due to age or disability.

High-risk systems (Annex III): Enforceable from 2 August 2026. This category covers AI systems used in critical infrastructure, education, employment, access to essential services, law enforcement, border management and migration, administration of justice, and systems that are safety components of regulated products under EU harmonised legislation.

Limited risk systems: Basic transparency obligations, enforceable from 2 August 2026, covering systems such as chatbots (users must know they are interacting with AI), emotion recognition systems and synthetic content generation systems (deepfakes must be clearly labelled).

Minimal risk systems: No specific AI Act obligations, though general civil liability and data protection rules apply.
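For teams building internal compliance tooling, the four tiers and their enforcement dates can be captured in a simple lookup. The sketch below is purely illustrative: the dates come from the schedule above, but the data structure and function name are hypothetical choices, not anything prescribed by the Regulation.

```python
from datetime import date

# Illustrative mapping of AI Act risk tiers to the dates from which
# their obligations are enforceable, per the schedule described above.
RISK_TIERS = {
    "prohibited": date(2025, 2, 2),    # Article 5 prohibitions
    "high_risk": date(2026, 8, 2),     # Annex III systems
    "limited_risk": date(2026, 8, 2),  # transparency obligations
    "minimal_risk": None,              # no specific AI Act obligations
}

def obligations_enforceable(tier: str, on: date) -> bool:
    """Return True if the tier's AI Act obligations apply on the given date."""
    start = RISK_TIERS[tier]
    return start is not None and on >= start

print(obligations_enforceable("prohibited", date(2025, 6, 1)))  # True
print(obligations_enforceable("high_risk", date(2025, 6, 1)))   # False
```

A real compliance tracker would of course also need the 2 August 2027 date for high-risk systems embedded in regulated products, which follows a separate track.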

Annex III High-Risk Systems: Detailed Analysis

Annex III of the AI Act lists eight categories of high-risk systems with particular relevance for companies operating in Spain:

Employment and workforce management. AI systems used for recruitment and selection — including CV screening, automated interviews and candidate scoring — are high-risk. So are systems for performance evaluation, promotion or termination decisions. This has direct implications for HR departments and e-recruiting platforms operating in Spain.

Access to essential services. Systems for evaluating the creditworthiness of natural persons, underwriting life and health insurance policies, and assessing social benefit applications are high-risk. This directly affects the banking sector (credit scoring), the insurance sector and public administrations with benefit management systems.

Biometrics. AI-based identification systems (facial recognition, voice recognition, gait analysis) used for facility access control, attendance monitoring or remote identity verification are high-risk, subject to applicable exceptions.

Critical infrastructure. AI systems used in managing power grids, water supply, transport and finance are high-risk. Critical infrastructure operators in Spain — who are also subject to the NIS2 Directive — must integrate AI Act requirements into their cybersecurity risk management framework.

Obligations for Providers of High-Risk Systems

The most demanding obligations fall on providers, i.e. companies that develop high-risk AI systems or place them on the market. Under Articles 9 to 17 of the AI Act, these obligations include:

Risk management system (Article 9): A continuous, documented process throughout the entire lifecycle of the system, with identification and mitigation of risks before deployment and ongoing monitoring thereafter.

Data governance (Article 10): Training, validation and test data must be relevant, representative, sufficiently free from errors and complete, with specific attention to potential biases that could generate discriminatory outputs.

Technical documentation (Article 11 and Annex IV): Detailed documentation of the system before deployment, covering the general system description, training data, performance evaluation results, and the capabilities and limitations of the system.

Logging (Article 12): High-risk systems must generate automatic event logs enabling traceability of the system’s operation, particularly to identify situations that have given rise to risks to fundamental rights.

Transparency and information to operators (Article 13): The provider must supply the operator with instructions for use containing information on the system’s capabilities and limitations, interpretation of its outputs and the human oversight required.

Human oversight (Article 14): High-risk systems must be designed with a human-machine interface that allows the operator to supervise, understand, override, interrupt or deactivate the system.

Accuracy, robustness and cybersecurity (Article 15): Appropriate levels of accuracy, robustness against errors and manipulation, and resistance to adversarial attacks.
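Article 12's logging duty, for instance, is often implemented in practice as an append-only audit trail of timestamped inference records. The sketch below is purely illustrative: the record fields, file format and all names are hypothetical choices, not requirements of the Regulation.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical sketch of an Article 12-style event log: append-only,
# timestamped records that let the operator trace individual outputs
# back to a model version and input. Field names are illustrative.
@dataclass
class InferenceEvent:
    timestamp: float
    model_version: str
    input_ref: str       # reference to the input, not the raw data itself
    output_summary: str
    operator_id: str

def append_event(log_path: str, event: InferenceEvent) -> None:
    """Append one event as a JSON line; never rewrite earlier entries."""
    with open(log_path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(event)) + "\n")

append_event("audit.log", InferenceEvent(
    timestamp=time.time(),
    model_version="cv-screener-1.4",
    input_ref="application-2026-00123",
    output_summary="candidate shortlisted, score 0.82",
    operator_id="hr-reviewer-7",
))
```

Storing a reference to the input rather than the raw data keeps the audit trail itself from becoming a secondary personal-data store, which matters when the logged inputs are CVs or biometric samples.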

Conformity Assessment and CE Marking

Most high-risk systems listed in Annex III require a conformity assessment before deployment. For systems not subject to prior harmonisation legislation (i.e., Annex III systems that are not medical devices, machinery or safety products), conformity assessment may be conducted by means of self-assessment by the provider, provided the required technical documentation is generated and the EU declaration of conformity is issued.

High-risk remote biometric identification systems (Annex III, point 1) require conformity assessment by an external notified body where the provider has not applied harmonised standards or common specifications in full.

Following the assessment, the provider must register the system in the EU database managed by the European Commission before placing it on the market or putting it into service. Registration is a mandatory precondition for marketing the system in the EU.

Penalties: Fines of Up to €35 Million

Article 99 of the AI Act establishes a tiered administrative sanctions regime based on the severity of the infringement: up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for violation of the Article 5 prohibitions; up to €15 million or 3% of turnover for breach of other obligations of the Regulation; and up to €7.5 million or 1% of turnover for supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities.
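The "whichever is higher" rule means the effective ceiling scales with company size. The arithmetic can be sketched as follows, using the tier amounts summarised above; the function and dictionary names are illustrative.

```python
# Fine ceilings per Article 99, as summarised above: (fixed cap in EUR,
# share of total worldwide annual turnover). Whichever is higher applies.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(infringement: str, annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for a given infringement."""
    fixed_cap, turnover_share = FINE_TIERS[infringement]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with EUR 2bn turnover: 7% is EUR 140m, exceeding the EUR 35m cap.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```

For small companies the fixed cap dominates; for large groups the turnover percentage does, which is the point of the dual ceiling.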

At BMC, our legal team specialises in AI Act compliance mapping and implementation. Learn about our legal compliance services.
