
High-Risk AI Systems: Prepare for EU AI Act Annex III Compliance

AI Act compliance for high-risk AI systems: conformity assessments, technical documentation, CE marking, post-market monitoring, and EU database registration.

8: high-risk categories in AI Act Annex III
€15M: maximum fine for breaching high-risk AI system obligations
Aug 2026: effective date for high-risk AI system compliance obligations
Quick assessment

Does this apply to your business?

Have you formally verified whether your AI systems fall within any of the eight high-risk categories in Annex III of the AI Act?

Do you have the complete technical documentation required by Annex IV of the Regulation for your critical AI systems?

Do you have a formal risk management system and a conformity assessment process for your high-risk AI systems?

Have you implemented effective human oversight and post-market monitoring for AI systems that influence decisions about individuals?


Our approach

Our high-risk AI compliance process

01

Classification confirmation and scope

We verify whether the system falls within the Annex III categories, analysing the actual use case, affected population, and degree of system autonomy. Misclassification is the most frequent source of regulatory risk.

02

Risk management system design

We implement the risk management system required by Article 9 of the AI Act: identification and assessment of foreseeable risks, mitigation measures, robustness testing, and a continuous review plan across the system's entire lifecycle.
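In practice, an Article 9 risk management system starts from a living risk register. The sketch below is purely illustrative: the field names, the 1-5 scoring scale, and the example risks are our assumptions about how such a register might be organised, not a template prescribed by the AI Act.

```python
from dataclasses import dataclass

# Illustrative risk register entry for an Article 9-style risk management
# system. Field names and the 1-5 scoring scale are assumptions for
# illustration only, not values prescribed by the AI Act.
@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    mitigation: str
    review_due: str   # next scheduled lifecycle review (ISO date)

    @property
    def severity(self) -> int:
        # Simple likelihood x impact score used to prioritise mitigation work.
        return self.likelihood * self.impact

register = [
    RiskEntry("R-001", "Scoring model disadvantages a protected group",
              likelihood=3, impact=5,
              mitigation="Quarterly fairness audit on holdout data",
              review_due="2026-02-01"),
    RiskEntry("R-002", "Model performance degrades on new data",
              likelihood=4, impact=3,
              mitigation="Automated drift monitoring with alert thresholds",
              review_due="2026-02-01"),
]

# Review highest-severity risks first.
for entry in sorted(register, key=lambda r: r.severity, reverse=True):
    print(entry.risk_id, entry.severity)
```

The value of the register is less the scoring arithmetic than the discipline it imposes: every identified risk carries a named mitigation and a scheduled review date across the system's lifecycle.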

03

Technical documentation and conformity assessment

We prepare the complete technical documentation package required by Annex IV: system description, training data, performance metrics, human oversight measures, and risk analysis. We manage the conformity assessment process, including coordination with notified bodies where required.

04

EU registration, CE marking, and post-market monitoring

We manage system registration in the EU database prior to commercialisation, coordinate the CE marking process for systems that require it, and design the post-market monitoring system with the required performance and fairness indicators.

The challenge

Annex III of the EU AI Act classifies AI systems in critical sectors as high-risk: recruitment screening, credit scoring, law enforcement, education, critical infrastructure, and justice. The obligations for these systems are substantial — technical documentation, conformity assessment, EU registration — and fines for non-compliance reach EUR 15 million or 3% of global annual turnover, whichever is higher. Many companies do not know their systems fall into this category.

Our solution

We manage the full compliance lifecycle for high-risk AI systems: from classification confirmation to technical documentation, conformity assessment, CE marking where required, and post-market monitoring systems. We serve as integrated legal and technical adviser for both providers and deployers.

High-risk AI systems are defined by Annex III of the EU AI Act (Regulation (EU) 2024/1689) as AI systems deployed in eight critical areas: biometrics, critical infrastructure management, education and vocational training, employment and worker management, access to essential private services (including credit scoring), law enforcement, migration and asylum management, and administration of justice and democratic processes. Providers of these systems must conduct a conformity assessment, maintain detailed technical documentation, implement a risk management system under Article 9, ensure human oversight, and register the system in the EU AI database before deployment. From August 2026, non-compliant high-risk AI systems cannot be lawfully placed on or used in the EU market, with fines reaching EUR 15 million or 3% of global annual turnover, whichever is higher.

Our AI Act compliance team combines legal expertise in the Regulation with technical experience in machine learning systems, algorithmic risk assessment, and regulated product certification processes.

The Annex III Reality Check

Annex III of the AI Act is the list most companies need to know and fewest have read carefully. Eight categories of AI systems — from credit scoring to recruitment screening, critical infrastructure management, and biometric identification — are subject to an obligations regime substantially more demanding than the rest of the Regulation. The question is not theoretical: if your organisation uses AI to make or influence decisions about individuals in any of these contexts, the August 2026 obligations apply directly to you.

Classification Is Not Obvious

Classification as high-risk depends not only on the technology, but on the specific use made of it. A facial recognition system used internally for access control at a facility may not be high-risk; the same system used to identify individuals in public spaces likely is. A machine learning model that helps managers prepare performance evaluations may sit in a grey area; the same model generating scores that directly determine promotions or dismissals probably falls within Annex III. Classification confirmation is the first service we provide because it is the foundation for everything that follows.

The Technical Documentation Requirement

The technical documentation required by Annex IV is extensive and specific. It is not a generic descriptive document but a set of technical evidence covering how the system functions, on what data it was trained, how its performance was evaluated, what biases were detected and how they were mitigated, and how human oversight is guaranteed in operational use. Preparing this documentation requires collaboration between the legal team and the technical team that developed or integrated the system — a process we coordinate end to end.

Integrating AI Act and GDPR Obligations

For high-risk AI systems that process personal data — which includes virtually all recruitment screening, financial scoring, and biometric identification systems — AI Act compliance must be coordinated with GDPR and data protection obligations. The AI Act’s fundamental rights impact assessment and the GDPR’s data protection impact assessment (DPIA) are not redundant but do overlap: an integrated process avoids duplication and ensures the two compliance frameworks are mutually consistent.

Post-Market Monitoring as Ongoing Obligation

Post-market monitoring is perhaps the most underestimated AI Act obligation for high-risk systems. Compliance at the moment of deployment is not sufficient: the Regulation requires a continuous system for collecting and analysing data on actual system operation. This means defining performance and fairness indicators, establishing alert thresholds that trigger reviews, and maintaining records that demonstrate the system continues to operate in accordance with the original conformity assessment. We design these monitoring systems to be operationally viable without generating disproportionate burden on technical teams — and to provide the audit trail that both the AI Act and AI governance best practice require.
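To make the idea of alert thresholds concrete, the sketch below checks a batch of operational indicators against fixed thresholds and returns the alerts that would trigger a review. The metric names and threshold values are hypothetical examples; in a real system they would be derived from the performance and fairness levels documented in the original conformity assessment.

```python
# Illustrative post-market monitoring check: compare the latest batch of
# indicators against alert thresholds and flag anything needing review.
# Metric names and threshold values are hypothetical, not AI Act figures.

ALERT_THRESHOLDS = {
    "accuracy": 0.90,            # minimum acceptable model accuracy
    "approval_rate_gap": 0.05,   # maximum gap in approval rates between groups
}

def check_indicators(latest: dict) -> list[str]:
    """Return triggered alerts; an empty list means no review is needed."""
    alerts = []
    if latest["accuracy"] < ALERT_THRESHOLDS["accuracy"]:
        alerts.append(f"accuracy {latest['accuracy']:.3f} below threshold")
    if latest["approval_rate_gap"] > ALERT_THRESHOLDS["approval_rate_gap"]:
        alerts.append(
            f"approval rate gap {latest['approval_rate_gap']:.3f} exceeds threshold"
        )
    return alerts

# Example monthly batch: accuracy is acceptable, but the fairness gap
# breaches its threshold and would trigger a documented review.
alerts = check_indicators({"accuracy": 0.93, "approval_rate_gap": 0.08})
```

Logging each check and its outcome, whether or not an alert fires, is what produces the audit trail demonstrating continued conformity.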

Track record

Real results in high-risk AI system compliance

Our personal loan scoring system had been in production for three years. When the legal team reviewed the AI Act, it became clear we were in Annex III with none of the obligations covered. BMC prepared all the technical documentation, managed the conformity assessment, and designed the monitoring system. We were compliant before the deadline without interrupting operations.

Iberian Credit Solutions S.A.
Chief Risk Officer

Experienced team with local insight and international reach

What you get

What our high-risk AI compliance service includes

Annex III classification confirmation

Legal and technical analysis to determine whether a specific system falls within the Annex III categories, considering actual use, value chain position, and affected population.

Risk management system (Art. 9)

Implementation of the AI Act's required risk management system: risk identification, likelihood and impact assessment, mitigation measures, and continuous review plan across the system lifecycle.

Complete technical documentation (Annex IV)

Preparation of the full technical documentation package: system description, training data, performance metrics, bias analysis, human oversight provisions, and update plan.

Conformity assessment and CE marking

Management of the conformity assessment process: self-assessment against harmonised standards or coordination with notified bodies, declaration of conformity, and CE marking process.

EU registration and post-market monitoring

Management of EU database registration and design of the post-market monitoring system with performance, fairness, and serious incident indicators.

FAQ

Frequently asked questions about high-risk AI systems under the EU AI Act

Which AI systems does Annex III classify as high-risk?

Annex III of the AI Act lists eight high-risk categories: (1) biometrics for identification or categorisation of individuals, (2) critical infrastructure management, (3) education and vocational training, (4) employment, worker management, and self-employment access, (5) access to essential private and public services and benefits, (6) law enforcement, (7) migration and asylum management, and (8) administration of justice and democratic processes. The European Commission can update this list via delegated acts.

Do all high-risk AI systems require assessment by a notified body?

Most Annex III systems undergo a conformity assessment based on internal control by the provider, applying harmonised technical standards. For AI systems in the biometrics category (Annex III, point 1), involvement of a notified body is required where harmonised standards have not been applied in full. We help determine exactly which procedure applies to your system.

What must the Annex IV technical documentation contain?

Annex IV requires comprehensive technical documentation: a general description of the system and its purpose, a description of system elements and the development process, the training, validation, and test data used, a description of risks and mitigation measures, the expected performance level and evaluation metrics, human oversight and result-interpretation measures, and a description of system changes throughout its lifecycle.

What obligations do deployers of high-risk AI systems have?

The deployer must: use the system in accordance with the provider's instructions, assign human oversight to competent individuals, monitor system operation and report serious incidents, conduct a fundamental rights impact assessment where Article 27 applies, inform workers when the system affects them, and retain automatically generated operation logs for at least six months.

What does CE marking mean under the AI Act?

CE marking under the AI Act certifies that the high-risk AI system has passed the applicable conformity assessment and meets the Regulation's requirements. It is mandatory for high-risk systems before they are commercialised in the EU. The process is similar to CE marking for other regulated products: technical documentation, declaration of conformity, and in some cases involvement of a notified body.

When is a fundamental rights impact assessment required?

Article 27 of the AI Act requires certain deployers of high-risk systems (bodies governed by public law, private entities providing public services, and deployers of credit scoring or insurance risk-pricing systems) to conduct a fundamental rights impact assessment before putting the system into operation. This assessment must identify the fundamental rights at stake, the likelihood and severity of impact on each right, the mitigation measures adopted, and whether oversight by relevant bodies is necessary.

What does post-market monitoring involve?

The AI Act requires providers of high-risk systems to maintain a post-market monitoring system that collects and analyses data on system operation after commercialisation. This system must detect performance deviations, new risks not identified during development, and potential biases or discriminatory effects that emerge under real conditions of use. Monitoring data feeds the continuous risk management process.

How does the AI Act affect financial institutions using credit scoring?

AI systems used for credit scoring or creditworthiness assessment of natural persons are high-risk under Annex III. Financial institutions using AI to make or influence credit, insurance, or investment decisions must comply with the AI Act in addition to sectoral regulation (DORA, CRR, IDD). The regulatory overlap requires an integrated compliance approach, which we coordinate with our financial regulation practice.
First step

Start with a free diagnostic

Our team of specialists, with deep knowledge of the Spanish and European market, will guide you from day one.


25+ years experience
5 offices in Spain
500+ clients served

Request your diagnostic

We respond within 4 business hours

Or call us directly: +34 910 917 811
