High-Risk AI Systems: Prepare for EU AI Act Annex III Compliance
AI Act compliance for high-risk AI systems: conformity assessments, technical documentation, CE marking, post-market monitoring, and EU database registration.
Why high-risk AI classification is the most common compliance error
Does this apply to your business?
Have you formally verified whether your AI systems fall within any of the eight high-risk categories in Annex III of the AI Act?
Do you have the complete technical documentation required by Annex IV of the Regulation for your critical AI systems?
Do you have a formal risk management system and a conformity assessment process for your high-risk AI systems?
Have you implemented effective human oversight and post-market monitoring for AI systems that influence decisions about individuals?
Our high-risk AI compliance process
Classification confirmation and scope
We verify whether the system falls within the Annex III categories, analysing the actual use case, affected population, and degree of system autonomy. Misclassification is the most frequent source of regulatory risk.
Risk management system design
We implement the risk management system required by Article 9 of the AI Act: identification and assessment of foreseeable risks, mitigation measures, robustness testing, and a continuous review plan across the system's entire lifecycle.
Technical documentation and conformity assessment
We prepare the complete technical documentation package required by Annex IV: system description, training data, performance metrics, human oversight measures, and risk analysis. We manage the conformity assessment process, including coordination with notified bodies where required.
EU registration, CE marking, and post-market monitoring
We manage system registration in the EU database before the system is placed on the market or put into service, coordinate the CE marking process for systems that require it, and design the post-market monitoring system with the required performance and fairness indicators.
The challenge
Annex III of the EU AI Act classifies AI systems in critical sectors as high-risk: recruitment screening, credit scoring, law enforcement, education, critical infrastructure, and justice. The obligations for these systems are substantial — technical documentation, conformity assessment, EU registration — and fines for non-compliance reach EUR 15 million. Many companies do not know their systems fall into this category.
Our solution
We manage the full compliance lifecycle for high-risk AI systems: from classification confirmation to technical documentation, conformity assessment, CE marking where required, and post-market monitoring systems. We serve as integrated legal and technical adviser for both providers and deployers.
High-risk AI systems are defined by Annex III of the EU AI Act (Regulation 2024/1689) as AI systems deployed in eight critical areas: biometrics, critical infrastructure management, education and vocational training, employment and worker management, access to essential private services (including credit scoring), law enforcement, migration and asylum management, and administration of justice and democratic processes. Providers of these systems must conduct a conformity assessment, maintain detailed technical documentation, implement a risk management system under Article 9, ensure human oversight, and register the system in the EU AI database before deployment. From August 2026, non-compliant high-risk AI systems cannot be lawfully placed on or used in the EU market, with fines reaching EUR 15 million or 3% of global turnover.
Our AI Act compliance team combines legal expertise in the Regulation with technical experience in machine learning systems, algorithmic risk assessment, and regulated product certification processes.
The Annex III Reality Check
Annex III of the AI Act is the list most companies need to know and fewest have read carefully. Eight categories of AI systems — from credit scoring to recruitment screening, critical infrastructure management, and biometric identification — are subject to an obligations regime substantially more demanding than the rest of the Regulation. The question is not theoretical: if your organisation uses AI to make or influence decisions about individuals in any of these contexts, the August 2026 obligations apply directly to you.
Classification Is Not Obvious
Classification as high-risk depends not only on the technology, but on the specific use made of it. A facial recognition system used internally for access control at a facility may not be high-risk; the same system used to identify individuals in public spaces likely is. A machine learning model that helps managers prepare performance evaluations may sit in a grey area; the same model generating scores that directly determine promotions or dismissals probably falls within Annex III. Classification confirmation is the first service we provide because it is the foundation for everything that follows.
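By way of illustration only, the first-pass triage logic behind this confirmation step can be sketched in a few lines. The use-case keys and their Annex III mappings below are simplified assumptions invented for the example, not a legal determination:

```python
# Hypothetical triage helper: the use-case keys and their Annex III
# mappings are simplified assumptions for illustration, not legal advice.
ANNEX_III_FLAGS = {
    "public_space_biometric_id": "biometrics (Annex III, point 1)",
    "recruitment_or_promotion_scoring": "employment (Annex III, point 4)",
    "creditworthiness_assessment": "essential private services (Annex III, point 5)",
}

def triage(use_cases: list[str]) -> list[str]:
    """Flag use cases that warrant a formal high-risk classification review.

    A flag here is a starting point only: the actual deployment context,
    affected population, and degree of autonomy still decide the outcome.
    """
    return [ANNEX_III_FLAGS[u] for u in use_cases if u in ANNEX_III_FLAGS]

# Internal access control alone raises no flag; the same biometric
# capability pointed at public spaces does.
print(triage(["internal_access_control"]))    # []
print(triage(["public_space_biometric_id"]))  # ['biometrics (Annex III, point 1)']
```

The point of the sketch is the asymmetry it encodes: the same underlying model can fall inside or outside Annex III depending solely on how it is used.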
The Technical Documentation Requirement
The technical documentation required by Annex IV is extensive and specific. It is not a generic descriptive document but a set of technical evidence covering how the system functions, on what data it was trained, how its performance was evaluated, what biases were detected and how they were mitigated, and how human oversight is guaranteed in operational use. Preparing this documentation requires collaboration between the legal team and the technical team that developed or integrated the system — a process we coordinate end to end.
Integrating AI Act and GDPR Obligations
For high-risk AI systems that process personal data — which includes virtually all recruitment screening, financial scoring, and biometric identification systems — AI Act compliance must be coordinated with GDPR and data protection obligations. The AI Act’s fundamental rights impact assessment and the GDPR’s data protection impact assessment (DPIA) are not redundant but do overlap: an integrated process avoids duplication and ensures the two compliance frameworks are mutually consistent.
Post-Market Monitoring as Ongoing Obligation
Post-market monitoring is perhaps the most underestimated AI Act obligation for high-risk systems. Compliance at the moment of deployment is not sufficient: the Regulation requires a continuous system for collecting and analysing data on actual system operation. This means defining performance and fairness indicators, establishing alert thresholds that trigger reviews, and maintaining records that demonstrate the system continues to operate in accordance with the original conformity assessment. We design these monitoring systems to be operationally viable without generating disproportionate burden on technical teams — and to provide the audit trail that both the AI Act and AI governance best practice require.
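A minimal sketch of what such threshold-driven monitoring can look like follows. The indicator names, values, and alert thresholds are hypothetical: the Regulation requires a monitoring system, not these particular numbers.

```python
# Illustrative sketch: indicator names, values, and alert thresholds are
# hypothetical; the Regulation prescribes monitoring, not these numbers.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float
    threshold: float
    higher_is_worse: bool = True  # True: alert above threshold; False: alert below

    def breached(self) -> bool:
        if self.higher_is_worse:
            return self.value > self.threshold
        return self.value < self.threshold

def review_indicators(indicators: list[Indicator]) -> list[str]:
    """Return the indicators whose alert threshold is breached, each of
    which would trigger a documented review under the monitoring plan."""
    return [i.name for i in indicators if i.breached()]

monthly = [
    Indicator("approval_rate_drift", value=0.04, threshold=0.05),
    Indicator("auc", value=0.71, threshold=0.75, higher_is_worse=False),
    Indicator("demographic_parity_gap", value=0.02, threshold=0.08),
]
print(review_indicators(monthly))  # ['auc']: performance fell below its floor
```

Each breach, and the review it triggers, becomes part of the record demonstrating continued conformity with the original assessment.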
Annex III Use Cases: Understanding When You Are In Scope
The eight Annex III categories that trigger high-risk obligations are worth understanding in practical terms. Companies that use AI to make or influence decisions affecting people in any of the following contexts are likely in scope by August 2026:
Biometric identification and categorisation. Real-time and post-event remote biometric identification of individuals in public spaces. Biometric categorisation systems that assign sensitive attributes. Emotion recognition systems used in professional or educational settings.
Critical infrastructure management. AI systems that manage or operate critical digital infrastructure, road traffic, water, gas, electricity, or heating supply, where failures could endanger lives or disrupt essential services.
Education and vocational training. Systems that determine access to educational institutions, evaluate student performance, assess exam cheating, or determine educational pathways. Any AI-assisted scoring system that affects a student’s academic trajectory.
Employment, worker management, and access to self-employment. Systems used for recruitment, CV filtering, job candidate selection, employee monitoring, performance evaluation, and promotion or dismissal decisions. This category has direct relevance for HR technology companies and large employers who have integrated AI into hiring workflows.
Access to essential private services and public services and benefits. AI used in creditworthiness assessment, insurance risk scoring, and emergency services dispatching. For financial institutions using AI in credit or insurance decisions, the AI Act high-risk requirements add a compliance layer on top of existing sector regulation.
Law enforcement. AI systems used for crime prediction, profiling, polygraphs, and risk assessment tools used by police authorities. Most relevant for security-sector companies and suppliers to law enforcement and other public authorities.
Migration, asylum, and border control. AI used in visa and asylum processing, risk assessment of persons crossing borders, and document authenticity verification.
Administration of justice and democratic processes. AI systems used by courts or dispute resolution bodies to research and interpret facts and law, and AI systems intended to influence the outcome of elections or referendums or individuals' voting behaviour.
Compliance Timeline and Regulatory Deadlines
The AI Act’s phased compliance timeline creates urgent near-term obligations:
By 2 February 2025 (already passed): prohibited AI practices became unlawful. This includes social scoring by public authorities, real-time remote biometric identification in public spaces (subject to narrow exceptions), and AI systems that exploit vulnerabilities or deploy subliminal manipulation techniques.
By 2 August 2025: compliance requirements for General-Purpose AI (GPAI) models and providers. Companies using GPAI through API integrations may have obligations under this timeline.
By 2 August 2026: full compliance obligations for high-risk AI systems under Annex III. This is the critical deadline for most commercial AI deployments. Providers and deployers of high-risk AI must have their risk management systems, technical documentation, conformity assessments, and post-market monitoring fully operational by this date.
The conformity assessment procedure for most Annex III systems can be completed through self-assessment with documentation, without an external notified body — which makes documentation quality and process rigour the decisive factor. Our team manages this process in coordination with the technical teams that developed or integrated the system.
Regulatory Framework: AI Act Art. 9-16 and Implementing Acts
The compliance obligations for high-risk AI systems are set out in Arts. 9-16 of the AI Act, supported by delegated and implementing acts issued by the European Commission. The principal obligations are:
Art. 9 — Risk management system: providers must establish, implement, document, and maintain a risk management system throughout the lifecycle of the AI system. This requires continuous iterative risk identification, risk estimation and evaluation, the adoption of risk mitigation measures, and testing. The risk management system must be documented and subject to regular systematic review and updating.
Art. 10 — Data and data governance: training, validation, and testing data must be subject to appropriate data governance and management practices. This includes analysis of possible biases, identification of any possible gaps or shortcomings, and consideration of how the design choices for data collection may affect the fundamental rights of affected persons.
Art. 11 — Technical documentation: providers must prepare technical documentation before placing the system on the market or putting it into service. Annex IV specifies the minimum content: general description of the system, description of the elements and development process, information on the monitoring and functioning plan, and instructions for use.
Art. 13 — Transparency and provision of information to deployers: high-risk AI systems must be designed and developed to ensure that their operation is sufficiently transparent for deployers to interpret the system’s output and use it appropriately. Instructions for use must accompany the system in electronic form.
Art. 14 — Human oversight: high-risk AI systems must be designed and developed so that they can be effectively overseen by natural persons during the period of use. Oversight measures must enable the responsible person to monitor the system’s performance, identify anomalies, and intervene or interrupt the system when necessary.
Art. 16 — Provider obligations: a comprehensive set of compliance obligations including the registration of the system in the EU database (for most Annex III categories), the maintenance of records and logs, post-market monitoring, incident reporting, and the designation of an EU representative by non-EU providers.
Sectors Most Affected
Recruitment and HR technology: Annex III, Category 4 (employment and workforce management) applies to AI systems used for recruitment or selection of natural persons, to make decisions on promotion and termination, to allocate tasks based on individual behaviour, and to monitor and evaluate performance. This captures the majority of modern AI-powered HR tools — ATS systems with ranking capabilities, employee monitoring platforms, performance evaluation tools.
Financial services: Annex III, Category 5 (access to essential private services) covers creditworthiness assessment, insurance risk scoring, and emergency services dispatching. Banks, insurers, and FinTechs using AI in any of these functions are providers or deployers of high-risk systems with full Art. 9-16 obligations.
Healthcare: AI systems intended to be used as safety components of medical devices under the Medical Device Regulation (MDR) or In-Vitro Diagnostic Medical Device Regulation (IVDR) are classified as high-risk under Article 6(1) and Annex I of the AI Act rather than under Annex III. Healthcare AI has the most complex compliance profile, with overlapping AI Act, MDR/IVDR, and GDPR obligations requiring coordinated management.
Education technology: Annex III, Category 3 covers AI systems used for determining access or admission to educational and vocational training institutions, evaluating learning outcomes, and detecting prohibited behaviour in examinations. EdTech companies deploying AI assessment tools face an August 2026 deadline that many have not yet begun addressing.
Worked Example: Conformity Assessment for a Credit Scoring System
A Spanish FinTech (120 employees, EUR 15 million revenue) operating an AI-driven consumer credit scoring system for digital lending sought advice on AI Act compliance when its compliance team raised a classification query.
BMC’s analysis:
- Classification confirmed: Annex III, Category 5 (creditworthiness assessment). The company is an AI Act provider of a high-risk system.
- Gap analysis: the company had model monitoring infrastructure in place (from existing model risk management practice), but had no formal Art. 9 risk management system, no Annex IV technical documentation, and no EU database registration.
- Bias assessment: the existing model fairness analysis (conducted for internal model risk purposes) was repurposed and supplemented to meet the Art. 10 data governance requirements. Protected attributes analysis extended to all GDPR Article 9 special categories.
- Human oversight design: the existing credit analyst review process was redesigned as the formal Art. 14 human oversight mechanism, with defined intervention thresholds and documentation requirements.
- Conformity assessment completed over 4 months; EU database registration submitted.
- Deployer contract templates updated to provide the Art. 13 information required for all institutional customers using the system.
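One fairness indicator of the kind repurposed in engagements like this, demographic parity difference between group approval rates, can be sketched as follows. The group labels and figures are invented for illustration, and Art. 10 does not mandate any specific metric:

```python
# Minimal sketch of one fairness indicator (demographic parity difference).
# Group labels, counts, and any reference threshold are invented for the
# example; Art. 10 does not mandate a specific metric.
def demographic_parity_difference(approvals: dict[str, tuple[int, int]]) -> float:
    """approvals maps group -> (approved, total); returns the largest gap
    between per-group approval rates."""
    rates = [approved / total for approved, total in approvals.values()]
    return max(rates) - min(rates)

gap = demographic_parity_difference({
    "group_a": (80, 100),  # 0.80 approval rate
    "group_b": (72, 100),  # 0.72 approval rate
})
print(round(gap, 2))  # 0.08
```

In practice a metric like this is computed per protected attribute and tracked over time, feeding both the Art. 10 bias assessment and the post-market monitoring indicators.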
Common Mistakes We Fix
- Treating the risk management system as a one-off document rather than a lifecycle process. Art. 9 requires a continuous risk management system, not a document produced for the conformity assessment. The system must be integrated into the organisation’s MLOps and model governance processes and updated as the system evolves.
- Underestimating the technical documentation scope. Annex IV’s requirements are more extensive than most providers expect. The technical documentation is not a marketing summary — it is a comprehensive technical record of the system’s design, development, training data, testing methodology, and known limitations that must be sufficient for a national competent authority to assess the system’s conformity.
- Not reviewing the deployer’s obligations separately from the provider’s. Companies that both develop and deploy their own high-risk AI systems have obligations as both provider and deployer — which are not identical. The provider obligations are more extensive (conformity assessment, EU database registration), but deployer obligations (human oversight, log maintenance, incident reporting) apply continuously during operational use and must be embedded in operational processes.
- Missing the EU database registration requirement. Most Annex III high-risk systems must be registered in the EU AI systems database before they are placed on the market or put into service. This registration is a mandatory step that cannot be retrospectively completed — a system in use without registration is already non-compliant from the registration deadline.
- Not coordinating with GDPR compliance. High-risk AI systems that process personal data require a DPIA under GDPR Article 35 as well as the AI Act conformity assessment. The two processes are complementary but not identical. Companies that conduct them separately, with different teams and different timelines, produce inconsistent compliance documentation that creates vulnerabilities in both frameworks.
Geographic Coverage
AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) serves as the Spanish competent authority for high-risk AI systems under the AI Act. We advise companies across all Spanish territories on conformity assessment, EU database registration, and ongoing post-market monitoring obligations. For AI systems deployed in multiple EU Member States, we coordinate with national competent authorities in each jurisdiction through our EU regulatory network.
How We Work
Our high-risk AI compliance practice operates as an integrated team of regulatory lawyers and technical AI auditors. A typical engagement follows four phases:
Phase 1 — Classification and scoping (2-3 weeks): formal Annex III classification analysis, identification of provider/deployer roles, initial gap assessment against Art. 9-16 obligations.
Phase 2 — Technical documentation and risk management system (6-10 weeks): working with the client’s technical team to produce Annex IV-compliant documentation, design the Art. 9 risk management system, and conduct the data governance and bias assessment required by Art. 10.
Phase 3 — Conformity assessment and registration (2-4 weeks): finalisation of the conformity assessment documentation, EU Declaration of Conformity, and EU AI systems database registration.
Phase 4 — Ongoing monitoring: post-market monitoring system design and activation, annual review of the risk management system, and incident reporting support.
Fixed-fee classification packages are available for companies that need a rapid and documented Annex III classification conclusion — for example, when a client, regulator, or insurer has requested confirmation of the AI Act risk category of a specific system.
Interaction Between High-Risk AI and the GDPR: Coordinated Compliance
High-risk AI systems that process personal data — which includes virtually all recruitment, credit scoring, healthcare, and biometric systems — must comply with both the AI Act and the GDPR simultaneously. The principal interaction points are:
DPIA and AI Act fundamental rights impact assessment: GDPR Article 35 requires a Data Protection Impact Assessment before processing that is likely to result in high risk to individuals. The AI Act (Art. 27, applicable to deployers of high-risk Annex III systems in specific categories) additionally requires a fundamental rights impact assessment. We design a unified assessment process that satisfies both obligations simultaneously, avoiding the duplication of conducting two separate assessments with different methodologies on the same system.
Art. 22 GDPR right not to be subject to automated decisions: individuals have the right not to be subject solely to automated decisions that produce legal or similarly significant effects, and the right to request a human review. For high-risk AI systems that generate recommendations affecting individuals — credit decisions, recruitment rankings, performance evaluations — the “human oversight” requirement of Art. 14 AI Act and the “human review” right of Art. 22 GDPR are overlapping but not identical obligations. The human oversight under the AI Act must be genuine (capable of actually overriding the system) rather than formal (a human who rubber-stamps AI recommendations). Designing AI systems that satisfy both requirements requires legal and technical coordination from the design phase.
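A minimal sketch of a decision flow in which the AI output is one input and a named natural person genuinely decides might look like the following. The field names and the validation rule are our illustrative assumptions, not statutory requirements:

```python
# Sketch of a flow where the AI output is an input and a named natural
# person decides. Field names and the validation rule are illustrative
# assumptions, not statutory requirements.
from dataclasses import dataclass

@dataclass
class ReviewedDecision:
    ai_recommendation: str   # e.g. "reject" from the scoring model
    reviewer: str            # named natural person (Art. 14 AI Act)
    reviewer_decision: str   # may override the AI recommendation
    rationale: str           # documented reason, retained in the audit log

def final_outcome(d: ReviewedDecision) -> str:
    """The reviewer's decision is authoritative; releasing an outcome
    without a named reviewer and a documented rationale is blocked."""
    if not d.reviewer or not d.rationale:
        raise ValueError("human review must be documented before release")
    return d.reviewer_decision

d = ReviewedDecision("reject", "A. Reviewer", "approve", "income verified manually")
print(final_outcome(d))  # approve
```

The design choice the sketch encodes is the one that matters legally: the system cannot emit an outcome at all until a named human has recorded a decision and a rationale, which is what separates genuine oversight from rubber-stamping.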
Data subject rights and transparency: GDPR transparency obligations (Arts. 13-14) require individuals to be informed when their data is processed by AI systems. The AI Act’s Art. 50 transparency obligations for certain AI systems (chatbots, deepfake tools) add a further layer. For high-risk systems making significant decisions about individuals, the combined transparency obligation requires clear, understandable explanation of how the AI system contributes to the decision.
International data transfers and high-risk AI: high-risk AI systems trained on EU personal data that is processed by providers located outside the EU trigger both GDPR international transfer obligations (Standard Contractual Clauses, adequacy decisions) and AI Act obligations (provider obligations apply regardless of location if the system is used in the EU). We coordinate both sets of obligations for companies using US-based or other non-EU AI providers for high-risk functions.
Limitation Periods and Proactive Compliance
The August 2026 deadline for Annex III obligations creates a clear compliance urgency, but the regulatory consequences of non-compliance extend beyond the initial deadline. AESIA will have enforcement powers to conduct market surveillance audits, request documentation, and impose fines for systems already in use without completed conformity assessments. The limitation periods for AI Act infringements will be aligned with Spanish administrative law principles — typically four years for the most serious violations.
Companies that begin the conformity assessment process before August 2026 — even if they do not complete it before the deadline — are in a fundamentally better position than those who have not begun. Good-faith compliance efforts, documented from an early stage, are a relevant factor in AESIA’s enforcement discretion and in any administrative appeal of a sanction.
Real results in high-risk AI system compliance
“Our personal loan scoring system had been in production for three years. When the legal team reviewed the AI Act, it became clear we were in Annex III with none of the obligations covered. BMC prepared all the technical documentation, managed the conformity assessment, and designed the monitoring system. We were compliant before the deadline without interrupting operations.”
Experienced team with local insight and international reach
What our high-risk AI compliance service includes
Annex III classification confirmation
Legal and technical analysis to determine whether a specific system falls within the Annex III categories, considering actual use, value chain position, and affected population.
Risk management system (Art. 9)
Implementation of the AI Act's required risk management system: risk identification, likelihood and impact assessment, mitigation measures, and continuous review plan across the system lifecycle.
Complete technical documentation (Annex IV)
Preparation of the full technical documentation package: system description, training data, performance metrics, bias analysis, human oversight provisions, and update plan.
Conformity assessment and CE marking
Management of the conformity assessment process: self-assessment against harmonised standards or coordination with notified bodies, declaration of conformity, and CE marking process.
EU registration and post-market monitoring
Management of EU database registration and design of the post-market monitoring system with performance, fairness, and serious incident indicators.
Results that speak for themselves
Criminal Compliance Spain: Construction Group Case
Criminal compliance program implemented in 6 months, whistleblower channel operational, AENOR certification obtained, and prosecution risk effectively mitigated.
GDPR Healthcare Spain: Compliance Case Study
AEPD investigation closed with no sanction. Full GDPR compliance achieved across all group centres within 6 months.
AML compliance program for a real estate development group
SEPBLAC inspection passed with minor observations only, zero sanctions. Full AML program operational within 90 days.
Reference guides
Post-Brexit: your British company operating in Spain with the right structure
Post-Brexit advisory for UK companies operating in Spain: entity structuring, customs and VAT, work permits for British nationals, UK-Spain tax treaty optimisation, and data protection compliance.
AML compliance in Spain 2026: what your business must know about anti-money laundering regulation
Spain AML compliance 2026: SEPBLAC obligations, risk-based approach, PBC manual, UBO verification, and suspicious transaction reporting. Expert service from BMC.
Comprehensive legal services for businesses
Comprehensive legal advisory for businesses: commercial, employment, contracts, regulatory compliance, and dispute resolution. A dedicated legal team to protect your company.
Buy property in Spain with confidence — and without the horror stories
Buying property in Spain 2026: NIE, conveyancing, ITP tax, mortgage advice, and due diligence for foreign buyers. Step-by-step guide from BMC property lawyers.
The collective agreement that governs your workforce: understand it and negotiate from strength
Spain collective bargaining guide: union negotiation obligations, ERE/ERTE triggers, works council rights, agreement registration, and how BMC protects employer interests.
Your commercial lease agreement: get the clauses right before you sign
Spain commercial lease guide: LAU legal framework, rent review clauses, break options, guarantee structures, and key negotiation points for tenants and landlords.
Start with a free diagnostic
Our team of specialists, with deep knowledge of the Spanish and European market, will guide you from day one.
Request your diagnostic
You may also be interested in
EU AI Act Compliance
Full compliance with the EU Artificial Intelligence Act: risk classification, conformity assessments, transparency obligations, and prohibited practice audits.
AI Governance
AI governance frameworks, ethics committees, algorithmic auditing, bias detection, and AI system registries for responsible organisations.
Cybersecurity Audit
Security posture assessment, compliance audits (ENS, ISO 27001, NIS2), vulnerability assessment, penetration testing management, and third-party risk evaluation.
Data Protection & Privacy
GDPR and LOPDGDD compliance, outsourced DPO, and comprehensive privacy management for businesses.
Key terms
EU AI Act
The EU Artificial Intelligence Act (Regulation EU 2024/1689) is the world's first comprehensive…
Data Protection Officer (DPO)
A Data Protection Officer (DPO) is a designated individual responsible for overseeing an…
NIS2 Directive
The Network and Information Security Directive 2 (NIS2 — Directive 2022/2555/EU) is the EU's updated…
Privacy by Design
A GDPR principle (Article 25) requiring data protection to be integrated into the design of…
Talk to the partner in charge
Response within 24 business hours. First meeting free.