EU AI Act Compliance: Avoid €35M Fines Before August 2026
Full compliance with the EU Artificial Intelligence Act: risk classification, conformity assessments, transparency obligations, and prohibited practice audits.
Why EU AI Act compliance matters for your business
Does this apply to your business?
Does your company have a complete inventory of every AI system it deploys, develops, or uses in its operations?
Do you know whether any of your AI systems fall into the prohibited or high-risk categories under the EU AI Act?
Have you reviewed your AI practices against the prohibitions that took effect in February 2025?
Do you have technical documentation, human oversight policies, and risk management procedures in place for your AI systems?
Our AI Act compliance process
AI system inventory and risk classification
We identify every AI system your company deploys, develops, or procures — whether as provider, importer, distributor, or deployer. We formally classify each system into the correct risk category: prohibited, high-risk, limited risk, or minimal risk.
Regulatory gap analysis
For each identified system, we analyse the applicable obligations and current compliance status: technical documentation, transparency measures, human oversight, risk management, and EU database registration requirements.
Compliance plan and remediation
We prioritise corrective actions by risk, regulatory deadlines, and operational impact. We design internal AI policies, conformity assessment procedures, and governance structures.
Implementation and regulatory monitoring
We support the implementation of technical and organisational controls, prepare the required documentation, and monitor regulatory developments from the EU AI Office and delegated acts.
The challenge
The EU AI Act is the world's most comprehensive AI regulation. Prohibitions on unacceptable AI practices took effect in February 2025. High-risk AI system obligations apply from August 2026. Fines reach EUR 35 million or 7% of global annual turnover, whichever is higher. Most companies do not know which regulatory category their AI systems fall into — or that they are already non-compliant.
Our solution
We map every AI system your organisation deploys, develops, or uses, classify each by risk level under the Regulation, identify the applicable obligations, and design the compliance roadmap. From acceptable-use policies to full conformity assessments for high-risk systems, we guide every step of the process.
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework governing artificial intelligence systems, applicable to any company that develops, places on the market, or uses AI within the European Union, regardless of where the company is established. It establishes a four-tier risk classification — prohibited, high-risk, limited-risk, and minimal-risk — with fines reaching EUR 35 million or 7% of global annual turnover for the most serious violations. Prohibitions on unacceptable AI practices took effect in February 2025, while full obligations for high-risk AI systems under Annex III apply from August 2026.
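The penalty structure described above is a cap set at the higher of two figures, which matters for large undertakings. A minimal sketch of the Art. 99 top-tier formula (the function name and defaults are ours; lower tiers of the Act use EUR 15M/3% and EUR 7.5M/1%):

```python
# Illustrative sketch of the AI Act maximum-fine formula (Art. 99):
# the cap is the HIGHER of the fixed amount and the turnover percentage.
# Figures below are for the top tier (prohibited practices).

def max_fine_eur(annual_turnover_eur: float,
                 fixed_cap_eur: float = 35_000_000,
                 turnover_pct: float = 0.07) -> float:
    """Return the maximum possible fine for a given worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# A company with EUR 1bn turnover faces a cap of EUR 70M, not EUR 35M:
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For any undertaking with worldwide turnover above EUR 500 million, the 7% figure, not the EUR 35 million headline, is the binding cap.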
Our regulatory technology team combines legal expertise in the EU AI Act with practical experience in information systems, data governance, and European digital regulation.
The Compliance Window Is Already Open
The AI Act is not a future regulation. Its first obligations — the prohibitions on unacceptable AI practices — became enforceable in February 2025. Companies using AI in recruitment, credit scoring, customer interaction, or any process affecting individuals in the EU are already subject to enforcement. The August 2026 deadline for high-risk AI system obligations appears distant, but conformity assessments, technical documentation, and risk management systems require months of preparatory work. Companies that begin in 2026 will not finish in time.
The Inventory Problem
The starting point is always the inventory. Most organisations lack a complete picture of all the AI systems they use: HR tools with automated screening algorithms, marketing platforms with behavioural segmentation, customer scoring systems, service chatbots, predictive analytics tools. Each must be classified within the Regulation’s taxonomy to determine which obligations apply. Misclassification — particularly underestimating the risk level — is the most common error and the one that creates the greatest enforcement exposure. A system that processes CV data to rank candidates is almost certainly high-risk under Annex III, regardless of how the vendor markets it.
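The inventory-and-classification step described above can be pictured as a simple data structure. The sketch below is hypothetical: the Annex III keyword list is deliberately abbreviated, the helper names are ours, and a first-pass triage like this never replaces legal analysis of each system.

```python
# Hypothetical sketch of an AI system inventory record with AI Act risk
# tiers. The keyword list is abbreviated and illustrative only; final
# classification requires case-by-case legal analysis.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

@dataclass
class AISystem:
    name: str
    vendor: str
    role: str       # "provider", "deployer", "importer", or "distributor"
    use_case: str
    risk_tier: RiskTier

# Abbreviated examples of Annex III use-case keywords (non-exhaustive):
ANNEX_III_KEYWORDS = {"recruitment", "credit scoring", "biometric id",
                      "exam scoring", "essential services"}

def provisional_tier(use_case: str) -> RiskTier:
    """First-pass triage only — a lawyer must confirm the final tier."""
    if any(kw in use_case.lower() for kw in ANNEX_III_KEYWORDS):
        return RiskTier.HIGH
    return RiskTier.MINIMAL

cv_ranker = AISystem("CV Ranker", "Acme HR", "deployer",
                     "recruitment screening and candidate ranking",
                     provisional_tier("recruitment screening"))
print(cv_ranker.risk_tier)  # RiskTier.HIGH
```

Note how the CV-ranking example lands in the high-risk tier immediately, which mirrors the point above: a candidate-ranking system is almost certainly Annex III high-risk regardless of vendor marketing.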
High-Risk System Obligations in Practice
For systems classified as high-risk, the obligations are substantial. The provider must maintain detailed technical documentation, implement a risk management system, ensure training data quality, guarantee system transparency and interpretability, design effective human oversight mechanisms, and register the system in the EU database before commercialisation. We coordinate this process with data protection obligations under the GDPR, which overlap significantly when AI systems process personal data and require coordinated impact assessments.
The Contract Layer
The AI Act restructures contractual relationships across the AI supply chain. Agreements with AI system providers must be reviewed to ensure that Regulation obligations are correctly allocated between provider and deployer, that access rights to the technical documentation required for compliance are in place, and that contracts address serious incident scenarios requiring authority notification. This contractual review is an integral component of our compliance service.
General Purpose AI Models: A Separate Regime
The AI Act creates a distinct regulatory layer for General Purpose AI (GPAI) models — large language models, foundation models, and other systems capable of performing a wide range of tasks. Providers of GPAI models must supply technical documentation, comply with EU copyright law regarding training data, and publish training data summaries. Models with systemic risk face additional obligations: adversarial testing, serious incident reporting to the EU AI Office, and enhanced cybersecurity measures. Organisations that deploy GPAI models in their operations must integrate this compliance layer into their own AI management framework alongside data protection obligations.
Practical Timeline for High-Risk System Compliance
The August 2026 deadline for Annex III high-risk systems sounds distant; it is not. The conformity assessment process for a high-risk system requires first completing the AI inventory, then classifying each system, then conducting the full conformity assessment for each high-risk system identified — which includes risk management documentation, training data quality analysis, human oversight design, and, where required, notified body engagement. A company with three high-risk systems completing a thorough conformity assessment for each should plan for six to nine months of structured work. That window, counting back from August 2026, has already narrowed significantly.
Our recommended sequencing: begin with the AI inventory (typically four to six weeks), proceed to classification and gap analysis (two to four weeks), then prioritise conformity assessments for the highest-risk systems. The compliance programme is built iteratively, and ongoing monitoring of EU AI Office guidance is a permanent function, not a one-time exercise. The enterprise risk management framework is the natural home for this ongoing oversight structure.
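The sequencing above can be sanity-checked with simple date arithmetic. A back-of-envelope sketch, assuming the upper-bound durations quoted (six weeks of inventory, four weeks of gap analysis, roughly nine months of conformity work for three systems run in parallel); the only hard date is the Annex III application deadline:

```python
from datetime import date, timedelta

# Planning assumptions taken from the durations quoted above; only the
# 2 August 2026 Annex III application date is fixed by the Regulation.
DEADLINE = date(2026, 8, 2)
phase_weeks = {
    "inventory": 6,                    # upper bound of "four to six weeks"
    "classification_gap_analysis": 4,  # upper bound of "two to four weeks"
    "conformity_assessments": 39,      # ~nine months, three systems in parallel
}

latest_start = DEADLINE - timedelta(weeks=sum(phase_weeks.values()))
print(latest_start)  # 2025-08-24 under these assumptions
```

Under these assumptions the latest safe start date fell in August 2025, which is why the window "has already narrowed significantly".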
The Contract Layer in Practice
Providers of high-risk AI systems must contractually grant deployers access to logs, training data summaries, and the technical documentation required for the deployer’s own compliance obligations. Many standard vendor contracts do not include these provisions — and without them, the deployer cannot demonstrate compliance. A compliance risk mapping exercise that places the AI Act within the multi-regulatory context enables management to view AI compliance not as an isolated obligation but as part of the organisation’s risk governance framework.
Building Compliance as a Competitive Asset
AI governance is the necessary internal complement to regulatory compliance. Organisations that manage their AI systems well do not merely avoid fines: they build trust assets with customers, partners, and regulators that generate real competitive advantage in a market where algorithmic opacity is increasingly unacceptable to institutional buyers, insurers, and counterparties conducting due diligence. For companies in financial services or healthcare, proactive AI Act compliance — demonstrated through a robust AI inventory, functioning governance committee, and documented conformity assessments — is increasingly the threshold for accessing institutional partnerships and regulated markets.
Sectors Most Affected and Their Specific Compliance Challenges
Financial services: banks, insurers, and FinTechs using AI for credit scoring, anti-fraud, anti-money laundering, or customer risk profiling are squarely within the high-risk category under Annex III (access to essential services). The overlap with DORA’s ICT risk management obligations creates a dual compliance requirement that must be addressed in an integrated framework to avoid duplication.
Human resources: companies using AI-powered tools for CV screening, candidate ranking, or employee performance evaluation must comply with AI Act deployer obligations (Article 26 AI Act) regardless of whether the AI system is developed in-house or purchased from a vendor. The deployer’s contract with the AI vendor must include access to technical documentation, incident reporting procedures, and confirmation that the system has completed the required conformity assessment.
Healthcare: AI-assisted diagnostic tools are regulated simultaneously as medical devices under the MDR (Regulation 2017/745) and as high-risk AI systems under the AI Act. The conformity assessment tracks overlap significantly, requiring coordinated management by legal, regulatory affairs, and technical teams.
Technology companies developing AI products: companies that develop AI-enabled products for deployment within the EU — whether as cloud-based services, embedded software, or integrated tools — are classified as AI Act providers and bear the primary conformity assessment burden. Technical documentation, risk management systems, and EU database registration must be completed before the product is placed on the EU market.
Company Size Segmentation
Startups and scale-ups developing AI products face disproportionate compliance burdens relative to their headcount: the AI Act’s provider obligations apply regardless of company size. We help early-stage AI companies build compliance into their product development lifecycle (SDLC governance checkpoints, technical documentation templates, conformity assessment procedures) at a cost proportionate to their funding stage, starting from a minimum viable compliance programme that can be scaled as the company grows.
SMEs deploying third-party AI tools are classified as deployers rather than providers and have lighter obligations: the primary focus is human oversight, AI literacy training under Art. 4 AI Act, and ensuring that the third-party tools they use have completed the provider’s conformity assessment. We design deployer compliance programmes for SMEs that address these obligations efficiently without requiring a dedicated compliance team.
Corporate groups deploying AI across multiple business units require enterprise-wide AI inventories, group-level AI governance frameworks, and coordinated conformity assessment programmes. For groups subject to CSRD reporting, AI governance disclosure under ESRS G1 creates an additional reporting layer that the compliance programme must feed.
Common Mistakes We Fix
- Assuming the AI Act only applies to AI companies. Any company that deploys an AI system — even a commercially available third-party tool used in HR, finance, or customer service — has AI Act obligations as a deployer. The deployer’s obligations are lighter than the provider’s, but they are real: human oversight, AI literacy training, and maintenance of use logs for high-risk systems.
- Not reviewing vendor contracts for AI Act compliance provisions. Providers of high-risk AI systems must give deployers access to technical documentation, training data summaries, and incident reporting mechanisms. Most standard SaaS and software contracts do not include these provisions. Companies that do not renegotiate their vendor agreements risk being in technical breach of their deployer obligations before they have even started using the system.
- Treating the AI inventory as a one-time exercise. New AI systems are adopted continuously — embedded in updated software, introduced by business units without central approval, or added through acquisitions. An AI inventory that is not updated regularly will become obsolete, and any new high-risk system added without a conformity assessment creates immediate compliance exposure.
- Underestimating the August 2026 timeline. The conformity assessment for a single high-risk AI system — including technical documentation, risk management system, data quality analysis, and human oversight design — takes two to four months when well-resourced. A company with multiple high-risk systems that begins in early 2026 cannot complete all assessments before the August deadline. The window is narrowing.
- Confusing AI Act compliance with ethical AI aspirations. The AI Act is a legal compliance obligation with real enforcement, real fines, and real timelines. Building an AI ethics policy or publishing AI principles does not satisfy AI Act obligations. Conformity assessment, technical documentation, and incident reporting are legal requirements independent of any voluntary governance commitments.
Geographic Coverage
AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) has supervisory jurisdiction over AI systems deployed or placed on the market in Spain. For companies operating across multiple EU Member States, each national competent authority has jurisdiction within its territory, but enforcement is coordinated through the European AI Office for GPAI models and cross-border high-risk systems. We advise companies across all Spanish territories on AI Act compliance, and coordinate with EU counsel in other member states for clients with multi-jurisdiction AI deployments.
The Prohibition List: Practices Already Unlawful Since February 2025
The AI Act’s prohibition provisions (Art. 5) took effect on 2 February 2025 and are already enforceable. Companies engaged in any of the following practices must cease immediately or face enforcement exposure:
- Subliminal manipulation: AI systems that deploy techniques operating below the threshold of a person’s consciousness to distort their behaviour in a way that causes or is likely to cause harm.
- Exploitation of vulnerabilities: systems that exploit the specific vulnerabilities of persons due to age (children), disability, or social or economic circumstances to distort behaviour in a harmful way.
- Social scoring: AI systems, whether deployed by public authorities or private actors, used to evaluate or classify natural persons based on their social behaviour or personal characteristics, with detrimental or unjustified treatment as a result.
- Real-time remote biometric identification in public spaces: biometric systems used in real-time on publicly accessible spaces for law enforcement purposes, with limited and strictly defined exceptions.
- Emotion recognition in workplaces and educational institutions: AI systems designed to infer emotions of natural persons in these contexts, with exceptions for medical or safety purposes.
- Biometric categorisation to infer sensitive attributes: systems that categorise individuals based on biometric data to infer race, political opinion, trade union membership, religious beliefs, or sexual orientation.
For most companies, the most practically relevant prohibition is the emotion recognition restriction in workplace settings. Companies using AI-powered employee monitoring tools, productivity analytics with emotional state detection, or recruitment assessment tools with emotion inference components must review whether those features are now prohibited.
How We Work
Our AI Act compliance practice combines regulatory lawyers with technical consultants who have hands-on experience in AI system auditing and data governance. An engagement typically follows three phases:
Phase 1 — AI inventory and risk classification (4-6 weeks): mapping every AI system in use (including embedded features in third-party software), classifying each against the AI Act risk tiers, and compiling the registration information required by Art. 49 AI Act for providers of high-risk systems.
Phase 2 — Gap analysis and compliance plan (2-4 weeks): assessment of gaps between the current state and the applicable obligations for each classified system, with a risk-prioritised remediation plan structured by regulatory deadline.
Phase 3 — Implementation and monitoring: conformity assessment design and execution for high-risk systems, vendor contract review and renegotiation for AI Act compliance provisions, AI literacy training programme design and delivery, and quarterly monitoring of EU AI Office guidance and delegated acts.
Our fixed-fee AI Act diagnostic package delivers a complete AI inventory, risk classification, and gap analysis within four weeks — the starting point for any compliance programme and the evidence required to demonstrate good-faith compliance efforts to AESIA in the event of an inquiry.
Worked Example: AI Act Compliance for a Spanish HR Technology Platform
A Spanish HR technology company (85 employees, EUR 12 million SaaS revenue) had built an AI-powered recruitment screening platform used by 200+ client companies across the EU to rank job applicants. The company’s legal team approached us after a client raised questions about the AI Act classification of the platform.
BMC’s analysis:
- The platform’s CV ranking and candidate scoring features are squarely within Annex III, Category 4 (employment and workforce management). The company is therefore an AI Act provider of a high-risk AI system.
- The prohibitions assessment identified that the platform’s facial analysis feature — which estimated candidate engagement during video interviews — was in the emotion recognition category prohibited in recruitment contexts under Art. 5(1)(f) AI Act (prohibition in force since February 2025). The feature was immediately disabled pending redesign.
- Conformity assessment scope: technical documentation of the AI model (training data, architecture, bias assessment methodology, known limitations), risk management system, human oversight design (client companies must be able to override any system ranking), and EU database registration.
- Client contract review: the company’s standard SaaS agreement did not include the Art. 26 AI Act provisions required for deployer clients (access to documentation, incident reporting procedures, usage instructions). 200+ contracts required updating.
Timeline: 6 months from instruction to conformity assessment completion and EU database registration. The prohibited emotion recognition feature was redesigned as an opt-in engagement analytics tool (limited-risk, Art. 50 transparency obligation only) rather than as an automated assessment system.
Regulatory Interaction: AI Act, GDPR, and Employment Law
AI systems used in employment contexts — recruitment, performance management, promotion decisions, termination risk scoring — operate at the intersection of three legal frameworks simultaneously:
EU AI Act (high-risk Annex III, Category 4): conformity assessment, human oversight, technical documentation, and EU database registration for providers; human oversight, AI literacy, and log maintenance for deployers.
GDPR (Regulation 2016/679): data protection impact assessment (DPIA) required for automated processing with significant effects on individuals; lawful basis for processing (legitimate interest or consent — consent is problematic in employment contexts due to the power imbalance); data subject rights, including the safeguards for solely automated decision-making under Art. 22 GDPR.
Spanish employment law (Estatuto de los Trabajadores — TRLET): algorithmic management obligations introduced by the “riders’ law” (RDL 9/2021) require employers to inform workers’ representatives of the parameters, rules, and instructions on which algorithms or AI systems that affect employment conditions are based. This is a specific Spanish obligation that goes beyond both GDPR and AI Act requirements in the employment context.
We design compliance programmes for employers and HR technology companies that address all three frameworks simultaneously, avoiding the duplication and gaps that arise from treating them as separate compliance projects. The unified approach produces a single documentation package — DPIA integrated with AI Act conformity assessment, human oversight procedure integrated with Art. 22 GDPR compliance, and algorithmic transparency notice integrated with TRLET Art. 64 ter — that satisfies all three regulators.
The Post-Market Monitoring Obligation
One of the most frequently overlooked AI Act obligations is the requirement for providers of high-risk AI systems to implement a post-market monitoring system (Art. 72 AI Act) that actively collects and analyses data on the performance of deployed systems throughout their lifecycle. This is not a one-time compliance check — it is an ongoing operational requirement.
The post-market monitoring system must:
- Define the performance metrics and fairness indicators that will be monitored throughout the system’s operational life.
- Establish data collection mechanisms that capture sufficient information to detect performance drift, emerging bias, or unexpected failure modes.
- Set alert thresholds that trigger review when performance indicators fall below defined levels.
- Define the corrective action protocol when the system no longer meets its conformity assessment conclusions.
For deployers of high-risk systems (companies using third-party AI tools), the post-market monitoring obligation is shared with the provider — but deployers must cooperate with the provider’s monitoring requirements and report serious incidents that occur in their use of the system. We design post-market monitoring frameworks that are proportionate to the risk profile of the system and integrated into the organisation’s existing quality and compliance management processes.
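The four requirements above can be sketched as a minimal alert check. The metric names and thresholds below are hypothetical examples: the Act mandates post-market monitoring, not any particular implementation.

```python
# Illustrative sketch of an alert-threshold check for post-market
# monitoring. Metric names and floor values are hypothetical; each
# provider must define metrics appropriate to its own system.

THRESHOLDS = {
    "accuracy": 0.90,            # alert if model accuracy drops below 90%
    "demographic_parity": 0.80,  # alert if selection-rate ratio falls below 0.8
}

def alerts(metrics: dict[str, float]) -> list[str]:
    """Return the names of monitored metrics that have fallen below threshold."""
    return [name for name, floor in THRESHOLDS.items()
            if metrics.get(name, 0.0) < floor]

# A quarterly monitoring run on logged production data:
current = {"accuracy": 0.87, "demographic_parity": 0.91}
print(alerts(current))  # ['accuracy']
```

Any metric returned by a check like this would trigger the corrective action protocol, and, where the drift is serious, the incident reporting obligations noted above.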
Real results in AI Act compliance
“Our product team had integrated several language models into our recruitment screening process without realising this placed us squarely in the high-risk category under the AI Act. BMC completed the full inventory, explained exactly what obligations applied, and designed a compliance plan we implemented over four months — well ahead of the key enforcement deadlines.”
Experienced team with local insight and international reach
What our EU AI Act compliance service includes
AI inventory and risk classification
Comprehensive mapping of AI systems in use, development, or commercialisation, with formal classification by risk category under the Regulation and analysis of the value chain (provider, importer, distributor, deployer).
Gap analysis and compliance roadmap
Analysis of gaps between current state and applicable obligations for each system, with a prioritised action plan structured by risk level and regulatory deadlines.
Conformity assessments
Design and execution of the conformity assessment process for high-risk systems: technical documentation, incident logging, bias analysis, robustness testing, and preparation for notified body review where required.
Internal AI policies and governance
Drafting of acceptable AI use policies, internal governance frameworks, human oversight procedures, and incident reporting mechanisms aligned with the AI Act.
Training and regulatory monitoring
Training for technology, compliance, and leadership teams on AI Act obligations, with ongoing monitoring of EU AI Office guidance and delegated acts.
Results that speak for themselves
GDPR Healthcare Spain: Compliance Case Study | BMC
AEPD investigation closed with no sanction. Full GDPR compliance achieved across all group centres within 6 months.
Criminal Compliance Spain: Construction Group Case | BMC
Criminal compliance program implemented in 6 months, whistleblower channel operational, AENOR certification obtained, and prosecution risk effectively mitigated.
AML compliance program for a real estate development group
SEPBLAC inspection passed with minor observations only, zero sanctions. Full AML program operational within 90 days.
Reference guides
Post-Brexit: your British company operating in Spain with the right structure
post-Brexit advisory for UK companies operating in Spain: entity structuring, customs and VAT, work permits for British nationals, UK-Spain tax treaty optimisation and data protection compliance.
AML compliance in Spain 2026: what your business must know about anti-money laundering regulation
Spain AML compliance 2026: SEPBLAC obligations, risk-based approach, PBC manual, UBO verification, and suspicious transaction reporting. Expert service from BMC.
Comprehensive legal services for businesses
Comprehensive legal advisory for businesses: commercial, employment, contracts, regulatory compliance, and dispute resolution. A dedicated legal team to protect your company.
Buy property in Spain with confidence — and without the horror stories
Buying property in Spain 2026: NIE, conveyancing, ITP tax, mortgage advice, and due diligence for foreign buyers. Step-by-step guide from BMC property lawyers.
The collective agreement that governs your workforce: understand it and negotiate from strength
Spain collective bargaining guide: union negotiation obligations, ERE/ERTE triggers, works council rights, agreement registration, and how BMC protects employer interests.
Your commercial lease agreement: get the clauses right before you sign
Spain commercial lease guide: LAU legal framework, rent review clauses, break options, guarantee structures, and key negotiation points for tenants and landlords.
Frequently asked questions about EU AI Act compliance
Start with a free diagnostic
Our team of specialists, with deep knowledge of the Spanish and European market, will guide you from day one.
Request your diagnostic
You may also be interested in
AI Governance
AI governance frameworks, ethics committees, algorithmic auditing, bias detection, and AI system registries for responsible organisations.
Compliance Risk Mapping
Comprehensive compliance risk mapping: regulatory obligation register, risk heat maps, multi-regulatory gap analysis (GDPR, NIS2, AI Act, AML), and regulatory change management.
Criminal Compliance
Corporate criminal compliance programmes to exempt or mitigate the criminal liability of legal entities under Article 31 bis of the Spanish Criminal Code.
Data Protection & Privacy
GDPR and LOPDGDD compliance, outsourced DPO, and comprehensive privacy management for businesses.
DORA Compliance (Digital Operational Resilience)
Full implementation of the DORA framework (Regulation 2022/2554) for financial entities: ICT risk management, incident reporting, resilience testing, and ICT third-party risk.
High-Risk AI Systems
AI Act compliance for high-risk AI systems: conformity assessments, technical documentation, CE marking, post-market monitoring, and EU database registration.
Key terms
EU AI Act
The EU Artificial Intelligence Act (Regulation EU 2024/1689) is the world's first comprehensive…
CISO (Chief Information Security Officer)
A Chief Information Security Officer (CISO) is the senior executive responsible for an…
Data Protection Officer (DPO)
A Data Protection Officer (DPO) is a designated individual responsible for overseeing an…
NIS2 Directive
The Network and Information Security Directive 2 (NIS2 — Directive 2022/2555/EU) is the EU's updated…
Privacy by Design
A GDPR principle (Article 25) requiring data protection to be integrated into the design of…
Talk to the partner in charge
Response within 24 business hours. First meeting free.