
EU AI Act Compliance: Avoid €35M Fines Before August 2026

Full compliance with the EU Artificial Intelligence Act: risk classification, conformity assessments, transparency obligations, and prohibited practice audits.

Why EU AI Act compliance matters for your business

€35M
Maximum fine for prohibited AI practices under the AI Act
Aug 2026
Deadline for high-risk AI system compliance obligations
7%
Of global turnover: maximum fine for AI providers
4.8/5 on Google (50+ reviews) · 25+ years’ experience · 5 offices in Spain · 500+ clients
Deadline 2 February 2025 (in force)

AI Act prohibitions

Prohibited AI practices are already enforceable; high-risk AI system obligations apply from August 2026.

Quick assessment

Does this apply to your business?

Does your company have a complete inventory of every AI system it deploys, develops, or uses in its operations?

Do you know whether any of your AI systems fall into the prohibited or high-risk categories under the EU AI Act?

Have you reviewed your AI practices against the prohibitions that took effect in February 2025?

Do you have technical documentation, human oversight policies, and risk management procedures in place for your AI systems?


Our approach

Our AI Act compliance process

01

AI system inventory and risk classification

We identify every AI system your company deploys, develops, or procures — whether as provider, importer, distributor, or deployer. We formally classify each system into the correct risk category: prohibited, high-risk, limited risk, or minimal risk.

02

Regulatory gap analysis

For each identified system, we analyse the applicable obligations and current compliance status: technical documentation, transparency measures, human oversight, risk management, and EU database registration requirements.

03

Compliance plan and remediation

We prioritise corrective actions by risk, regulatory deadlines, and operational impact. We design internal AI policies, conformity assessment procedures, and governance structures.

04

Implementation and regulatory monitoring

We support the implementation of technical and organisational controls, prepare the required documentation, and monitor regulatory developments from the EU AI Office and delegated acts.

The challenge

The EU AI Act is the world's most comprehensive AI regulation. Prohibitions on unacceptable AI practices took effect in February 2025. High-risk AI system obligations apply from August 2026. Fines reach EUR 35 million or 7% of global turnover. Most companies do not know which regulatory category their AI systems fall into — or that they are already non-compliant.

Our solution

We map every AI system your organisation deploys, develops, or uses, classify each by risk level under the Regulation, identify the applicable obligations, and design the compliance roadmap. From acceptable-use policies to full conformity assessments for high-risk systems, we guide every step of the process.

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework governing artificial intelligence systems, applicable to any company that develops, places on the market, or uses AI within the European Union, regardless of where the company is established. It establishes a four-tier risk classification — prohibited, high-risk, limited-risk, and minimal-risk — with fines reaching EUR 35 million or 7% of global annual turnover for the most serious violations. Prohibitions on unacceptable AI practices took effect in February 2025, while full obligations for high-risk AI systems under Annex III apply from August 2026.

Our regulatory technology team combines legal expertise in the EU AI Act with practical experience in information systems, data governance, and European digital regulation.

The Compliance Window Is Already Open

The AI Act is not a future regulation. Its first obligations — the prohibitions on unacceptable AI practices — became applicable on 2 February 2025, with the penalty regime in force since August 2025. Companies using AI in recruitment, credit scoring, customer interaction, or any process affecting individuals in the EU are already subject to enforcement. The August 2026 deadline for high-risk AI system obligations appears distant, but conformity assessments, technical documentation, and risk management systems require months of preparatory work. Companies that begin in 2026 will not finish in time.

The Inventory Problem

The starting point is always the inventory. Most organisations lack a complete picture of all the AI systems they use: HR tools with automated screening algorithms, marketing platforms with behavioural segmentation, customer scoring systems, service chatbots, predictive analytics tools. Each must be classified within the Regulation’s taxonomy to determine which obligations apply. Misclassification — particularly underestimating the risk level — is the most common error and the one that creates the greatest enforcement exposure. A system that processes CV data to rank candidates is almost certainly high-risk under Annex III, regardless of how the vendor markets it.
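A structured inventory is easiest to keep current and audit when each system is recorded in a consistent, machine-readable form. The sketch below is purely illustrative — the field names, enum values, and the example entry are our own shorthand, not an official AI Act schema:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # Art. 5 practices - must cease
    HIGH_RISK = "high-risk"        # Annex III - full conformity assessment
    LIMITED_RISK = "limited-risk"  # Art. 50 transparency obligations
    MINIMAL_RISK = "minimal-risk"  # voluntary codes of conduct

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    business_function: str  # e.g. "recruitment", "credit scoring"
    role: Role              # the company's role in the AI value chain
    risk_tier: RiskTier
    conformity_assessed: bool = False

# Illustrative entry: a third-party CV-ranking tool used by HR.
# Candidate ranking falls under Annex III, Category 4, so it is
# high-risk regardless of how the vendor markets it.
cv_screener = AISystemRecord(
    name="CV ranking module",
    vendor="(third-party SaaS)",
    business_function="recruitment screening",
    role=Role.DEPLOYER,
    risk_tier=RiskTier.HIGH_RISK,
)

# Flag systems that need immediate attention in the gap analysis.
inventory = [cv_screener]
urgent = [s for s in inventory
          if s.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH_RISK)
          and not s.conformity_assessed]
print([s.name for s in urgent])  # → ['CV ranking module']
```

Keeping the inventory in this kind of structured form is what makes the later gap analysis and the ongoing-update discipline (see "Common Mistakes We Fix") practical rather than aspirational.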

High-Risk System Obligations in Practice

For systems classified as high-risk, the obligations are substantial. The provider must maintain detailed technical documentation, implement a risk management system, ensure training data quality, guarantee system transparency and interpretability, design effective human oversight mechanisms, and register the system in the EU database before commercialisation. We coordinate this process with data protection obligations under the GDPR, which overlap significantly when AI systems process personal data and require coordinated impact assessments.

The Contract Layer

The AI Act restructures contractual relationships across the AI supply chain. Agreements with AI system providers must be reviewed to ensure that Regulation obligations are correctly allocated between provider and deployer, that access rights to the technical documentation required for compliance are in place, and that contracts address serious incident scenarios requiring authority notification. This contractual review is an integral component of our compliance service.

General Purpose AI Models: A Separate Regime

The AI Act creates a distinct regulatory layer for General Purpose AI (GPAI) models — large language models, foundation models, and other systems capable of performing a wide range of tasks. Providers of GPAI models must supply technical documentation, comply with EU copyright law regarding training data, and publish training data summaries. Models with systemic risk face additional obligations: adversarial testing, serious incident reporting to the EU AI Office, and enhanced cybersecurity measures. Organisations that deploy GPAI models in their operations must integrate this compliance layer into their own AI management framework alongside data protection obligations.

Practical Timeline for High-Risk System Compliance

The August 2026 deadline for Annex III high-risk systems sounds distant; it is not. The conformity assessment process for a high-risk system requires first completing the AI inventory, then classifying each system, then conducting the full conformity assessment for each high-risk system identified — which includes risk management documentation, training data quality analysis, human oversight design, and, where required, notified body engagement. A company with three high-risk systems completing a thorough conformity assessment for each should plan for six to nine months of structured work. That window, counting back from August 2026, has already narrowed significantly.

Our recommended sequencing: begin with the AI inventory (typically four to six weeks), proceed to classification and gap analysis (two to four weeks), then prioritise conformity assessments for the highest-risk systems. The compliance programme is built iteratively, and ongoing monitoring of EU AI Office guidance is a permanent function, not a one-time exercise. The enterprise risk management framework is the natural home for this ongoing oversight structure.

The Contract Layer in Practice

Where the contract review described above identifies gaps, the remediation is specific: providers of high-risk AI systems must contractually grant deployers access to logs, training data summaries, and the technical documentation required for the deployer’s own compliance obligations. Many standard vendor contracts do not include these provisions — and without them, the deployer cannot demonstrate compliance. A compliance risk mapping exercise that places the AI Act within the multi-regulatory context enables management to view AI compliance not as an isolated obligation but as part of the organisation’s risk governance framework.

Building Compliance as a Competitive Asset

AI governance is the necessary internal complement to regulatory compliance. Organisations that manage their AI systems well do not merely avoid fines: they build trust assets with customers, partners, and regulators that generate real competitive advantage in a market where algorithmic opacity is increasingly unacceptable to institutional buyers, insurers, and counterparties conducting due diligence. For companies in financial services or healthcare, proactive AI Act compliance — demonstrated through a robust AI inventory, functioning governance committee, and documented conformity assessments — is increasingly the threshold for accessing institutional partnerships and regulated markets.

Sectors Most Affected and Their Specific Compliance Challenges

Financial services: banks, insurers, and FinTechs using AI for credit scoring, anti-fraud, anti-money laundering, or customer risk profiling are squarely within the high-risk category under Annex III (access to essential services). The overlap with DORA’s ICT risk management obligations creates a dual compliance requirement that must be addressed in an integrated framework to avoid duplication.

Human resources: companies using AI-powered tools for CV screening, candidate ranking, or employee performance evaluation must comply with AI Act deployer obligations (Article 26 AI Act) regardless of whether the AI system is developed in-house or purchased from a vendor. The HR director’s contractual relationship with the AI vendor must include access to technical documentation, incident reporting procedures, and confirmation that the system has completed the required conformity assessment.

Healthcare: AI-assisted diagnostic tools are regulated simultaneously as medical devices under the MDR (Regulation 2017/745) and as high-risk AI systems under the AI Act. The conformity assessment tracks overlap significantly, requiring coordinated management by legal, regulatory affairs, and technical teams.

Technology companies developing AI products: companies that develop AI-enabled products for deployment within the EU — whether as cloud-based services, embedded software, or integrated tools — are classified as AI Act providers and bear the primary conformity assessment burden. Technical documentation, risk management systems, and EU database registration must be completed before the product is placed on the EU market.

Company Size Segmentation

Startups and scale-ups developing AI products face disproportionate compliance burdens relative to their headcount: the AI Act’s provider obligations apply regardless of company size. We help early-stage AI companies build compliance into their product development lifecycle (SDLC governance checkpoints, technical documentation templates, conformity assessment procedures) at a cost proportionate to their funding stage, starting from a minimum viable compliance programme that can be scaled as the company grows.

SMEs deploying third-party AI tools are classified as deployers rather than providers and have lighter obligations: the primary focus is human oversight, AI literacy training under Art. 4 AI Act, and ensuring that the third-party tools they use have completed the provider’s conformity assessment. We design deployer compliance programmes for SMEs that address these obligations efficiently without requiring a dedicated compliance team.

Corporate groups deploying AI across multiple business units require enterprise-wide AI inventories, group-level AI governance frameworks, and coordinated conformity assessment programmes. For groups subject to CSRD reporting, AI governance disclosure under ESRS G1 creates an additional reporting layer that the compliance programme must feed.

Common Mistakes We Fix

  1. Assuming the AI Act only applies to AI companies. Any company that deploys an AI system — even a commercially available third-party tool used in HR, finance, or customer service — has AI Act obligations as a deployer. The deployer’s obligations are lighter than the provider’s, but they are real: human oversight, AI literacy training, and maintenance of use logs for high-risk systems.

  2. Not reviewing vendor contracts for AI Act compliance provisions. Providers of high-risk AI systems must give deployers access to technical documentation, training data summaries, and incident reporting mechanisms. Most standard SaaS and software contracts do not include these provisions. Companies that do not renegotiate their vendor agreements risk being in technical breach of their deployer obligations before they have even started using the system.

  3. Treating the AI inventory as a one-time exercise. New AI systems are adopted continuously — embedded in updated software, introduced by business units without central approval, or added through acquisitions. An AI inventory that is not updated regularly will become obsolete, and any new high-risk system added without a conformity assessment creates immediate compliance exposure.

  4. Underestimating the August 2026 timeline. The conformity assessment for a single high-risk AI system — including technical documentation, risk management system, data quality analysis, and human oversight design — takes two to four months when well-resourced. A company with multiple high-risk systems that begins in early 2026 cannot complete all assessments before the August deadline. The window is narrowing.

  5. Confusing AI Act compliance with ethical AI aspirations. The AI Act is a legal compliance obligation with real enforcement, real fines, and real timelines. Building an AI ethics policy or publishing AI principles does not satisfy AI Act obligations. Conformity assessment, technical documentation, and incident reporting are legal requirements independent of any voluntary governance commitments.

Geographic Coverage

AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) has supervisory jurisdiction over AI systems deployed or placed on the market in Spain. For companies operating across multiple EU Member States, each national competent authority has jurisdiction within its territory, but enforcement is coordinated through the European AI Office for GPAI models and cross-border high-risk systems. We advise companies across all Spanish territories on AI Act compliance, and coordinate with EU counsel in other member states for clients with multi-jurisdiction AI deployments.

The Prohibition List: Practices Already Unlawful Since February 2025

The AI Act’s prohibition provisions (Art. 5) took effect on 2 February 2025 and are already enforceable. Companies engaged in any of the following practices must cease them immediately or face enforcement exposure:

  • Subliminal manipulation: AI systems that deploy techniques operating below the threshold of a person’s consciousness to distort their behaviour in a way that causes or is likely to cause harm.
  • Exploitation of vulnerabilities: systems that exploit the specific vulnerabilities of persons due to age (children), disability, or social or economic circumstances to distort behaviour in a harmful way.
  • Social scoring: AI systems that evaluate or classify natural persons based on their social behaviour or personal characteristics, where the resulting score leads to detrimental or disproportionate treatment. The final text of the Regulation covers private actors as well as public authorities.
  • Real-time remote biometric identification in public spaces: biometric systems used in real-time on publicly accessible spaces for law enforcement purposes, with limited and strictly defined exceptions.
  • Emotion recognition in workplaces and educational institutions: AI systems designed to infer emotions of natural persons in these contexts, with exceptions for medical or safety purposes.
  • Biometric categorisation to infer sensitive attributes: systems that categorise individuals based on biometric data to infer race, political opinion, trade union membership, religious beliefs, or sexual orientation.

For most companies, the most practically relevant prohibition is the emotion recognition restriction in workplace settings. Companies using AI-powered employee monitoring tools, productivity analytics with emotional state detection, or recruitment assessment tools with emotion inference components must review whether those features are now prohibited.

How We Work

Our AI Act compliance practice combines regulatory lawyers with technical consultants who have hands-on experience in AI system auditing and data governance. An engagement typically follows three phases:

Phase 1 — AI inventory and risk classification (4-6 weeks): mapping every AI system in use (including embedded features in third-party software), classifying each against the AI Act risk tiers, and preparing the documentation needed for EU database registration under Art. 49 AI Act where the company acts as a provider of high-risk systems.

Phase 2 — Gap analysis and compliance plan (2-4 weeks): assessment of gaps between the current state and the applicable obligations for each classified system, with a risk-prioritised remediation plan structured by regulatory deadline.

Phase 3 — Implementation and monitoring: conformity assessment design and execution for high-risk systems, vendor contract review and renegotiation for AI Act compliance provisions, AI literacy training programme design and delivery, and quarterly monitoring of EU AI Office guidance and delegated acts.

Our fixed-fee AI Act diagnostic package delivers a complete AI inventory, risk classification, and gap analysis within four weeks — the starting point for any compliance programme and the evidence required to demonstrate good-faith compliance efforts to AESIA in the event of an inquiry.

Worked Example: AI Act Compliance for a Spanish HR Technology Platform

A Spanish HR technology company (85 employees, EUR 12 million SaaS revenue) had built an AI-powered recruitment screening platform used by 200+ client companies across the EU to rank job applicants. The company’s legal team approached us after a client raised questions about the AI Act classification of the platform.

BMC’s analysis:

  • The platform’s CV ranking and candidate scoring features are squarely within Annex III, Category 4 (employment and workforce management). The company is therefore an AI Act provider of a high-risk AI system.
  • The prohibitions assessment identified that the platform’s facial analysis feature — which estimated candidate engagement during video interviews — was in the emotion recognition category prohibited in recruitment contexts under Art. 5(1)(f) AI Act (prohibition in force since February 2025). The feature was immediately disabled pending redesign.
  • Conformity assessment scope: technical documentation of the AI model (training data, architecture, bias assessment methodology, known limitations), risk management system, human oversight design (client companies must be able to override any system ranking), and EU database registration.
  • Client contract review: the company’s standard SaaS agreement did not include the Art. 26 AI Act provisions required for deployer clients (access to documentation, incident reporting procedures, usage instructions). 200+ contracts required updating.

Timeline: 6 months from instruction to conformity assessment completion and EU database registration. The prohibited emotion recognition feature was redesigned as an opt-in engagement analytics tool (limited-risk, Art. 50 transparency obligation only) rather than as an automated assessment system.

Regulatory Interaction: AI Act, GDPR, and Employment Law

AI systems used in employment contexts — recruitment, performance management, promotion decisions, termination risk scoring — operate at the intersection of three legal frameworks simultaneously:

EU AI Act (high-risk Annex III, Category 4): conformity assessment, human oversight, technical documentation, and EU database registration for providers; human oversight, AI literacy, and log maintenance for deployers.

GDPR (Regulation 2016/679): data protection impact assessment (DPIA) required for automated processing with significant effects on individuals; lawful basis for processing (legitimate interest or consent — consent is problematic in employment contexts due to the power imbalance); and data subject rights, including the safeguards against solely automated decision-making under Art. 22 GDPR.

Spanish employment law (Estatuto de los Trabajadores — TRLET): algorithmic management obligations introduced by the “riders’ law” (RDL 9/2021) require employers to inform workers’ representatives of the parameters, rules, and instructions on which algorithms or AI systems that affect employment conditions are based. This is a specific Spanish obligation that goes beyond both GDPR and AI Act requirements in the employment context.

We design compliance programmes for employers and HR technology companies that address all three frameworks simultaneously, avoiding the duplication and gaps that arise from treating them as separate compliance projects. The unified approach produces a single documentation package — DPIA integrated with AI Act conformity assessment, human oversight procedure integrated with Art. 22 GDPR compliance, and algorithmic transparency notice integrated with Art. 64.4(d) of the Estatuto de los Trabajadores — that satisfies all three regulators.

The Post-Market Monitoring Obligation

One of the most frequently overlooked AI Act obligations is the requirement for providers of high-risk AI systems to implement a post-market monitoring system (Art. 72 AI Act) that actively collects and analyses data on the performance of deployed systems throughout their lifecycle. This is not a one-time compliance check — it is an ongoing operational requirement.

The post-market monitoring system must:

  • Define the performance metrics and fairness indicators that will be monitored throughout the system’s operational life.
  • Establish data collection mechanisms that capture sufficient information to detect performance drift, emerging bias, or unexpected failure modes.
  • Set alert thresholds that trigger review when performance indicators fall below defined levels.
  • Define the corrective action protocol when the system no longer meets its conformity assessment conclusions.

For deployers of high-risk systems (companies using third-party AI tools), the post-market monitoring obligation is shared with the provider — but deployers must cooperate with the provider’s monitoring requirements and report serious incidents that occur in their use of the system. We design post-market monitoring frameworks that are proportionate to the risk profile of the system and integrated into the organisation’s existing quality and compliance management processes.
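In operational terms, the alert-threshold element described above can start as something very simple: a rolling performance metric compared against the level documented in the conformity assessment. A minimal sketch, assuming a screening model whose monthly accuracy is logged — the metric, window, and threshold values are illustrative and would be set per the system's risk profile:

```python
from statistics import mean

# Accuracy documented at conformity assessment time, and the alert
# threshold below which the corrective action protocol is triggered.
# Both values are illustrative assumptions for this sketch.
BASELINE_ACCURACY = 0.91
ALERT_THRESHOLD = 0.85

def check_drift(monthly_accuracy: list[float], window: int = 3) -> str:
    """Compare the rolling mean of recent accuracy against the thresholds."""
    recent = mean(monthly_accuracy[-window:])
    if recent < ALERT_THRESHOLD:
        return (f"ALERT: rolling accuracy {recent:.2f} below threshold "
                f"- trigger corrective action protocol")
    if recent < BASELINE_ACCURACY:
        return f"WATCH: rolling accuracy {recent:.2f} below documented baseline"
    return "OK"

# Six months of logged accuracy, drifting downward:
history = [0.91, 0.90, 0.89, 0.87, 0.84, 0.82]
print(check_drift(history))  # rolling mean 0.84 -> ALERT
```

A real monitoring system would track several metrics (including fairness indicators across protected groups, as the bullet list above notes) and feed alerts into the incident and corrective-action workflow, but the threshold-and-trigger structure is the same.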

Track record

Real results in AI Act compliance

Our product team had integrated several language models into our recruitment screening process without realising this placed us squarely in the high-risk category under the AI Act. BMC completed the full inventory, explained exactly what obligations applied, and designed a compliance plan we implemented over four months — well ahead of the key enforcement deadlines.

Talentbridge Europe S.L.
Chief Compliance Officer

Experienced team with local insight and international reach

What our EU AI Act compliance service includes

AI inventory and risk classification

Comprehensive mapping of AI systems in use, development, or commercialisation, with formal classification by risk category under the Regulation and analysis of the value chain (provider, importer, distributor, deployer).

Gap analysis and compliance roadmap

Analysis of gaps between current state and applicable obligations for each system, with a prioritised action plan structured by risk level and regulatory deadlines.

Conformity assessments

Design and execution of the conformity assessment process for high-risk systems: technical documentation, incident logging, bias analysis, robustness testing, and preparation for notified body review where required.

Internal AI policies and governance

Drafting of acceptable AI use policies, internal governance frameworks, human oversight procedures, and incident reporting mechanisms aligned with the AI Act.

Training and regulatory monitoring

Training for technology, compliance, and leadership teams on AI Act obligations, with ongoing monitoring of EU AI Office guidance and delegated acts.

Guides

Reference guides

Post-Brexit: your British company operating in Spain with the right structure

Post-Brexit advisory for UK companies operating in Spain: entity structuring, customs and VAT, work permits for British nationals, UK-Spain tax treaty optimisation, and data protection compliance.

View guide

AML compliance in Spain 2026: what your business must know about anti-money laundering regulation

Spain AML compliance 2026: SEPBLAC obligations, risk-based approach, PBC manual, UBO verification, and suspicious transaction reporting. Expert service from BMC.

View guide

Comprehensive legal services for businesses

Comprehensive legal advisory for businesses: commercial, employment, contracts, regulatory compliance, and dispute resolution. A dedicated legal team to protect your company.

View guide

Buy property in Spain with confidence — and without the horror stories

Buying property in Spain 2026: NIE, conveyancing, ITP tax, mortgage advice, and due diligence for foreign buyers. Step-by-step guide from BMC property lawyers.

View guide

The collective agreement that governs your workforce: understand it and negotiate from strength

Spain collective bargaining guide: union negotiation obligations, ERE/ERTE triggers, works council rights, agreement registration, and how BMC protects employer interests.

View guide

Your commercial lease agreement: get the clauses right before you sign

Spain commercial lease guide: LAU legal framework, rent review clauses, break options, guarantee structures, and key negotiation points for tenants and landlords.

View guide

Service Lead

Sofia Navarro Estevez

Associate - Legal Division

LLM in Technology Law and Digital Regulation, King's College London
Law Degree, Universidade de Santiago de Compostela

FAQ

Frequently asked questions about EU AI Act compliance

The AI Act applies to any company that places AI systems on the EU market or uses AI systems in the EU, regardless of where the company is established. It covers providers who develop and sell AI systems as well as deployers who integrate them into their processes. Non-EU companies that sell or use AI affecting EU citizens and residents are also subject to the Regulation.

The AI Act prohibits outright systems classified as unacceptable risk: subliminal manipulation or exploitation of vulnerabilities, social scoring with detrimental consequences, real-time remote biometric identification in public spaces (with limited exceptions), emotion recognition in workplace and educational settings, and biometric categorisation to infer sensitive attributes such as race or sexual orientation.

Annex III lists the high-risk categories: AI in critical infrastructure, student assessment in education, recruitment screening and performance evaluation, access to essential services (credit, insurance), law enforcement, migration and asylum, and administration of justice. These systems require conformity assessment, detailed technical documentation, and in most cases prior registration in the EU database.

Deployers must ensure the system is used in accordance with the provider's instructions, implement effective human oversight, report serious incidents to authorities, inform employees when AI affecting them is used, and conduct fundamental rights impact assessments when the system interacts with the public. They must also maintain operation logs.

GPAI models (such as large language models) have their own regulatory regime. Providers must supply technical documentation, comply with copyright law regarding training data, and publish a summary of training content. Models presumed to carry systemic risk (training computation exceeding 10^25 FLOPs) carry additional obligations: adversarial testing, serious incident reporting, and enhanced cybersecurity measures.

The AI Act carries the highest penalties in European technology regulation: up to EUR 35 million or 7% of global annual turnover for violations of prohibited practice rules; up to EUR 15 million or 3% for other obligation breaches; and up to EUR 7.5 million or 1.5% for providing incorrect information. Reduced caps apply for SMEs and startups.

It depends on the category. Most Annex III systems can undergo a self-assessment by the provider, provided harmonised standards are followed. However, systems in categories such as biometrics, and critical infrastructure in certain configurations, require assessment by external notified bodies. We help determine which conformity procedure applies to each system.

The AI Act and GDPR are complementary and frequently overlap, particularly when AI systems process personal data. The AI Act's fundamental rights impact assessment must be coordinated with the GDPR's data protection impact assessment (DPIA). Our team integrates both frameworks into a single process, avoiding duplication and ensuring a coherent compliance structure.

Non-compliance with the EU AI Act (Regulation 2024/1689) can result in fines of up to EUR 35 million or 7% of global annual group turnover for prohibited AI practices, up to EUR 15 million or 3% for breaches of high-risk system obligations, and up to EUR 7.5 million or 1.5% for providing incorrect information to supervisory authorities. In Spain, the supervisory authority is AESIA (Agencia Española de Supervisión de la Inteligencia Artificial). The penalty provisions have applied since 2 August 2025, with no additional grace period for prohibited practices.

The AI Act applies in phases. Prohibitions on unacceptable-risk AI practices have applied since 2 February 2025, with the corresponding penalty regime in force since 2 August 2025. Obligations for high-risk AI systems listed in Annex III — including systems used for personnel selection, credit scoring, law enforcement, education, and critical infrastructure — apply from 2 August 2026. Obligations for general-purpose AI (GPAI) models have applied since August 2025. Companies operating high-risk systems must complete their conformity assessment and EU database registration by summer 2026.

The maximum penalty for operating an AI system classified as a prohibited practice — such as subliminal manipulation, social scoring, or real-time remote biometric identification in public spaces — is EUR 35 million or 7% of global annual group turnover, whichever is higher. For SMEs and startups, the Regulation provides for lower caps, though the precise amount depends on the national supervisory authority's assessment. These are the highest fines in European technology regulation, exceeding those under the GDPR.
First step

Start with a free diagnostic

Our team of specialists, with deep knowledge of the Spanish and European market, will guide you from day one.

EU AI Act Compliance

Legal

Talk to the partner in charge

Response within 24 business hours. First meeting free.
