
EU AI Act Compliance: Avoid €35M Fines Before August 2026

Full compliance with the EU Artificial Intelligence Act: risk classification, conformity assessments, transparency obligations, and prohibited practice audits.

€35M
Maximum fine for prohibited AI practices under the AI Act
Aug 2026
Deadline for high-risk AI system compliance obligations
7%
Of global turnover: maximum fine for AI providers
4.8/5 on Google · 50+ reviews · 25+ years experience · 5 offices in Spain · 500+ clients
In force since 2 February 2025

AI Act prohibitions

Prohibited AI practices have been enforceable since 2 February 2025. High-risk AI system obligations apply from August 2026.

Quick assessment

Does this apply to your business?

Does your company have a complete inventory of every AI system it deploys, develops, or uses in its operations?

Do you know whether any of your AI systems fall into the prohibited or high-risk categories under the EU AI Act?

Have you reviewed your AI practices against the prohibitions that took effect in February 2025?

Do you have technical documentation, human oversight policies, and risk management procedures in place for your AI systems?


Our approach

Our AI Act compliance process

01

AI system inventory and risk classification

We identify every AI system your company deploys, develops, or procures — whether as provider, importer, distributor, or deployer. We formally classify each system into the correct risk category: prohibited, high-risk, limited risk, or minimal risk.

02

Regulatory gap analysis

For each identified system, we analyse the applicable obligations and current compliance status: technical documentation, transparency measures, human oversight, risk management, and EU database registration requirements.

03

Compliance plan and remediation

We prioritise corrective actions by risk, regulatory deadlines, and operational impact. We design internal AI policies, conformity assessment procedures, and governance structures.

04

Implementation and regulatory monitoring

We support the implementation of technical and organisational controls, prepare the required documentation, and monitor regulatory developments from the EU AI Office and delegated acts.

The challenge

The EU AI Act is the world's most comprehensive AI regulation. Prohibitions on unacceptable AI practices took effect in February 2025. High-risk AI system obligations apply from August 2026. Fines reach EUR 35 million or 7% of global turnover. Most companies do not know which regulatory category their AI systems fall into, or that they are already non-compliant.

Our solution

We map every AI system your organisation deploys, develops, or uses, classify each by risk level under the Regulation, identify the applicable obligations, and design the compliance roadmap. From acceptable-use policies to full conformity assessments for high-risk systems, we guide every step of the process.

The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework governing artificial intelligence systems, applicable to any company that develops, places on the market, or uses AI within the European Union, regardless of where the company is established. It establishes a four-tier risk classification — prohibited, high-risk, limited-risk, and minimal-risk — with fines reaching EUR 35 million or 7% of global annual turnover for the most serious violations. Prohibitions on unacceptable AI practices took effect in February 2025, while full obligations for high-risk AI systems under Annex III apply from August 2026.

Our regulatory technology team combines legal expertise in the EU AI Act with practical experience in information systems, data governance, and European digital regulation.

The Compliance Window Is Already Open

The AI Act is not a future regulation. Its first obligations, the prohibitions on unacceptable AI practices, became enforceable in February 2025. Companies using AI in recruitment, credit scoring, customer interaction, or any process affecting individuals in the EU are already subject to enforcement. The August 2026 deadline for high-risk AI system obligations appears distant, but conformity assessments, technical documentation, and risk management systems require months of preparatory work. Companies that begin in 2026 will not finish in time.

The Inventory Problem

The starting point is always the inventory. Most organisations lack a complete picture of all the AI systems they use: HR tools with automated screening algorithms, marketing platforms with behavioural segmentation, customer scoring systems, service chatbots, predictive analytics tools. Each must be classified within the Regulation’s taxonomy to determine which obligations apply. Misclassification — particularly underestimating the risk level — is the most common error and the one that creates the greatest enforcement exposure. A system that processes CV data to rank candidates is almost certainly high-risk under Annex III, regardless of how the vendor markets it.

High-Risk System Obligations in Practice

For systems classified as high-risk, the obligations are substantial. The provider must maintain detailed technical documentation, implement a risk management system, ensure training data quality, guarantee system transparency and interpretability, design effective human oversight mechanisms, and register the system in the EU database before commercialisation. We coordinate this process with data protection obligations under the GDPR, which overlap significantly when AI systems process personal data and require coordinated impact assessments.

The Contract Layer

The AI Act restructures contractual relationships across the AI supply chain. Agreements with AI system providers must be reviewed to ensure that Regulation obligations are correctly allocated between provider and deployer, that access rights to the technical documentation required for compliance are in place, and that contracts address serious incident scenarios requiring authority notification. This contractual review is an integral component of our compliance service.

Building Compliance as a Competitive Asset

AI governance is the necessary internal complement to regulatory compliance. Organisations that manage their AI systems well do not merely avoid fines: they build trust assets with customers, partners, and regulators that generate real competitive advantage in a market where algorithmic opacity is increasingly unacceptable to institutional buyers, insurers, and counterparties conducting due diligence.

Track record

Real results in AI Act compliance

Our product team had integrated several language models into our recruitment screening process without realising this placed us squarely in the high-risk category under the AI Act. BMC completed the full inventory, explained exactly what obligations applied, and designed a compliance plan we implemented over four months — well ahead of the key enforcement deadlines.

Talentbridge Europe S.L.
Chief Compliance Officer

Experienced team with local insight and international reach

What you get

What our EU AI Act compliance service includes

AI inventory and risk classification

Comprehensive mapping of AI systems in use, development, or commercialisation, with formal classification by risk category under the Regulation and analysis of the value chain (provider, importer, distributor, deployer).

Gap analysis and compliance roadmap

Analysis of gaps between current state and applicable obligations for each system, with a prioritised action plan structured by risk level and regulatory deadlines.

Conformity assessments

Design and execution of the conformity assessment process for high-risk systems: technical documentation, incident logging, bias analysis, robustness testing, and preparation for notified body review where required.

Internal AI policies and governance

Drafting of acceptable AI use policies, internal governance frameworks, human oversight procedures, and incident reporting mechanisms aligned with the AI Act.

Training and regulatory monitoring

Training for technology, compliance, and leadership teams on AI Act obligations, with ongoing monitoring of EU AI Office guidance and delegated acts.

FAQ

Frequently asked questions about EU AI Act compliance

Who does the EU AI Act apply to?

The AI Act applies to any company that places AI systems on the EU market or uses AI systems in the EU, regardless of where the company is established. It covers providers who develop and sell AI systems as well as deployers who integrate them into their processes. Non-EU companies whose AI affects people in the EU are also subject to the Regulation.

Which AI practices are prohibited?

The AI Act absolutely prohibits systems classified as unacceptable risk: subliminal manipulation or exploitation of vulnerabilities, social scoring of individuals, real-time remote biometric identification in publicly accessible spaces (with limited exceptions), emotion recognition in workplace and educational settings, and biometric categorisation to infer sensitive attributes such as race or sexual orientation.

Which systems count as high-risk?

Annex III lists the high-risk categories: AI in critical infrastructure, student assessment in education, recruitment screening and performance evaluation, access to essential services (credit, insurance), law enforcement, migration and asylum, and administration of justice. These systems require conformity assessment, detailed technical documentation, and in most cases prior registration in the EU database.

What obligations do deployers have?

Deployers must ensure the system is used in accordance with the provider's instructions, implement effective human oversight, report serious incidents to authorities, inform employees when AI affecting them is used, and conduct fundamental rights impact assessments when the system interacts with the public. They must also maintain operation logs.

How are general-purpose AI (GPAI) models regulated?

GPAI models (such as large language models) have their own regulatory regime. Providers must supply technical documentation, comply with copyright law regarding training data, and publish a summary of training content. Models with systemic risk (training computation exceeding 10^25 FLOPs) carry additional obligations: adversarial testing, serious incident reporting, and enhanced cybersecurity measures.

What are the penalties?

The AI Act carries the highest penalties in European technology regulation: up to EUR 35 million or 7% of global annual turnover for violations of prohibited practice rules; up to EUR 15 million or 3% for other obligation breaches; and up to EUR 7.5 million or 1.5% for providing incorrect information. Reduced caps apply for SMEs and startups.

Do we need a notified body, or can we self-assess?

It depends on the category. Most Annex III systems can undergo a self-assessment by the provider, provided harmonised standards are followed. However, systems in categories such as biometrics and critical infrastructure in certain configurations require assessment by external notified bodies. We help determine which conformity procedure applies to each system.

How does the AI Act interact with the GDPR?

The AI Act and GDPR are complementary and frequently overlap, particularly when AI systems process personal data. The AI Act's fundamental rights impact assessment must be coordinated with the GDPR's data protection impact assessment (DPIA). Our team integrates both frameworks into a single process, avoiding duplication and ensuring a coherent compliance structure.
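The 10^25 FLOP systemic-risk threshold for GPAI models can be sanity-checked with a back-of-envelope calculation. The sketch below uses the common "roughly 6 FLOPs per parameter per training token" heuristic; that estimation rule is an industry convention we are assuming for illustration, not a method defined in the Regulation.

```python
# Rough check against the AI Act's systemic-risk presumption for GPAI models:
# cumulative training compute above 10^25 FLOPs (Article 51).

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Heuristic training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets the 10^25 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 70e9 * 15e12 = 6.3e24 FLOPs, just below the threshold.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}", crosses_threshold(70e9, 15e12))  # 6.30e+24 False
```

The margin matters: scaling either the parameter count or the token count by roughly 60% pushes such a model over the line, which is why providers near the threshold should document their compute accounting carefully.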
First step

Start with a free diagnostic

Our team of specialists, with deep knowledge of the Spanish and European market, will guide you from day one.


25+
years experience
5
offices in Spain
500+
clients served

Request your diagnostic

We respond within 4 business hours

Or call us directly: +34 910 917 811
