Legal Regulatory Update

EU AI Act Published: What Businesses Need to Know

EU AI Act (Regulation 2024/1689), published July 2024 and in force since August 2024: three key compliance dates (February 2025, August 2025, August 2026), four risk tiers, and what Spanish companies using AI in HR, marketing or credit scoring must do now.

5 min read

Regulation (EU) 2024/1689 of the European Parliament and of the Council, of 13 June 2024, laying down harmonised rules on artificial intelligence — the AI Act — was published in the Official Journal of the European Union on 12 July 2024 (OJ L, 2024/1689) and entered into force on 1 August 2024. It is the world's first comprehensive regulatory framework for artificial intelligence and the result of more than three years of legislative negotiations since the European Commission published its proposal on 21 April 2021.

Why the AI Act Is a Regulatory Turning Point

The AI Act takes a radically different approach from most previous technology regulation: rather than regulating the technology itself, it regulates the risk that AI systems generate based on their use. This risk-based approach means that the level of obligations depends not on whether a system uses machine learning, computer vision or natural language processing, but on the context and purpose in which it is used.

The other defining characteristic of the AI Act is its extraterritorial application: the Regulation applies not only to providers established in the EU but also to providers established outside it whenever their AI systems are placed on the EU market, are used in the EU, or produce outputs that are used in the EU. Consequently, US, Asian and other non-EU technology companies operating in the European market are subject to the AI Act on the same terms as European companies.
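The scope triggers described above can be sketched as a simplified decision function. This is an illustrative reduction, not legal advice: the real scope analysis in Article 2 has further branches (exemptions for military, national-security and research uses, among others) that are omitted here.

```python
def ai_act_in_scope(provider_established_in_eu: bool,
                    placed_on_eu_market: bool,
                    system_used_in_eu: bool,
                    output_used_in_eu: bool) -> bool:
    """Simplified sketch of the AI Act's territorial-scope triggers.

    Any single trigger brings the system within scope; the exemptions
    in Article 2 are deliberately left out of this sketch.
    """
    return (provider_established_in_eu
            or placed_on_eu_market
            or system_used_in_eu
            or output_used_in_eu)

# A non-EU provider whose system outputs are consumed by EU users
# is in scope even without an EU establishment or market placement:
non_eu_provider_in_scope = ai_act_in_scope(False, False, False, True)
```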

Structure and Architecture of the Regulation

The AI Act comprises 113 articles and 13 annexes, organised into thirteen chapters. Its structure reflects the architecture of the risk classification system:

Chapter I: General provisions, definitions and scope.

Chapter II (Article 5): Prohibited AI practices. Establishes the list of AI uses that are banned in the EU, subject only to narrowly defined exceptions (for example, certain law-enforcement uses of real-time remote biometric identification).

Chapter III: High-risk systems. Establishes the classification rules for high-risk systems, covering both those listed in Annex III and those that are safety components of products subject to EU harmonisation legislation (medical devices, machinery, aircraft, etc.), and sets out the obligations of providers and operators.

Chapter IV: Transparency obligations for certain AI systems, such as chatbots and generators of synthetic content.

Chapter V: General-purpose AI models (GPAI). Regulates foundation models such as GPT-4, Gemini, Claude and Llama, with differentiated obligations for standard GPAI models and those presenting systemic risks.

Chapter VII: Governance. Establishes the supervisory structure: the European Commission's AI Office and the national AI supervisory authorities.

Chapter XII (Article 99): Penalties.

The Spanish AI Supervisory Agency (AESIA)

Spain was the first Member State to create a specific AI supervisory authority: the Spanish Agency for the Supervision of Artificial Intelligence (Agencia Española de Supervisión de la Inteligencia Artificial, AESIA), established by Royal Decree 729/2023 of 22 August, headquartered in A Coruña. The AESIA acts as the national competent authority for the application of the AI Act in Spain and is the contact point with the European Commission’s AI Office.

The AESIA has powers to investigate potential breaches of the AI Act, impose administrative penalties, register high-risk AI systems developed or placed on the market in Spain, and participate in the cooperation mechanisms of the European Artificial Intelligence Board.

General-Purpose AI Models (GPAI): Obligations from August 2025

Chapter V of the AI Act introduces a specific regime for providers of general-purpose AI models, applicable from 2 August 2025. GPAI models are AI models trained on large volumes of data that can perform a wide variety of tasks (text, image, code and audio generation, reasoning) and that can be integrated into third-party AI systems.

All GPAI model providers must: (i) prepare and maintain up-to-date technical documentation; (ii) provide to AI system providers integrating their model the information and technical documentation necessary for those providers to fulfil their own obligations; (iii) put in place a policy to comply with EU copyright law; and (iv) publish a sufficiently detailed summary of the training content used.

Providers of GPAI models presenting systemic risks, presumed where the cumulative compute used for training exceeds 10^25 floating-point operations (FLOPs), have additional obligations: model evaluation, systemic risk assessment and mitigation, reporting serious incidents to the Commission, and cybersecurity measures.
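As an illustration of how the 10^25 FLOP threshold works in practice, the sketch below estimates training compute with the common "6 × parameters × training tokens" rule of thumb for dense transformer models. That approximation comes from the machine-learning literature, not from the Regulation itself, which simply counts cumulative training FLOPs.

```python
# AI Act systemic-risk presumption threshold (Art. 51): 10^25 FLOPs
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer.

    Uses the 6 * N * D approximation (forward + backward pass);
    this is a heuristic, not the Regulation's measurement method.
    """
    return 6 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    flops = estimated_training_flops(n_parameters, n_training_tokens)
    return flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Example: a hypothetical 70-billion-parameter model trained on
# 15 trillion tokens -> 6 * 70e9 * 15e12 = 6.3e24 FLOPs,
# below the 1e25 threshold.
below = presumed_systemic_risk(70e9, 15e12)
```

Note that crossing the threshold only creates a presumption of systemic risk; the Commission can also designate models on other grounds.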

Compliance Roadmap for Companies Using AI

For companies using AI systems — the vast majority of businesses operating in Spain — the AI Act compliance process begins with three basic steps:

(1) Inventory of AI systems in use, covering both internally developed systems and those acquired from third parties. This inventory should cover HR tools with AI features, CRM systems with automated scoring, customer service chatbots, fraud detection and any other AI-powered functionality.

(2) Classification of each system according to the AI Act risk framework: prohibited, high-risk (Annex III), limited risk, or minimal risk.

(3) Assessment of applicable obligations based on the company’s role (provider, importer, distributor or operator) in relation to each system, and preparation of a compliance plan with defined timelines.
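Steps (1) to (3) above can be captured in a minimal inventory structure, sketched below. The tier and role names follow the AI Act's framework; the field names, example systems and vendors are illustrative, not prescribed by the Regulation.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Art. 5 practices
    HIGH = "high"               # Annex III or harmonised products
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # no specific obligations


class Role(Enum):
    PROVIDER = "provider"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    OPERATOR = "operator"       # the company using the system


@dataclass
class AISystem:
    name: str
    vendor: str                 # in-house or third-party supplier
    purpose: str
    tier: RiskTier
    role: Role


# Step (1): inventory of AI systems in use (illustrative entries)
inventory = [
    AISystem("CV screening", "HR-suite vendor", "candidate ranking",
             RiskTier.HIGH, Role.OPERATOR),   # employment use, Annex III
    AISystem("Support chatbot", "in-house", "customer service",
             RiskTier.LIMITED, Role.PROVIDER),
]

# Steps (2)-(3): classify, then flag systems needing a compliance
# plan before the 2 August 2026 deadline for high-risk obligations
urgent = [s.name for s in inventory if s.tier is RiskTier.HIGH]
```

Even a spreadsheet with these five columns is enough to start; the point is that every system gets a tier and a role before obligations are assessed.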

For companies using third-party AI tools, the critical step is to verify that the software vendor will fulfil its obligations as a provider under the AI Act and will pass on to the operator (the user company) the information the latter needs to fulfil its own obligations regarding supervised use.

The most urgent compliance actions before 2 August 2026 are: conducting the AI inventory, classifying all high-risk systems, updating Data Protection Impact Assessments (DPIAs) to incorporate AI-specific risks, and establishing human oversight procedures for any high-risk system already in operation.

At BMC, our legal team can help you build a practical AI Act compliance programme from initial inventory to documentation and oversight procedures. Learn about our AI Act compliance services.

Want to learn more?

Let us discuss how to apply these ideas to your business.
