Regulation (EU) 2024/1689 of the European Parliament and of the Council, of 13 June 2024, laying down harmonised rules on artificial intelligence — the AI Act — was published in the Official Journal of the European Union on 12 July 2024 (OJ L, 2024/1689). It is the world's first comprehensive regulatory framework for artificial intelligence and the result of more than three years of legislative negotiations since the European Commission published its proposal on 21 April 2021.
Why the AI Act Is a Regulatory Turning Point
The AI Act takes a radically different approach from most previous technology regulation: rather than regulating the technology itself, it regulates the risk that AI systems generate based on their use. This risk-based approach means that the level of obligations depends not on whether a system uses machine learning, computer vision or natural language processing, but on the context and purpose in which it is used.
The other defining characteristic of the AI Act is its extraterritorial application: the Regulation applies to providers established in the EU and to those established outside the EU when their AI systems are placed on the market in the EU, when their systems are used in the EU, or when the outputs of their systems are used in the EU. Consequently, US, Asian and other technology companies operating in the European market are subject to the AI Act on the same terms as European companies.
Structure and Architecture of the Regulation
The AI Act comprises 113 articles and 13 annexes, organised into thirteen chapters. Its structure reflects the architecture of the risk classification system:
Chapter I: General provisions, definitions and scope.
Chapter II (Article 5): Prohibited AI practices. Establishes the list of AI uses that cannot be carried out in the EU under any circumstances.
Chapters III and IV: High-risk systems and transparency. Chapter III establishes the obligations for providers and deployers of high-risk systems, including systems that are safety components of products covered by EU harmonisation legislation (medical devices, machinery, aircraft, etc., listed in Annex I); Chapter IV lays down transparency obligations for certain AI systems, such as chatbots and systems generating synthetic content.
Chapter V: General-purpose AI models (GPAI). Regulates foundation models such as GPT-4, Gemini, Claude and Llama, with differentiated obligations for standard GPAI models and those presenting systemic risks.
Chapter VII: Governance. Establishes the supervisory structure: the European Commission's AI Office and the national AI supervisory authorities, coordinated through the European AI Board.
Chapter XII (Article 99): Penalties.
The Spanish AI Supervisory Agency (AESIA)
Spain was the first Member State to create a specific AI supervisory authority: the Spanish Agency for the Supervision of Artificial Intelligence (Agencia Española de Supervisión de la Inteligencia Artificial, AESIA), established by Royal Decree 729/2023 of 22 August, headquartered in A Coruña. The AESIA acts as the national competent authority for the application of the AI Act in Spain and is the contact point with the European Commission’s AI Office.
The AESIA has powers to investigate potential breaches of the AI Act, impose administrative penalties, register high-risk AI systems developed or commercialised in Spain, and participate in the cooperation mechanisms of the European AI Board.
General-Purpose AI Models (GPAI): Obligations from August 2025
Chapter V of the AI Act introduces a specific regime for providers of general-purpose AI models, applicable from 2 August 2025. GPAI models are AI models trained on large volumes of data that can perform a wide variety of tasks (text, image, code and audio generation, reasoning) and that can be integrated into downstream AI systems.
All GPAI model providers must: (i) prepare and maintain up-to-date technical documentation; (ii) provide to AI system providers integrating their model the information and technical documentation necessary for those providers to fulfil their own obligations; (iii) implement a policy of respect for copyright legislation; and (iv) publish a sufficiently detailed summary of the training content used.
Providers of GPAI models presenting systemic risks — defined as models whose cumulative training compute exceeds 10^25 floating-point operations (FLOPs) — have additional obligations: model evaluation, assessment and mitigation of systemic risks, reporting of serious incidents to the Commission, and adequate cybersecurity measures.
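As a rough illustration, the systemic-risk presumption reduces to a single compute comparison. The function name and the example figures below are hypothetical, not part of the Regulation:

```python
# Presumption threshold for "systemic risk" GPAI models under the AI Act:
# cumulative training compute exceeding 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a GPAI model's cumulative training compute
    exceeds the AI Act's systemic-risk presumption threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# Hypothetical examples: a model trained with ~5 x 10^25 FLOPs would be
# presumed to present systemic risk; one trained with 3 x 10^24 would not.
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(3e24))  # False
```

Note that the threshold is only a presumption: the Commission may also designate a model as presenting systemic risk on other grounds, so compute alone does not settle the classification.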
Compliance Roadmap for Companies Using AI
For companies using AI systems — the vast majority of businesses operating in Spain — the AI Act compliance process begins with three basic steps:
(1) Inventory of AI systems in use, covering both internally developed systems and those acquired from third parties. This inventory should include HR tools with AI features, CRM systems with automated scoring, customer service chatbots, fraud detection and any other AI-powered functionality.
(2) Classification of each system according to the AI Act risk framework: prohibited, high-risk (Annex III), limited risk, or minimal risk.
(3) Assessment of applicable obligations based on the company’s role (provider, deployer, importer or distributor) in relation to each system, and preparation of a compliance plan with defined timelines.
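The three steps above can be sketched as a minimal inventory record. The class names, fields and example entries are illustrative assumptions for one possible internal register, not a format prescribed by the Regulation:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Step 2: the AI Act's four risk tiers."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk (Annex III)"
    LIMITED = "limited"
    MINIMAL = "minimal"


class Role(Enum):
    """Step 3: the company's role with respect to each system."""
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"


@dataclass
class AISystemRecord:
    """Step 1: one inventory entry per AI system in use."""
    name: str
    vendor: str
    purpose: str
    risk_tier: RiskTier
    company_role: Role
    human_oversight: bool = False


# Hypothetical inventory entries for illustration only.
inventory = [
    AISystemRecord("CV screening module", "Acme HR SaaS",
                   "rank job applicants", RiskTier.HIGH_RISK,
                   Role.DEPLOYER, human_oversight=True),
    AISystemRecord("Support chatbot", "BotCo",
                   "answer customer queries", RiskTier.LIMITED,
                   Role.DEPLOYER),
]

# Surface the systems carrying the heaviest obligations.
high_risk = [r.name for r in inventory if r.risk_tier is RiskTier.HIGH_RISK]
print(high_risk)
```

Keeping the register in a structured form like this makes it straightforward to re-run the classification whenever a new tool is onboarded, rather than treating the inventory as a one-off document.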
For companies relying on such third-party tools, the critical step is to verify that the software vendor will fulfil its obligations as a provider under the AI Act and will pass on to the deployer (the user company) the information it needs to fulfil its own obligations of supervised use.
The most urgent compliance actions before 2 August 2026 are: conducting the AI inventory, classifying all high-risk systems, updating Data Protection Impact Assessments (DPIAs) to incorporate AI-specific risks, and establishing human oversight procedures for any high-risk system already in operation.
At BMC, our legal team can help you build a practical AI Act compliance programme from initial inventory to documentation and oversight procedures. Learn about our AI Act compliance services.
Common Mistakes in AI Act Compliance for Spanish Businesses
As the August 2026 high-risk system compliance deadline approaches, organisations are making avoidable errors that will expose them to supervisory action by AESIA.
Mistake 1: Treating the AI inventory as a one-time exercise. The AI Act’s obligations attach to systems “placed on the market or put into service” — meaning they apply to AI tools integrated into business operations, not just to products sold to third parties. Every time a company subscribes to a new SaaS tool with AI functionality (recruitment screening, CRM propensity scoring, customer service chatbots with automated decision-making), a new inventory item is created. Without a continuous governance process for AI tool onboarding — including classification against the risk tiers of Regulation (EU) 2024/1689 — organisations will progressively accumulate unclassified, potentially high-risk deployments that they cannot document to regulators.
Mistake 2: Assuming AI Act obligations fall on the AI developer, not the user. The AI Act distinguishes between providers (developers who place AI systems on the market), deployers (companies that use AI systems in their operations), and importers or distributors. A Spanish company that uses a US-developed AI recruitment tool to screen job applications is a deployer and has its own obligations: ensuring human oversight of AI-assisted hiring decisions, informing candidates when AI is used in a decision that significantly affects them, and maintaining logs of system use. These obligations exist regardless of whether the AI provider is compliant with their own obligations.
Mistake 3: Failing to update Data Protection Impact Assessments (DPIAs) to incorporate AI risks. High-risk AI systems involving personal data — and most HR, credit and security screening systems do — require both a DPIA under the GDPR (Regulation (EU) 2016/679, complemented in Spain by Organic Law 3/2018, the LOPDGDD) and a conformity assessment under the AI Act. These are distinct exercises with overlapping data inputs. Companies that have conducted DPIAs for their AI deployments but have not updated them to incorporate the AI Act risk assessment framework will have documentation gaps that a supervisory authority cross-referencing AEPD (data protection) and AESIA (AI) oversight records will immediately identify.