**Regulation (EU) 2024/1689** of the European Parliament and of the Council, of 13 June 2024, laying down harmonised rules on artificial intelligence (AI Act), together with **Spain's draft law on the responsible use and governance of AI** and the existing criminal compliance framework, configure a new standard of care for company directors regarding the use of artificial intelligence systems.
## AI Act timeline: where we stand
| Date | Milestone |
|---|---|
| 1 August 2024 | Regulation enters into force |
| 2 February 2025 | Prohibited practices and AI literacy obligation apply |
| 2 August 2025 | Governance, supervisory authorities and GPAI obligations |
| 2 August 2026 | Full application: high-risk systems |
| 2 August 2027 | High-risk systems embedded in regulated products |
Since 2 February 2025, prohibitions on unacceptable practices and the AI literacy obligation (Art. 4) are already enforceable. Any company using AI systems in the EU must comply with these obligations today.
## The question: is the director personally liable?

### What the AI Act says
The AI Act does not establish direct personal liability for directors or executives. Sanctions are imposed on the entity (provider, deployer or distributor of the AI system):
| Infringement | Maximum fine | % of worldwide annual turnover |
|---|---|---|
| Prohibited practices (Art. 5) | €35M | 7% |
| High-risk obligations (Arts. 6–49) | €15M | 3% |
| Supply of incorrect information to authorities | €7.5M | 1.5% |

In each case the applicable cap is whichever of the two figures is higher; for SMEs and start-ups, the lower amount or percentage applies.
### What Spain’s draft law says
Spain’s draft law on the responsible use and governance of AI (Council of Ministers, 11 March 2025) transposes the AI Act’s sanctions regime into Spanish law:
- Very serious infringements: €7.5M–€35M or 2–7% of turnover.
- Serious infringements: €500K–€7.5M or 1–2%.
- Minor infringements: €6K–€500K or 0.5–1%.
For public sector entities, no fines are imposed; instead, a formal reprimand is issued naming the responsible officer, which does personally identify that individual.
Where the infringing company belongs to a corporate group, the group’s turnover is used to calculate the sanction, preventing large corporations from channelling liability through low-revenue subsidiaries.
## But liability exists through other routes
A director who fails to implement AI governance faces liability through three converging routes:
### Route 1: Duty of care and loyalty (Spanish Companies Act)
Articles 225 and 226 of the Spanish Companies Act (LSC) impose on directors:
- Duty of care: to act with the diligence of an orderly businessperson, which includes implementing the internal control systems necessary to manage the company’s risks.
- Duty of loyalty: to act in good faith and in the best interests of the company.
- Business judgment rule protection (Art. 226 LSC): this protection only covers decisions taken with sufficient information and an adequate procedure.
If the company is sanctioned with substantial fines for non-compliance with the AI Act and it is demonstrated that the board of directors had not implemented any AI governance system, the company or shareholders may bring a derivative action (Art. 238 LSC) against the directors for the damages caused.
### Route 2: Criminal compliance (Art. 31 bis CP)
Article 31 bis of the Criminal Code establishes the criminal liability of legal entities for offences committed by their legal representatives or employees where there is no organisational and management model that includes adequate supervisory and control measures to prevent offences.
AI systems can be the instrument through which corporate offences are committed:
- Discrimination (Art. 314 CP): a recruitment algorithm that discriminates on grounds of sex, age or race.
- Fraud (Art. 248 CP): an AI system that generates misleading information for customers.
- Discovery of secrets (Art. 197 CP): an AI system that accesses personal data without a legal basis.
- Intellectual property offences (Arts. 270–272 CP): training models on protected works without authorisation.
If the company does not have a crime prevention model that includes specific AI risks, directors may face criminal liability for breach of the supervisory duty under Art. 31 bis.2 CP.
### Route 3: AI literacy obligation (Art. 4 AI Act)
Article 4 of the AI Act imposes on providers and deployers the obligation to ensure that their staff have a sufficient level of AI literacy, taking into account their technical knowledge, experience, training and the context of use.
This obligation rests directly on the organisation, but its fulfilment is the responsibility of the board of directors, which must:
- Identify which AI systems are in use within the company.
- Assess the risk level of each system.
- Design and deliver appropriate training for staff who operate, supervise or make decisions based on AI systems.
- Document compliance.
AESIA (Spanish Agency for the Supervision of Artificial Intelligence), operational since 2023 and with full inspection and sanctioning powers since August 2025, may request this documentation at any time.
## CGPJ Instruction 2/2026
On 28 January 2026, the Plenary of the General Council of the Judiciary approved Instruction 2/2026 on the use of artificial intelligence systems in judicial activity (BOE-A-2026-2205).
This instruction is relevant for directors because:
- It establishes a reference framework for how Spanish courts will evaluate the use of AI.
- It requires transparency, explicability and human oversight in AI-assisted judicial decisions.
- It creates a governance precedent that courts may apply by analogy when assessing directors’ diligence in the private sector.
## Supervisory authorities in Spain
Spain’s draft law distributes supervision among multiple authorities:
| Authority | Scope |
|---|---|
| AESIA | General supervision, coordination, sandbox |
| AEPD | Biometrics, migration, personal data |
| CGPJ | Administration of justice |
| Bank of Spain | Credit scoring |
| CNMV | Capital markets |
| Directorate-General for Insurance and Pension Funds | Insurance sector |
| Central Electoral Commission | Electoral processes |
## What the diligent director must do
### Minimum AI governance plan
- AI system inventory: catalogue all AI systems in use, classifying them by risk level (unacceptable, high, limited, minimal).
- AI responsible officer: designate an AI governance officer or committee with direct access to the board.
- Internal AI policy: adopt a policy setting out the principles of use, authorised systems, internal prohibitions and procedures for approving new systems.
- Impact assessment: for high-risk systems, document the conformity assessment, training data, identified biases and mitigation measures.
- Training (AI literacy): design and deliver training programmes for staff who operate, supervise or make decisions based on AI systems.
- Human oversight: ensure that every significant AI-assisted decision has effective — not merely formal — human oversight.
- Incident procedure: establish an internal reporting channel for AI system incidents or failures, connected to the compliance system.
- Periodic audit: review the inventory, policy and training annually, updating in line with regulatory and technological developments.
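To make the first two steps of the plan concrete, the inventory and its board-level summary can be modelled as a simple record type. This is an illustrative sketch under field names of our own choosing; neither the AI Act nor the Spanish draft law prescribes any such schema:

```python
# Illustrative AI system inventory, classified by the AI Act's four risk tiers.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four-tier risk classification."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Art. 5)
    HIGH = "high"                  # Arts. 6-49 obligations apply
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One inventory entry (field names illustrative, not prescribed)."""
    name: str
    vendor: str
    purpose: str
    risk_level: RiskLevel
    owner: str                   # designated responsible officer
    last_review: date
    staff_trained: bool = False  # evidence for the Art. 4 literacy duty

def board_summary(inventory: list[AISystemRecord]) -> dict[str, int]:
    """Count systems per risk level: the headline figure a board reviews."""
    counts: dict[str, int] = {}
    for rec in inventory:
        counts[rec.risk_level.value] = counts.get(rec.risk_level.value, 0) + 1
    return counts

inventory = [
    AISystemRecord("cv-screener", "VendorX", "recruitment triage",
                   RiskLevel.HIGH, "CHRO", date(2025, 11, 1), staff_trained=True),
    AISystemRecord("chat-assistant", "VendorY", "customer support",
                   RiskLevel.LIMITED, "COO", date(2025, 9, 15)),
]
print(board_summary(inventory))  # {'high': 1, 'limited': 1}
```

Even a register this minimal gives the board documented answers to the questions AESIA can ask: which systems exist, who owns each one, what its risk tier is, and whether the staff using it have been trained.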
### Integration with the existing compliance model
The AI governance plan must not be a standalone document but must be integrated into the crime prevention model (Art. 31 bis CP) and the company’s overall compliance system:
- Update the risk map to include specific AI risks.
- Incorporate AI controls into the internal audit programme.
- Add AI as a mandatory training topic in the compliance plan.
- Include AI incidents in the whistleblowing channel (Law 2/2023).
## GDPR + AI Act interaction matrix for high-risk systems
The intersection of the GDPR and the AI Act creates a dual regulatory layer that must be managed in a coordinated way. The following matrix summarises the most relevant overlapping points for directors:
| Requirement | GDPR | AI Act (high-risk systems) | Joint action required |
|---|---|---|---|
| Impact assessment | DPIA (Art. 35 GDPR) | Conformity assessment (Arts. 9–15 AI Act) | Integrated process: one assessment covering both frameworks |
| Right to explanation | Not to be subject to automated decision-making (Art. 22 GDPR) | Mandatory human oversight (Art. 14 AI Act) | System must allow documented human review |
| Transparency | Information to data subjects on automated processing | AI interaction notice (Art. 13 AI Act) | Update legal notices and privacy policies |
| Record-keeping | Record of processing activities (Art. 30 GDPR) | Activity logs of the system (Art. 12 AI Act) | Unified register with full traceability |
| Legal basis for AI | Art. 6 GDPR (legitimate interest or consent for training) | No specific legal basis obligation in AI Act | Document legal basis for training data |
| Responsibility | Controller (Art. 4 GDPR) | Provider + deployer (AI Act definitions) | Clarify roles in contracts with AI providers |
## Conclusion: de facto liability, not de iure
The AI Act does not explicitly state that directors are personally liable. But the combined effect of:
- Substantial entity-level sanctions (up to €35M or 7% of turnover)
- Duty of care obligations under the LSC (Arts. 225–226)
- Criminal compliance (Art. 31 bis CP)
- AI literacy obligation (Art. 4 AI Act)
- AESIA with full inspection powers
- CGPJ Instruction 2/2026 as judicial precedent
creates a standard of care which, if not met, exposes the director to civil liability (derivative action, Art. 238 LSC), criminal liability (Art. 31 bis CP) and reputational damage.
The message is clear: AI governance is no longer a technical recommendation — it is an obligation flowing from the director’s duty of care.
BMC advises companies and their boards on implementing AI governance systems that comply with the AI Act and the Spanish compliance framework. Learn about our AI Act compliance services.