AI Governance: Control and Trust Over AI in Your Organisation
AI governance frameworks, ethics committees, algorithmic auditing, bias detection, and AI system registries for responsible organisations.
Why AI governance is urgent for your business
Does this apply to your business?
Do you know exactly how many AI systems your company uses and who is accountable for each one?
Is there a formal approval process before a new AI system goes into production?
Have bias tests been conducted on AI systems that influence decisions about individuals?
Do your AI systems that make or influence significant decisions have documented human oversight mechanisms?
Our AI governance framework process
Current governance diagnostic
We assess the current state of AI governance: which systems exist, who oversees them, what policies apply, how decisions on new deployments are made, and what control mechanisms exist over model behaviour in production.
Governance framework design
We define the governance structure suited to the organisation: AI ethics committee, roles and responsibilities, new system approval procedures, acceptable-use policies, and human oversight criteria for high-impact automated decisions.
Operational controls implementation
We develop the AI system inventory, algorithmic audit procedures, bias detection methodologies, incident notification protocols, and continuous monitoring mechanisms for model behaviour in production.
Responsible AI culture and training
We train technology, business, and compliance teams on responsible AI principles, regulatory obligations, and correct use of governance controls. We integrate AI governance into product development processes.
The challenge
AI is embedded in critical business processes — recruitment, credit, customer service, risk analysis — with no equivalent internal oversight structure. Risk committees cannot see the algorithms. Technology teams do not know the regulatory obligations. The result is legal and reputational exposure that grows with every new model deployed.
Our solution
We design AI governance frameworks tailored to each organisation's sector and operational reality: from the AI system inventory to ethics committees, algorithmic auditing procedures, bias detection, and human oversight policies. We build structures that work in practice, not just on paper.
AI governance refers to the internal policies, oversight structures, and accountability mechanisms an organisation puts in place to ensure that artificial intelligence systems are developed and deployed responsibly, lawfully, and in alignment with the EU AI Act (Regulation 2024/1689) and sector-specific regulations. In the EU, the AI Act requires providers and deployers of high-risk AI systems to maintain documented governance frameworks, including risk management systems and human oversight procedures. Organisations without adequate AI governance face regulatory sanctions, reputational risk, and potential liability for algorithmic decisions that affect individuals.
Our AI governance team combines legal expertise in digital regulation with practical knowledge of machine learning systems and software development processes.
The Oversight Gap
Artificial intelligence has penetrated business processes far faster than internal oversight structures have developed. Organisations make critical decisions — about hiring, credit, pricing, customer service — using models whose internal workings are not transparent to the executives who are accountable for those decisions. This gap between AI adoption and supervisory capacity is the fundamental governance problem we address.
Starting with the Inventory
An effective AI governance framework begins with knowing which systems exist. The corporate AI inventory is surprisingly incomplete in most organisations: systems purchased from external vendors are rarely formally registered, models developed by data science teams are not always documented in a way accessible to compliance functions, and AI tools embedded in third-party applications are frequently invisible to risk officers. Opacity about your own AI technology estate is the starting point for most regulatory and reputational problems.
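To make the inventory concrete, the following is a minimal sketch of what a registry entry can capture. The field names, the example vendor, and the review cadence are our own illustrative assumptions, not fields mandated by the AI Act:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Art. 5
    HIGH = "high"               # Art. 6 / Annex III
    LIMITED = "limited"         # Art. 50 transparency obligations
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in the corporate AI inventory."""
    name: str
    vendor: str               # "internal" for in-house models
    business_owner: str       # an accountable person, not just a team
    purpose: str
    risk_tier: RiskTier
    processes_personal_data: bool
    human_oversight: str      # description of the override mechanism
    last_review: date
    next_review: date

registry = [
    AISystemRecord(
        name="cv-screening-v2",               # hypothetical system
        vendor="ExampleATS Ltd",              # hypothetical vendor
        business_owner="Head of Talent",
        purpose="Ranks inbound job applications",
        risk_tier=RiskTier.HIGH,              # employment => Annex III
        processes_personal_data=True,
        human_oversight="Recruiter reviews every ranking before rejection",
        last_review=date(2025, 1, 15),
        next_review=date(2026, 1, 15),
    ),
]

# Simple completeness check: every high-risk system must name an
# accountable owner and a human oversight mechanism.
gaps = [r.name for r in registry
        if r.risk_tier is RiskTier.HIGH
        and (not r.business_owner or not r.human_oversight)]
```

Even a structure this simple answers the two diagnostic questions above: how many systems exist, and who is accountable for each one.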
The Ethics Committee as Decision Authority
The AI ethics committee is the central oversight mechanism — not a merely consultative body, but the decision point on whether a new system may be deployed, under what conditions, with what human oversight mechanisms, and with what periodic review schedule. When a regulator investigates an AI-related incident, the existence of a functioning committee with records of its deliberations is the most powerful evidence of organisational due diligence. We design these committees with clear mandates, balanced composition across legal, technology, and business functions, and procedures that do not obstruct innovation while maintaining meaningful control.
Algorithmic Auditing and Bias Detection
Algorithmic auditing and bias detection are the technical controls that give substance to the governance framework. Analysing whether a recruitment model produces systematically higher rejection rates for women or candidates from certain ethnic backgrounds is not a theoretical exercise: it is an obligation arising from the AI Act, the GDPR, and existing anti-discrimination law. We develop audit methodologies adapted to each type of system and coordinate the process with internal data teams or system providers. For organisations subject to AI Act compliance requirements, these audits also serve as evidence of the continuous post-market monitoring obligations applicable to high-risk systems.
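As an illustration of the kind of test involved, the sketch below computes a disparate impact ratio between the selection rates of two groups. The 0.8 threshold follows the well-known "four-fifths" rule of thumb from employment-testing practice, and the outcome data are invented for the example:

```python
def selection_rate(outcomes):
    """Share of positive outcomes (e.g. 'advanced to interview')."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.

    Values below ~0.8 are a common red flag (the 'four-fifths rule');
    a value of 1.0 means both groups are selected at the same rate."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi else 1.0

# Invented outcomes for illustration: 1 = advanced, 0 = rejected
men   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # 80% selection rate
women = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # 40% selection rate

ratio = disparate_impact_ratio(men, women)   # 0.4 / 0.8 = 0.5
flag = ratio < 0.8                           # well below the threshold
```

A real audit goes far beyond one ratio on one attribute (intersectional groups, confidence intervals, per-outcome metrics), but the structure — a defined metric, a defined threshold, a documented result — is the same.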
Responsible AI Policies: Beyond the Legal Minimum
Responsible AI policies articulate the organisation’s ethical commitments and operational rules for AI deployment — going beyond the minimum required by the AI Act to address principles of fairness, explainability, human dignity, and privacy protection across all AI use, not only in high-risk systems. Our policy development process begins with the organisation’s existing values framework and builds an AI policy architecture that is coherent, defensible, and genuinely embedded in technology and product processes rather than housed in a compliance document that nobody reads.
The SDLC as the Governance Control Point
The most effective point to implement AI governance controls is within the software development life cycle (SDLC) — before systems reach production. We integrate governance checkpoints into your development and procurement processes: a mandatory governance review for any new AI system, a bias and fairness assessment as part of the testing phase, and a human oversight design requirement before deployment. The AI Act compliance conformity assessments for high-risk systems are substantially easier when the SDLC already captures the required documentation at each stage.
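A governance checkpoint of this kind can be expressed as a simple release gate in the deployment pipeline. The artefact names and the gating logic below are illustrative assumptions, not a prescribed AI Act checklist:

```python
# Artefacts checked before a model is promoted to production.
# Names and descriptions are illustrative, not a statutory list.
REQUIRED_ARTEFACTS = {
    "governance_review": "Ethics committee sign-off recorded",
    "bias_assessment": "Fairness report from the testing phase",
    "oversight_design": "Documented human override mechanism",
    "technical_file": "AI Act technical documentation (high-risk only)",
}

def release_gate(artefacts: dict, high_risk: bool):
    """Return (approved, missing): block deployment if any required
    artefact is absent. The technical file is only required for
    high-risk systems in this sketch."""
    required = [k for k in REQUIRED_ARTEFACTS
                if high_risk or k != "technical_file"]
    missing = [k for k in required if not artefacts.get(k)]
    return (not missing, missing)

ok, missing = release_gate(
    {"governance_review": True, "bias_assessment": True,
     "oversight_design": False},
    high_risk=True,
)
# ok is False: "oversight_design" and "technical_file" are missing.
```

Wired into a CI/CD pipeline, a gate like this makes the governance review a blocking step rather than a parallel paper exercise — which is also what makes the conformity assessment documentation accumulate automatically.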
Incident Management for AI Systems
AI systems fail in distinctive ways: they degrade gradually as the data distribution shifts from training data, they produce unfair outcomes for demographic subgroups underrepresented in training, and they can be adversarially manipulated. Effective AI governance requires an incident management framework adapted to these failure modes — one that captures operational deviations, triggers review when fairness metrics fall below defined thresholds, and escalates to the ethics committee when necessary. We design these frameworks drawing on incident reporting protocols aligned with NIS2 requirements for organisations in critical sectors.
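As a sketch of how such escalation thresholds might be encoded, the example below maps observed fairness metrics to governance actions. The metric names, threshold values, and action labels are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FairnessThreshold:
    metric: str
    floor: float       # below this value, trigger an internal review
    critical: float    # below this value, escalate to the ethics committee

# Illustrative thresholds — a real framework sets these per system.
THRESHOLDS = [
    FairnessThreshold("disparate_impact_ratio", floor=0.85, critical=0.80),
    FairnessThreshold("subgroup_recall_ratio", floor=0.90, critical=0.85),
]

def evaluate(observed: dict):
    """Return (metric, action) pairs for every breached threshold."""
    actions = []
    for t in THRESHOLDS:
        value = observed.get(t.metric)
        if value is None:
            # A metric that stops being reported is itself an incident.
            actions.append((t.metric, "missing-metric"))
        elif value < t.critical:
            actions.append((t.metric, "escalate-to-ethics-committee"))
        elif value < t.floor:
            actions.append((t.metric, "trigger-review"))
    return actions

actions = evaluate({"disparate_impact_ratio": 0.78,
                    "subgroup_recall_ratio": 0.92})
# The first metric breaches its critical threshold; the second is healthy.
```

The point of the sketch is the shape of the framework: defined metrics, defined thresholds, and a deterministic mapping from breach to action, so that escalation does not depend on someone noticing.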
Board Accountability and Governance Documentation
The AI Act imposes explicit governance accountability on senior management for high-risk AI systems. Directors bear personal responsibility for ensuring the governance framework is adequate and operational. This accountability is evidenced — or refuted — by documentation: committee minutes, risk assessments, governance decisions, and audit trails. We design governance documentation that creates a clear, auditable record of how AI risks are identified, assessed, and managed. The compliance risk mapping function provides the broader regulatory context within which AI governance sits alongside GDPR, NIS2, and sector-specific obligations.
AI Governance as a Commercial Asset
Robust AI governance is increasingly a prerequisite in commercial relationships. In financial services, healthcare, and professional services, large institutional clients and corporate buyers conduct due diligence on their suppliers’ AI systems as part of third-party risk management. An organisation with a mature governance framework, an up-to-date inventory, and documented AI policies holds a significant advantage in these evaluations over competitors who cannot demonstrate control over their own systems. For companies supplying AI-enabled services to large corporate buyers or public sector clients, formal AI governance is rapidly moving from a differentiating capability to a contractual requirement.
Regulatory Framework: EU AI Act and Spanish Implementation
The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and applies in a phased manner through 2027. The regulatory obligations differ significantly depending on the risk classification of the AI system:
Prohibited AI practices (Art. 5 AI Act): biometric categorisation based on sensitive characteristics, social scoring, certain real-time remote biometric identification, subliminal manipulation, and AI exploiting vulnerabilities of protected groups. The prohibitions apply from 2 February 2025.
High-risk AI systems (Art. 6 and Annex III AI Act): systems used in critical infrastructure, education, employment and workforce management, access to essential services, law enforcement, migration, and administration of justice. High-risk systems must comply with conformity assessment requirements, maintain technical documentation, implement risk management systems, ensure human oversight, and register in the EU AI systems database. Compliance obligations apply from 2 August 2026.
Limited-risk AI systems (Art. 50 AI Act): chatbots, deepfake generators, and emotion recognition systems are subject to transparency obligations — users must be informed they are interacting with an AI.
General-purpose AI models (GPAI, Art. 51-56 AI Act): providers of large language models and other foundation models have their own set of obligations, including documentation, evaluation against benchmarks, and — for models with systemic risk — adversarial testing and incident reporting.
Spanish implementation of the AI Act is coordinated by the newly created AESIA (Agencia Española de Supervisión de la Inteligencia Artificial), which has supervisory jurisdiction over AI systems used or placed on the market in Spain and serves as the Spanish competent authority under the AI Act.
Sectors Particularly Affected
Financial services: AI systems used for credit scoring, anti-fraud, anti-money laundering, and insurance risk assessment are classified as high-risk under Annex III AI Act (access to essential services). Banks, insurers, and FinTechs face the most immediate conformity assessment obligations.
Human resources and recruitment: AI systems that assist in CV screening, candidate ranking, interview assessment, or performance evaluation are high-risk under Annex III AI Act (employment and workforce management). HR departments using AI-powered tools — including those embedded in major ATS platforms — must ensure the tools comply with AI Act requirements or risk being classified as deployers of non-compliant high-risk systems.
Healthcare: AI-assisted diagnostic tools are classified as medical devices under the MDR/IVDR and concurrently as high-risk AI systems. The conformity assessment requirements overlap, requiring careful coordination between AI Act and medical device regulatory tracks.
Technology and software companies: companies that develop AI-enabled products for sale or licensing to businesses in the EU are classified as AI Act providers and bear the primary compliance burden for conformity assessment and technical documentation of any high-risk systems.
Company Size Segmentation
Startups and scale-ups developing AI products face the most significant compliance burden relative to their size: the AI Act’s provider obligations apply regardless of company size. We help early-stage AI companies integrate compliance into their product development process (SDLC governance checkpoints, technical documentation templates, conformity assessment procedures) at a cost proportionate to their stage.
SMEs deploying third-party AI tools are classified as deployers rather than providers under the AI Act. Their primary obligations relate to ensuring that high-risk tools from external vendors are used with appropriate human oversight and that AI literacy training is provided to employees using these systems (Art. 4 AI Act).
Corporate groups with AI systems across multiple business units require enterprise-wide AI inventories, group-level AI ethics committees with business-unit representation, and coordinated conformity assessment programmes. For groups subject to CSRD reporting, AI governance is a disclosed element of the sustainability report under ESRS G1.
Common Mistakes We Fix
- Assuming the AI Act only applies to AI companies. Any company that deploys an AI system in its operations — even a commercially available third-party tool — has AI Act obligations as a deployer. HR departments using AI-powered recruitment tools, marketing teams using AI-driven personalisation, and finance functions using AI fraud detection systems are all within scope.
- Building governance frameworks around the current AI tools, not the AI strategy. Companies that design governance frameworks around their current AI estate often find them obsolete within 18 months as AI adoption accelerates. Governance must be designed for the trajectory of adoption, not the current state.
- Treating bias detection as a one-time exercise. AI model performance drifts over time as the data distribution shifts away from the training distribution. A bias audit conducted at deployment does not guarantee compliance 12 months later. Continuous monitoring with defined fairness metrics and alert thresholds is required.
- Not documenting governance decisions. The most expensive governance failure is an undocumented good decision. When a regulator investigates an AI-related incident, the absence of documented evidence that the system was reviewed, assessed, and approved through a defined governance process is treated as evidence of inadequate governance — regardless of whether the actual decisions were reasonable.
- Ignoring the GPAI provisions for companies using foundation models. Companies that fine-tune or deploy general-purpose AI models (including large language models from major providers) may have obligations as GPAI deployers or, if they significantly modify the model, as GPAI providers. The regulatory boundary between deploying and providing is not always clear and requires case-by-case analysis.
AI Governance and GDPR: the Intersection
AI systems that process personal data — which includes virtually all AI systems used in HR, marketing, customer service, fraud detection, and credit scoring — must comply with both the EU AI Act and the GDPR simultaneously. The intersection generates specific obligations:
Data Protection Impact Assessments (DPIAs): GDPR Article 35 requires DPIAs for processing that is likely to result in high risk to individuals’ rights and freedoms. Automated decision-making systems and profiling systems are specifically listed as DPIA-triggering activities. Where an AI system is also a high-risk system under the AI Act, the DPIA and the AI Act conformity assessment requirements significantly overlap, and we design a combined assessment process that satisfies both simultaneously.
Purpose limitation and data minimisation: AI systems trained on personal data must respect the purpose limitation principle — the model cannot be used for purposes incompatible with the original collection purpose. Retraining a model on historical HR data for a new use case (for example, using a performance evaluation model to inform redundancy decisions) requires a new lawful basis assessment and DPIA.
Right to explanation for automated decisions: GDPR Article 22 gives individuals the right not to be subject to solely automated decisions that have significant effects, and the right to request a human review. For AI systems that generate automated recommendations (even if a human technically approves them), the substantive degree of human review is assessed against whether the human override is genuinely exercised or is merely perfunctory.
We design GDPR-compliant data governance frameworks for AI systems that satisfy both the AI Act and the GDPR, with purpose limitation policies, DPIA templates, and human oversight procedures that are substantive rather than formal.
AI Literacy: the Art. 4 AI Act Obligation
Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure that their staff and any third parties they engage to operate AI systems on their behalf have sufficient AI literacy — the skills, knowledge, and understanding to make informed use of AI systems, taking into account the intended use and the risks. This obligation applies regardless of the risk classification of the AI systems used.
In practice, AI literacy training must address: the technical capabilities and limitations of the AI systems the organisation uses, the categories of risk and error the systems are designed to manage (and their known failure modes), the human oversight responsibilities assigned to specific roles, and the escalation procedure when a system produces an unexpected or concerning output. We design AI literacy training programmes adapted to the different levels of AI exposure across the organisation — from the C-suite to the front-line employees who interact with AI outputs daily.
Geographic Coverage and AESIA
The AESIA (Agencia Española de Supervisión de la Inteligencia Artificial) serves as Spain’s national competent authority under the AI Act and coordinates with the European AI Office for cross-border cases. AESIA is headquartered in A Coruña (Galicia) but has jurisdiction over AI systems deployed or placed on the market throughout Spain. We advise companies across all Spanish territories on AI Act compliance under AESIA’s supervisory framework, from initial conformity assessment through to ongoing monitoring and incident reporting obligations.
Conformity Assessment for High-Risk AI Systems
High-risk AI systems under Annex III of the AI Act are subject to a conformity assessment before they can be placed on the market or put into service. For most high-risk systems, the conformity assessment is conducted by the provider as an internal self-assessment (not a third-party certification), documented in a technical file and accompanied by an EU Declaration of Conformity. For AI systems that are also safety components of products covered by specific EU harmonised legislation (machinery, medical devices, vehicles), a third-party conformity assessment by a notified body may be required.
The conformity assessment process requires the provider (with the deployer’s cooperation for deployed systems) to produce:
- A technical description of the AI system (training data, architecture, intended purpose, and foreseeable misuse scenarios).
- A risk management system (record of the risks identified, the risk mitigation measures applied, and the residual risks accepted).
- A data governance statement covering the data sources, data quality controls, and the bias assessment methodology applied.
- A human oversight design: the specific controls that allow human operators to monitor, intervene in, and override the system’s outputs.
- A post-market monitoring plan: the metrics, thresholds, and escalation procedures for ongoing performance and fairness monitoring.
We design and document conformity assessment processes for clients developing or deploying high-risk AI systems, calibrated to the specific risk profile of the system and the applicable requirements under Annex III.
AI Governance and the CSRD Reporting Obligation
Companies subject to the Corporate Sustainability Reporting Directive (CSRD) must disclose material information about their AI governance practices under the European Sustainability Reporting Standards (ESRS). The relevant standard is ESRS G1 (Business Conduct), which covers governance structures and policies for managing AI-related risks, including bias, algorithmic discrimination, and data ethics.
In practice, the AI governance framework established under the AI Act provides the evidentiary foundation for the CSRD disclosure: the AI inventory, the conformity assessments, the ethics committee governance, and the AI literacy training programme are all disclosable elements under ESRS G1. Companies that build a substantive AI governance framework for AI Act compliance — rather than a purely formal one — generate sustainability reporting value as a co-benefit.
We coordinate the AI governance framework design with the CSRD reporting team (where one exists) to ensure that the governance structures, the documented decisions, and the monitoring data are captured in a format that feeds directly into the annual sustainability report.
How We Work
Our AI governance practice combines legal advisers with technical expertise in AI system auditing and data governance. An engagement typically follows three phases:
Phase 1 — AI inventory and risk classification: mapping all AI systems in use across the organisation (including third-party tools embedded in existing software), classifying each against the AI Act risk tiers, and identifying the compliance obligations attached to each.
Phase 2 — Governance framework design: building the AI ethics committee terms of reference, the conformity assessment templates, the incident reporting procedure, and the AI literacy training programme — designed for the organisation’s specific AI estate and adoption trajectory.
Phase 3 — Ongoing monitoring: quarterly conformity reviews, annual bias audits for high-risk systems, and AESIA incident reporting support where required.
Our fixed-fee AI compliance audit provides organisations with a complete AI inventory, risk classification, and gap analysis within four weeks of instruction — the starting point for any AI governance programme.
Real results in AI governance
“We had six AI models in production — some purchased, some built in-house — and nobody had a complete picture of what they did or how they were overseen. BMC designed the governance committee, created the formal inventory, and established the audit procedures we now apply before any new deployment.”
Experienced team with local insight and international reach
What our AI governance service includes
AI system inventory and registry
Development of the corporate AI inventory: identification, risk classification, assignment of internal owners, and registry maintenance in line with AI Act requirements.
AI ethics committee and governance structure
Design of the AI ethics committee: mandate, composition, new system approval procedures, evaluation criteria, and review frequency for production systems.
Algorithmic auditing and bias detection
Methodology and execution of algorithmic audits: fairness analysis, demographic bias testing, training data review, and mitigation recommendations for critical systems.
Responsible AI policies
Drafting of the internal AI policy suite: acceptable use, mandatory human oversight, algorithmic incident management, deployment and review criteria, and transparency policy toward affected users.
Training and SDLC integration
Training for technology, product, and compliance teams on responsible AI governance, and integration of governance controls into the software development life cycle.
Results that speak for themselves
Criminal Compliance Spain: Construction Group Case
Criminal compliance program implemented in 6 months, whistleblower channel operational, AENOR certification obtained, and prosecution risk effectively mitigated.
AML compliance program for a real estate development group
SEPBLAC inspection passed with minor observations only, zero sanctions. Full AML program operational within 90 days.
GDPR Healthcare Spain: Compliance Case Study
AEPD investigation closed with no sanction. Full GDPR compliance achieved across all group centres within 6 months.
Reference guides
Post-Brexit: your British company operating in Spain with the right structure
post-Brexit advisory for UK companies operating in Spain: entity structuring, customs and VAT, work permits for British nationals, UK-Spain tax treaty optimisation and data protection compliance.
View guide
AML compliance in Spain 2026: what your business must know about anti-money laundering regulation
Spain AML compliance 2026: SEPBLAC obligations, risk-based approach, PBC manual, UBO verification, and suspicious transaction reporting. Expert service from BMC.
View guide
Comprehensive legal services for businesses
Comprehensive legal advisory for businesses: commercial, employment, contracts, regulatory compliance, and dispute resolution. A dedicated legal team to protect your company.
View guide
Buy property in Spain with confidence — and without the horror stories
Buying property in Spain 2026: NIE, conveyancing, ITP tax, mortgage advice, and due diligence for foreign buyers. Step-by-step guide from BMC property lawyers.
View guide
The collective agreement that governs your workforce: understand it and negotiate from strength
Spain collective bargaining guide: union negotiation obligations, ERE/ERTE triggers, works council rights, agreement registration, and how BMC protects employer interests.
View guide
Your commercial lease agreement: get the clauses right before you sign
Spain commercial lease guide: LAU legal framework, rent review clauses, break options, guarantee structures, and key negotiation points for tenants and landlords.
View guide
Analysis and perspectives
Frequently asked questions about AI governance
Start with a free diagnostic
Our team of specialists, with deep knowledge of the Spanish and European market, will guide you from day one.
Request your diagnostic
You may also be interested in
Enterprise Risk Management
COSO ERM framework: risk appetite, risk registers, KRIs, board risk reporting, and integration of operational, strategic, financial, and compliance risk.
Learn more
EU AI Act Compliance
Full compliance with the EU Artificial Intelligence Act: risk classification, conformity assessments, transparency obligations, and prohibited practice audits.
Learn more
Compliance Risk Mapping
Comprehensive compliance risk mapping: regulatory obligation register, risk heat maps, multi-regulatory gap analysis (GDPR, NIS2, AI Act, AML), and regulatory change management.
Learn more
Data Protection & Privacy
GDPR and LOPDGDD compliance, outsourced DPO, and comprehensive privacy management for businesses.
Learn more
DORA Compliance (Digital Operational Resilience)
Full implementation of the DORA framework (Regulation 2022/2554) for financial entities: ICT risk management, incident reporting, resilience testing, and ICT third-party risk.
Learn more
Virtual CISO
Outsourced Chief Information Security Officer for SMEs: strategic cybersecurity leadership, governance, and regulatory compliance without the cost of a full-time executive.
Learn more
Key terms
EU AI Act
The EU Artificial Intelligence Act (Regulation EU 2024/1689) is the world's first comprehensive…
Read definition
CISO (Chief Information Security Officer)
A Chief Information Security Officer (CISO) is the senior executive responsible for an…
Read definition
Data Protection Officer (DPO)
A Data Protection Officer (DPO) is a designated individual responsible for overseeing an…
Read definition
ISO 27001 (Information Security Management System)
ISO/IEC 27001 is the internationally recognised standard for Information Security Management Systems…
Read definition
Privacy by Design
A GDPR principle (Article 25) requiring data protection to be integrated into the design of…
Read definition
Talk to the partner in charge
Response within 24 business hours. First meeting free.