The Thinking Company

EU AI Act Board Obligations: What Directors Must Know in 2026

The EU AI Act (Regulation 2024/1689) requires organizations deploying high-risk AI systems in Europe to have documented governance, risk management, and human oversight structures in place by 2 August 2026. While the regulation addresses “deployers” and “providers” rather than boards directly, directors bear fiduciary accountability for meeting these mandates. The five core board obligations are: maintaining an AI system inventory with risk classification, overseeing AI risk management integration, verifying human oversight mechanisms, ensuring transparency and record-keeping, and establishing AI incident response protocols. Non-compliance penalties reach EUR 35 million or 7% of global annual turnover, whichever is higher, and directors face personal liability under duty-of-care standards.

The EU AI Act’s high-risk AI system requirements take effect on 2 August 2026. For boards of organizations that deploy AI in Europe — which includes most mid-market companies using AI in HR, credit scoring, customer management, or operational decision-making — this creates a set of oversight duties that did not exist two years ago. Directors who have not prepared face regulatory penalties of up to EUR 35 million or 7% of global annual turnover and personal liability under duty-of-care standards.

Most mid-market boards have not addressed this. A 2025 PwC Annual Corporate Directors Survey found that fewer than 30% of European boards had discussed AI governance in a structured format. [Source: PwC Annual Corporate Directors Survey 2025; EU AI Act (Regulation (EU) 2024/1689)] The EU AI Act changes the calculation: AI governance is no longer a forward-looking aspiration. It is a compliance requirement with enforcement dates and financial penalties. According to the European Parliament’s impact assessment, over 85% of AI systems in the EU market are expected to fall into the “minimal risk” category, but the approximately 15% classified as high-risk include many common business applications in HR, finance, and critical infrastructure. [Source: European Parliament, EU AI Act Impact Assessment, 2024]

This article maps the specific board-level responsibilities the EU AI Act creates, the enforcement timeline directors must work against, and how different governance approaches handle regulatory preparedness. It draws on The Thinking Company’s Board AI Governance Evaluation Framework and primary analysis of Regulation (EU) 2024/1689. For context on the full AI governance framework, see the Board AI Governance Decision Framework.


What the EU AI Act Requires — Board-Level Summary

The EU AI Act uses a risk-based classification system. Not all AI is regulated equally, and boards need to understand where their organization’s AI systems fall on the spectrum. Boards seeking to understand their organization’s overall preparedness can start with an AI readiness assessment alongside this regulatory analysis.

Four risk tiers:

| Risk Level | What It Means | Board Relevance |
| --- | --- | --- |
| Unacceptable | Prohibited outright (social scoring, real-time biometric surveillance in public spaces, manipulation of vulnerable groups) | Board must confirm the organization does not operate prohibited systems |
| High-Risk | Subject to conformity assessment, risk management, human oversight, transparency, and record-keeping requirements | Primary area of board oversight — most duties concentrate here |
| Limited Risk | Transparency requirements only (e.g., disclosing AI-generated content, chatbot interactions) | Board should verify transparency policies exist |
| Minimal Risk | No specific requirements | No board action required beyond standard governance |

High-risk AI is where the weight falls. Under Annex III, high-risk systems include AI used in biometric identification, critical infrastructure management, education and vocational training, employment and worker management, access to essential services (including credit scoring), law enforcement, migration and border control, and administration of justice and democratic processes. High confidence: Most mid-market organizations deploying AI in HR screening, credit assessment, or customer risk scoring operate at least one high-risk system under this classification.
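
For boards that want a concrete starting point, the sketch below shows a first-pass triage of a single AI system against the Annex III categories. It is a minimal sketch under stated assumptions, not a classification tool: the category labels are paraphrased from Annex III, the function name is invented for illustration, and formal classification under Articles 6-7 requires legal review.

```python
# First-pass risk screen against the Annex III high-risk categories.
# Category labels are paraphrased; helper names are illustrative only.

ANNEX_III_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational_training",
    "employment_worker_management",
    "essential_services_access",      # includes credit scoring
    "law_enforcement",
    "migration_border_control",
    "justice_democratic_processes",
}

def screen_risk_tier(use_case: str, prohibited: bool = False) -> str:
    """Return a provisional EU AI Act risk tier for one AI system.

    A triage aid only: formal classification under Articles 6-7
    requires legal review of the specific system and context.
    """
    if prohibited:
        return "unacceptable"
    if use_case in ANNEX_III_CATEGORIES:
        return "high-risk"
    return "limited-or-minimal"  # transparency duties may still apply

print(screen_risk_tier("employment_worker_management"))  # high-risk
```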

The distinction between “provider” and “deployer” matters. Providers build AI systems (Articles 16-25: conformity assessments, CE marking, quality management systems). Deployers use AI systems that others have built (Articles 26-27: risk management oversight, human oversight, data governance, transparency, record-keeping). Most boards sit on the deployer side. You did not build the AI — but you are responsible for how your organization uses it. The Stanford HAI AI Index 2025 report found that 72% of enterprise AI deployments use third-party AI systems, making deployer obligations the primary regulatory concern for most boards. [Source: Stanford HAI, AI Index Report, 2025]

The deployer requirements are substantial. Article 26 requires deployers of high-risk AI systems to implement appropriate technical and organizational measures, confirm human oversight by competent individuals, monitor the system’s operation, and inform the provider of serious incidents. Article 27 requires deployers to conduct a fundamental rights impact assessment before putting certain high-risk systems into service. These are organizational mandates. They require governance structures, assigned responsibilities, and board-level AI governance oversight of compliance. [Source: EU AI Act, Articles 26-27]


The Enforcement Timeline

The EU AI Act entered into force on 1 August 2024, but its requirements phase in over two years. Two dates have already passed. Boards should understand what has already taken effect and what is approaching.

February 2, 2025 — Prohibited AI practices (Article 5)

Organizations must have ceased all prohibited AI practices. This includes social scoring systems, manipulative AI targeting vulnerable groups, untargeted scraping of facial images for facial recognition databases, and emotion recognition in workplaces and educational institutions (with limited exceptions). National supervisory authorities can now investigate and penalize prohibited AI use.

August 2, 2025 — General-purpose AI model requirements and governance rules

Requirements for general-purpose AI models (Chapter V, Articles 51-56) take effect. This covers foundation models and large language models — the technology behind tools like ChatGPT, Copilot, and Claude that many organizations now use across their operations. Governance structures required by the Act, including the EU AI Office and national competent authorities, become operational.

August 2, 2026 — High-risk AI system requirements

Articles 6 through 49 become enforceable. This is the critical date. Organizations deploying high-risk AI systems must have conformity assessments completed, risk management systems in place, human oversight mechanisms operational, transparency requirements met, and documentation sufficient for regulatory audit. National market surveillance authorities gain full enforcement powers.

What enforcement looks like: Each EU member state designates national supervisory authorities with investigation and enforcement powers. These authorities can conduct audits, request documentation, order corrective measures, and impose penalties. The penalty structure is tiered: up to EUR 35 million or 7% of global annual turnover, whichever is higher, for prohibited AI violations; up to EUR 15 million or 3% for other violations of the Regulation. For organizations operating across multiple EU member states, enforcement may come from any national authority where the AI system is deployed. As of early 2026, 22 of 27 EU member states had designated or begun designating national supervisory authorities, though enforcement capacity and approach vary significantly across jurisdictions. [Source: EU AI Act, Articles 99-101; European AI Office, National Implementation Tracker, 2026]
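
Because each tier is a “whichever is higher” comparison, the effective ceiling scales with turnover. A worked example, with a hypothetical turnover figure (the percentages and fixed caps come from Article 99; everything else is assumed):

```python
# Illustrative arithmetic for the tiered penalty ceilings (Article 99).
# The cap is the higher of a fixed amount and a share of global annual
# turnover. The turnover figure below is hypothetical.

def penalty_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum administrative fine: whichever of the two caps is higher."""
    return max(fixed_cap_eur, pct * turnover_eur)

turnover = 800_000_000  # hypothetical global annual turnover, EUR

# Prohibited-AI violations: up to EUR 35M or 7% of turnover
print(penalty_ceiling(turnover, 35_000_000, 0.07))  # EUR 56 million here

# Other violations of the Regulation: up to EUR 15M or 3% of turnover
print(penalty_ceiling(turnover, 15_000_000, 0.03))  # EUR 24 million here
```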


Five Specific Board Responsibilities

The EU AI Act does not use the phrase “board of directors” in its requirements. The mandates are addressed to deployers and providers — legal entities. But the board, as the body with ultimate oversight responsibility for organizational compliance and risk management, bears fiduciary accountability for meeting these mandates. Here is what that means in practice.

1. AI System Inventory and Classification

The requirement: Before you can comply, you must know what you have. The EU AI Act requires deployers to understand which of their AI systems fall within the Regulation’s scope and how they are classified under the risk framework (Articles 6-7, Annex III).

The board’s role here is to verify:

  • The organization maintains a complete inventory of AI systems in use — including vendor-provided tools, embedded AI components in enterprise software, and employee-adopted AI tools
  • Each system has been assessed against the EU AI Act’s risk classification criteria
  • Classification decisions are documented with reasoning sufficient for regulatory review
  • The inventory is maintained as a living document, updated when new AI systems are acquired or deployed (a minimal record structure is sketched after this list)
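
A minimal sketch of what one entry in such an inventory could hold, assuming a simple dataclass representation. Every field name is illustrative; nothing here is prescribed by the Regulation:

```python
# One illustrative AI-inventory record; adapt fields to the
# organization's asset-management conventions.

from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    owner: str                      # accountable business function
    source: str                     # "vendor" | "embedded" | "in-house" | "employee-adopted"
    use_case: str                   # business purpose, in plain language
    risk_tier: str                  # "unacceptable" | "high-risk" | "limited" | "minimal"
    classification_rationale: str   # reasoning sufficient for regulatory review
    last_reviewed: date
    annex_iii_category: str | None = None  # set when risk_tier == "high-risk"

record = AISystemRecord(
    name="CV screening tool",
    owner="HR",
    source="vendor",
    use_case="Ranks incoming job applications",
    risk_tier="high-risk",
    classification_rationale="Employment and worker management, Annex III",
    last_reviewed=date(2026, 3, 1),
    annex_iii_category="employment_worker_management",
)
```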

Most organizations cannot answer the question “What AI systems do we operate?” with confidence. Shadow AI — employees using AI tools without organizational oversight — compounds the problem. A board that cannot confirm the existence of a current, classified AI inventory cannot claim to be overseeing EU AI Act compliance. A 2025 Gartner survey found that 54% of enterprise organizations had no complete inventory of AI systems in use across their operations. [Source: Gartner, AI Governance and Risk Survey, 2025]

2. Risk Management Oversight

The requirement: Article 9 requires that high-risk AI systems operate within a risk management system that identifies and mitigates risks throughout the system’s lifecycle. For deployers, this means the AI systems the organization uses have adequate risk management — and that the organization’s own risk management processes account for AI-specific risks.

The board’s role here is to confirm:

  • AI risk is integrated into the organization’s enterprise risk management framework
  • The board receives regular reporting on AI risk — including model risk, data risk, compliance risk, and operational risk
  • Risk appetite for AI is defined at the board level, documented, and communicated to management
  • A named individual or function reports to the board (or a board committee) on AI risk management
  • Risk assessments are conducted before deploying new high-risk AI systems and reviewed periodically for existing ones

Boards that delegate AI entirely to the CTO or IT function lose visibility into AI risk. Technology teams assess technical risk (model performance, uptime, data quality) but tend to underweight organizational risks (adoption failure, ethical exposure, regulatory non-compliance) and reputational risks. Board-level risk oversight requires a broader lens than any single function provides. Boards can use the AI maturity model to benchmark their risk management practices against established governance maturity stages.

3. Human Oversight Governance

Can your board answer this question: which decisions in your organization can an AI system make autonomously, which require a human to review the output, and which require human approval before action? If not, you have an Article 14 problem.

Article 14 requires that high-risk AI systems are designed to allow effective human oversight. For deployers, Article 26(2) requires that human oversight is carried out by individuals with the necessary competence, training, and authority. Article 14 creates an organizational design requirement. Humans must have the ability to understand, interpret, and override AI system outputs.

What the board must verify:

  • Policies define which decisions can be automated, which require human review, and which require human approval (see the sketch after this list)
  • Decision authority boundaries for AI systems are documented — who can override an AI recommendation, and under what conditions
  • Staff performing human oversight roles have documented training and competence assessments
  • Human oversight is not nominal: the people reviewing AI outputs have sufficient time, information, and authority to exercise genuine judgment
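
One way to make those boundaries concrete is an explicit mapping from decision types to oversight tiers that the board can review. The sketch below assumes three tiers and hypothetical decision types; none of it is drawn from the Regulation’s text:

```python
# Hypothetical decision-authority boundaries for AI-assisted decisions.

from enum import Enum

class OversightTier(Enum):
    AUTONOMOUS = "AI may act; humans audit samples after the fact"
    HUMAN_REVIEW = "a human reviews the AI output before it takes effect"
    HUMAN_APPROVAL = "a named, authorized human must approve before action"

DECISION_AUTHORITY = {
    "product_recommendation": OversightTier.AUTONOMOUS,
    "credit_limit_change": OversightTier.HUMAN_REVIEW,
    "loan_rejection": OversightTier.HUMAN_APPROVAL,
    "candidate_shortlisting": OversightTier.HUMAN_APPROVAL,
}

def required_oversight(decision_type: str) -> OversightTier:
    # Unknown decision types default to the strictest tier.
    return DECISION_AUTHORITY.get(decision_type, OversightTier.HUMAN_APPROVAL)

print(required_oversight("loan_rejection").value)
```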

Intersection with GDPR Article 22: For organizations processing personal data in the EU, GDPR Article 22 provides individuals with the right not to be subject to solely automated decision-making that produces legal effects or similarly significant effects. Boards must verify that AI systems making decisions about individuals — credit applications, employment screening, insurance pricing — have meaningful human involvement in the decision process. [Source: GDPR, Article 22] Implementing effective human oversight requires AI change management practices that equip staff with both the competence and the organizational authority to override AI outputs.

4. Transparency and Record-Keeping

Organizations that lack structured AI governance produce scattered, incomplete documentation. When a supervisory authority requests evidence of compliance, the organization must produce it — and produce it quickly. This is the area where governance-on-paper diverges most visibly from governance-in-practice.

Articles 12-13 require that high-risk AI systems produce logs and that deployers maintain documentation sufficient to demonstrate compliance. Article 50 imposes transparency mandates for certain AI systems (chatbots, deepfakes, emotion recognition). Deployers must keep logs generated by high-risk AI systems for a period appropriate to the system’s purpose (at least six months under Article 26(6), unless other Union or national law provides otherwise), and make them available to supervisory authorities on request.
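
As a minimal sketch of how the retention floor could be enforced, assuming the six-month minimum is the binding constraint (the period appropriate to a specific system is a legal judgment and may be longer):

```python
# Retention-floor check for logs from a high-risk AI system, assuming
# a roughly six-month floor per Article 26(6). Real policies may need longer.

from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)  # roughly six months, assumed floor

def may_purge(log_created: date, today: date) -> bool:
    """A log entry may be purged only once the retention floor has passed."""
    return (today - log_created) >= MIN_RETENTION

print(may_purge(date(2026, 1, 10), date(2026, 8, 2)))  # True (~7 months old)
```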

The board’s responsibility:

  • The organization can demonstrate, through documentation, that it has fulfilled its deployer duties under the EU AI Act
  • System logs, risk assessments, conformity assessment records, and human oversight documentation are maintained in auditable form
  • Transparency requirements are implemented — users interacting with AI systems are informed when required by Article 50
  • Documentation retention policies account for EU AI Act requirements, not just existing data retention schedules
  • Records are accessible and organized for potential regulatory examination

A board that has approved an AI governance policy but cannot verify that the policy is implemented and documented remains exposed.

5. Incident Response and Reporting

An AI system that produces discriminatory hiring recommendations is an AI governance incident. It requires response protocols, expertise, and reporting channels distinct from those used for cybersecurity events. Most organizations’ incident response plans do not account for these AI-specific failure modes.

Article 26(5) requires deployers of high-risk AI systems to inform the provider and relevant authorities of serious incidents. The organization must have processes for identifying when an AI system has malfunctioned, produced harmful outcomes, or violated its intended use parameters — and protocols for response, remediation, and reporting.

What the board must verify:

  • Incident response protocols exist specifically for AI system failures — covering technical malfunction, biased or discriminatory outputs, data breaches involving AI systems, and unauthorized AI use
  • Escalation criteria define when an AI incident requires board notification (escalation logic is sketched after this list)
  • Authority to suspend or shut down an AI system is assigned to a named role, with clear criteria for when suspension is warranted
  • Post-incident review processes feed lessons back into the risk management framework
  • Reporting duties to supervisory authorities and AI system providers are understood, assigned, and tested
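
A minimal sketch of what such escalation criteria could look like as executable logic. The categories and thresholds are hypothetical placeholders a board would calibrate with counsel; only the provider and authority notification duty itself comes from the Regulation:

```python
# Hypothetical escalation routing for AI incidents.

SERIOUS_CATEGORIES = {
    "discriminatory_output",
    "harm_to_persons",
    "data_breach_via_ai",
    "prohibited_use_detected",
}

def escalation_targets(category: str, affected_persons: int) -> list[str]:
    targets = ["incident_owner"]
    if category in SERIOUS_CATEGORIES:
        # Serious incidents trigger deployer notification duties toward
        # the provider and authorities (Article 26(5)).
        targets += ["provider", "supervisory_authority", "board_committee"]
    elif affected_persons > 100:  # assumed board-notification threshold
        targets.append("board_committee")
    return targets

print(escalation_targets("discriminatory_output", 12))
```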

How Each Governance Approach Handles EU AI Act Compliance

How well-prepared an organization is depends significantly on its governance approach. The Thinking Company’s Board AI Governance Evaluation Framework scores four approaches on EU AI Act Readiness (Factor 2, weighted at 15% of the composite score). For the complete methodology, see the Best Approaches to Board AI Governance.

| Governance Approach | EU AI Act Readiness Score | Rationale |
| --- | --- | --- |
| Compliance-First | 4.5 / 5.0 | Legal teams and Big 4 regulatory practices have deep expertise in regulatory interpretation, gap analysis, and compliance program design. They produce thorough risk classification, map transparency requirements, track enforcement timelines, and build documentation frameworks. For pure regulatory compliance, this is the strongest approach. |
| Advisory-Led | 4.0 / 5.0 | Strong on translating regulatory requirements into governance frameworks the board can oversee. Connects EU AI Act mandates to board-level responsibilities and designs proportionate compliance programs. Scored below compliance-first because law firms and Big 4 regulatory practices have deeper regulatory bench strength and more granular expertise on statutory interpretation. |
| Technology-Delegated | 1.5 / 5.0 | CTOs and IT teams focus on technical compliance — system logging, audit trails, model documentation — but lack the legal and regulatory expertise to interpret mandates, classify risk levels, or advise the board on fiduciary exposure. Organizational and governance requirements are missed. |
| Ad-Hoc / Reactive | 1.0 / 5.0 | No regulatory preparation. Organizations learn about EU AI Act requirements when enforcement begins. For organizations with European operations, this represents material legal and financial risk. |

[Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0]

An honest assessment is warranted here. If EU AI Act compliance is your board’s sole governance priority, compliance-first approaches have a genuine edge. Law firms and Big 4 regulatory practices do this work well — they have done it for GDPR, for DORA, for MiFID II, and their methodology transfers. If compliance is one of several governance needs — alongside board AI literacy, strategic alignment, and organizational integration — advisory-led governance provides broader coverage.

The Thinking Company’s Board AI Governance Evaluation Framework weights board AI literacy (15%), EU AI Act readiness (15%), and organizational integration (15%) as the three most critical factors. Compliance-first governance scores 4.5 on the regulatory factor but 2.0 on literacy and 2.0 on integration. Across all 10 weighted decision factors, advisory-led governance scores highest at 4.33/5.0, compared to compliance-first at 2.93/5.0. Organizations can use an AI ROI calculator to compare the cost of governance investment against potential regulatory penalties and quantify the business case for structured oversight.
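
To make the weighting arithmetic concrete, a composite score is a weighted sum across factors. Only three of the ten factor weights appear in this article (15% each), so the sketch below lumps the remaining seven into a single placeholder with an assumed average score; the result approximates the published compliance-first figure only because of those assumptions:

```python
# Weighted composite score, with mostly assumed inputs. The framework's
# full per-factor weights and scores are published separately.

def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[factor] * w for factor, w in weights.items())

weights = {
    "board_literacy": 0.15,
    "eu_ai_act_readiness": 0.15,
    "organizational_integration": 0.15,
    "remaining_seven_factors": 0.55,  # placeholder for unpublished weights
}

compliance_first = {
    "board_literacy": 2.0,
    "eu_ai_act_readiness": 4.5,
    "organizational_integration": 2.0,
    "remaining_seven_factors": 3.0,   # assumed average across the rest
}

print(composite(compliance_first, weights))  # ~2.93 with these assumptions
```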

The question for your board: does compliance alone constitute adequate governance, or is compliance one dimension of a broader governance mandate?


D&O Liability Exposure

Directors face personal liability for governance failures. Duty-of-care standards under European corporate governance codes require that board members exercise informed judgment on matters material to the organization. As AI becomes material — and the EU AI Act makes the materiality argument explicit through its compliance mandates and penalty structure — boards that fail to govern AI risk falling below the duty-of-care standard.

The liability analysis has three components.

First, the duty of care requires that directors inform themselves about matters requiring board attention. A director who has not examined the organization’s AI risk exposure, regulatory requirements, or governance readiness may be unable to demonstrate informed decision-making. Under Polish corporate governance (KSH Articles 293 and 483), management board members and supervisory board members are liable for damages caused to the company by actions or omissions contrary to the law or the company’s articles of association — unless they demonstrate they acted without fault. Lack of awareness of AI-related mandates is increasingly difficult to characterize as faultless. [Source: KSH, Articles 293, 483]

Second, D&O insurance implications are evolving. Some policies may exclude or limit coverage for governance failures in emerging domains where the board had not established oversight structures. Boards should review their D&O coverage specifically for AI-related governance liability and discuss coverage gaps with their insurers before August 2026. A 2025 Marsh McLennan D&O market survey found that 38% of European D&O policies contained exclusions or limitations related to technology governance failures, up from 12% in 2023. [Source: Marsh McLennan, D&O Liability Trends Report, 2025]

Medium confidence: The D&O insurance market is adapting to AI governance risk, and coverage terms vary significantly across insurers and jurisdictions. Boards should seek specific legal advice on their coverage position.

Third, for financial sector organizations, DORA (Regulation (EU) 2022/2554) creates additional board-level mandates for ICT risk management that apply directly to AI systems used in financial operations. DORA makes board oversight of ICT risk explicit — financial sector boards have a regulatory mandate to oversee the technology risk that AI systems introduce. The intersection of DORA and the EU AI Act creates a double layer of board accountability for financial services organizations deploying AI. [Source: DORA, Regulation (EU) 2022/2554]


Board Action Checklist: Before August 2026

Eight actions boards should take before the high-risk AI system requirements take effect. These are sequenced roughly by priority, though several can proceed in parallel. Boards can integrate these actions into a broader AI adoption roadmap that aligns governance milestones with deployment timelines.

Foundation: Know What You Have

1. Commission an AI system inventory and classify each system by EU AI Act risk level. Request management to produce a complete inventory of AI systems the organization uses — including vendor-provided tools, embedded AI in enterprise software, and employee-adopted AI applications. For each system, determine whether it falls within the high-risk categories defined in Article 6 and Annex III. Engage legal counsel if internal classification capability is insufficient. Document the classification rationale. If management cannot produce this inventory, that itself is a governance finding.

2. Establish AI oversight at the board level. Assign AI governance responsibility to an existing board committee (risk committee or audit committee are natural fits) or establish a dedicated AI governance committee. Update the committee’s terms of reference to include AI oversight, and define the information the committee needs from management.

3. Define board AI risk reporting cadence. Establish a quarterly AI risk report to the responsible board committee. The report should cover: AI system inventory changes, risk classification updates, compliance status against EU AI Act requirements, incident reports, and human oversight effectiveness. Do not accept reporting that is purely technical — require business and regulatory context.
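
A minimal sketch of a report object covering those elements, assuming a flat summary structure; all field names are illustrative:

```python
# Illustrative quarterly AI risk report structure for the board committee.

from dataclasses import dataclass

@dataclass
class QuarterlyAIRiskReport:
    quarter: str                       # e.g. "2026-Q2"
    inventory_changes: list[str]       # systems added, retired, reclassified
    classification_updates: list[str]  # risk-tier changes, with rationale
    compliance_status: dict[str, str]  # EU AI Act duty -> "met" / "gap" / "n/a"
    incidents: list[str]               # AI incidents since the last report
    oversight_findings: str            # human-oversight effectiveness summary
    business_context: str              # regulatory and business framing, not purely technical
```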

4. Assess current compliance gaps. Commission a gap assessment comparing the organization’s current AI governance practices against the five responsibility areas outlined above (inventory, risk management, human oversight, transparency, incident response). Prioritize remediation of gaps that affect high-risk AI systems.

Readiness: Prepare the Board and the Organization

5. Develop a board AI literacy program. Board members need sufficient understanding of AI to exercise oversight. This does not mean technical training. It means structured education on what AI systems the organization operates, what risks they carry, and what questions the board should ask management. TTC’s Board Session is designed for this purpose.

6. Review D&O coverage for AI-related governance. Engage the organization’s insurance broker to assess whether current D&O policies cover AI governance failures. Identify exclusions or limitations that could leave directors exposed. Address coverage gaps before the enforcement date.

7. Establish AI incident response protocols. Develop or update incident response plans to include AI-specific failure scenarios. Define escalation criteria, shutdown authority, provider notification duties, and supervisory authority reporting procedures. Test the protocols through at least one tabletop exercise before August 2026.

8. Set a board review date. Schedule a board-level review of AI governance readiness no later than Q2 2026. This creates accountability and a forcing function for management to deliver against items 1-7.


What The Thinking Company Recommends

The EU AI Act creates specific board-level obligations with enforcement deadlines. Boards that start preparation now have time to build governance that satisfies regulators and serves the organization.

  • AI Governance Setup (EUR 10–15K): EU AI Act compliance framework with board-level oversight structures, risk classification processes, and regulatory documentation aligned to the August 2026 enforcement deadline.
  • AI Due Diligence (EUR 15–30K): Comprehensive assessment of AI systems against EU AI Act requirements, including high-risk classification, conformity gap analysis, and remediation roadmap for board review.

Learn more about our approach →

Frequently Asked Questions

Does the EU AI Act apply to my organization if we only use AI tools built by others?

Yes. The EU AI Act distinguishes between “providers” (who build AI systems) and “deployers” (who use AI systems built by others). Most organizations are deployers. Articles 26-27 impose specific obligations on deployers of high-risk AI systems, including implementing technical and organizational measures, ensuring human oversight by competent individuals, monitoring system operations, and reporting serious incidents. Using a vendor-provided AI tool does not transfer regulatory responsibility. If your organization deploys a high-risk AI system in the EU — even one purchased from a third-party provider — you bear deployer obligations. The Stanford HAI AI Index 2025 reports that 72% of enterprise AI deployments use third-party systems, making deployer obligations the primary regulatory concern for most boards. [Source: Stanford HAI, AI Index Report, 2025]

What AI systems count as “high-risk” under the EU AI Act?

High-risk AI systems are defined in Article 6 and Annex III of the Regulation. They include AI used in biometric identification, critical infrastructure management, education and vocational training, employment and worker management (including recruitment, performance evaluation, and workforce scheduling), access to essential services (credit scoring, insurance pricing), law enforcement, migration and border control, and administration of justice. For most mid-market companies, the highest-exposure categories are employment and worker management (HR screening tools, workforce optimization), access to essential services (credit scoring, customer risk assessment), and critical infrastructure management. If your organization uses AI in any of these areas within the EU, those systems are likely classified as high-risk.

What should a board do first to prepare for the EU AI Act?

The single most important first step is commissioning a complete AI system inventory with risk classification. A board cannot govern what it cannot see. Request management to catalog every AI system in use — including vendor-provided tools, embedded AI in enterprise software, and employee-adopted AI applications — and classify each against the EU AI Act’s risk framework. This inventory establishes the scope of the organization’s regulatory obligations and identifies which systems require the most urgent governance attention. Most organizations discover AI systems they did not know they were using during this exercise. If your board has not started this process, the gap to August 2026 compliance is wider than it appears.

Can existing GDPR compliance structures be used for EU AI Act compliance?

Partially. GDPR and the EU AI Act share some structural elements — both require documented risk management, both address automated decision-making, and both impose transparency obligations. Organizations with mature GDPR compliance programs have existing capabilities in data governance, impact assessments, and documentation practices that transfer to EU AI Act requirements. However, the EU AI Act introduces obligations that GDPR does not address: AI system classification, conformity assessment, technical documentation requirements, human oversight mandates specific to AI, and AI-specific incident reporting. GDPR Article 22 (automated decision-making) overlaps with EU AI Act Article 14 (human oversight), but the AI Act’s requirements are more prescriptive. Boards should treat GDPR compliance as a foundation, not a substitute, for EU AI Act readiness.


Next Steps

Most boards are not ready. For those that recognize the need to act, there are two entry points.

TTC Executive AI Board Session ($6,500 / 25,000 PLN). A structured board education and governance assessment session designed for supervisory boards and management boards of mid-market organizations. The session builds board AI literacy, maps the organization’s EU AI Act exposure, and produces a governance action plan the board can implement. This is the starting point for boards at Stage 1 or Stage 2 on The Thinking Company’s Board AI Governance Maturity Model — boards that know they need to act but have not yet established structures.

AI Governance & Risk Framework ($20,000-$50,000). For organizations that need comprehensive governance design — including committee structures, risk frameworks, compliance program architecture, and incident response protocols. This engagement builds the governance operating model that the EU AI Act requires, calibrated to the organization’s AI portfolio and regulatory exposure.

For the complete decision framework comparing board AI governance approaches across all 10 evaluation factors, see the Board AI Governance Decision Framework. For a direct comparison of advisory-led and compliance-first approaches, see Advisory-Led vs. Compliance-First AI Governance. For a ranked comparison of all four governance approaches, see Best Approaches to Board AI Governance.


The Thinking Company is an AI transformation advisory firm. We help boards and leadership teams adopt AI strategically — combining regulatory preparedness with organizational integration and board-level literacy. Our Board AI Governance Evaluation Framework is published in full at the Board Buyer’s Guide. We are transparent about our position as an advisory-led firm and address our structural bias by publishing complete scoring methodology.

Regulatory analysis in this article is based on the published text of Regulation (EU) 2024/1689 (EU AI Act) as adopted June 2024. Interpretive guidance from the EU AI Office and national supervisory authorities may affect the application of specific provisions. Organizations should seek qualified legal counsel for compliance advice specific to their situation and jurisdiction.


This article was last updated on 2026-03-11. Part of The Thinking Company’s EU AI Act Compliance content series. For a personalized assessment, contact our team.