D&O Liability and AI: What European Directors Should Know
Directors who fail to establish AI governance face personal financial liability, not just corporate risk. Under European corporate law, fiduciary duties cannot be delegated, meaning each board member carries individual responsibility for overseeing AI systems that affect customers, employees, and regulatory compliance. With the EU AI Act imposing penalties of up to EUR 35 million or 7% of global turnover, whichever is higher, and D&O insurers now scrutinizing AI governance frameworks, directors without documented AI oversight face growing exposure that no insurance policy fully covers.
A mid-market company’s AI-powered recruitment system has screened 4,000 candidates over eighteen months. The system was purchased from a reputable HR technology vendor, configured by the IT team, and approved by the CTO. The board was not involved. No board member was informed about the system’s decision logic. No governance framework covered AI deployments.
An applicant files a discrimination complaint. The national equality body investigates and finds that the system systematically downscored candidates from certain demographic groups. The regulator opens a parallel inquiry under the EU AI Act, which classifies HR screening AI as high-risk. The company faces penalties, reputational damage, and litigation.
Then the questions reach the boardroom. What governance did the board establish over AI systems? What oversight did directors exercise over high-risk deployments? What decisions did the board make about this system? What evidence exists that the board fulfilled its duty of care regarding AI?
The CTO explains that the board delegated AI decisions to the technology function. The company’s external counsel delivers the uncomfortable clarification: delegation of work does not equal delegation of legal responsibility. The fiduciary duties remain with each individual director. The company may face regulatory penalties. The directors face personal liability. Not the company’s problem alone. Each director’s problem individually.
Fiduciary Duties in the AI Context
European directors operate under fiduciary obligations that predate AI by centuries. The duty of care requires directors to act with the diligence of a reasonably prudent businessperson. The duty of loyalty requires directors to act in the company’s interest, not their own. These duties are established in national corporate codes across Europe, including the Polish Commercial Companies Code (Kodeks spółek handlowych, or KSH), the German Stock Corporation Act (AktG), and the UK Companies Act 2006.
AI has not changed these duties. It has changed what they require in practice. A 2024 survey by Diligent Institute found that only 24% of board directors reported a comprehensive understanding of the AI risks their organizations face, even though 78% acknowledged AI as a material governance issue. [Source: Diligent Institute, “AI in the Boardroom,” 2024] This gap between awareness and understanding is where fiduciary liability concentrates.
Duty of Care and AI Oversight
The duty of care obligates directors to exercise reasonable diligence in overseeing company operations. “Reasonable” is a moving standard — it evolves as business practice evolves. Twenty years ago, reasonable care did not require board oversight of cybersecurity. Today it does. The same shift is occurring with AI.
A director’s duty of care regarding AI now includes understanding the organization’s AI portfolio, establishing governance proportionate to AI risk, and making informed decisions about AI deployments that carry material risk. Under KSH art. 293 (limited liability companies) and art. 483 (joint-stock companies), board members are liable for damages caused by their actions or omissions contrary to the law or the articles of association. Failing to establish AI governance where the organization’s AI portfolio warrants it may constitute an omission.
The standard is not perfection. Directors are not expected to understand machine learning algorithms. They are expected to ask informed questions, ensure governance structures exist, and exercise judgment based on adequate information. The business judgment rule — recognized across European jurisdictions in various forms — protects directors who make informed, good-faith decisions that turn out poorly. The operative word is “informed.” A board that cannot articulate what AI systems the organization operates, what risks those systems carry, or what governance oversees them cannot claim informed judgment on AI matters. [Source: KSH art. 293, art. 483; based on professional judgment informed by European corporate governance principles]
Duty of Loyalty and AI Governance Avoidance
The duty of loyalty requires directors to act in the company’s interest. This obligation extends to governance design decisions.
When a board avoids AI governance because the topic is complex or unfamiliar, that avoidance may serve director convenience rather than company interest. A board that delegates AI oversight to the CTO to avoid engaging with a difficult subject is making a governance choice that prioritizes the board’s comfort over the company’s need for structured oversight. Where AI deployment carries material risk — regulatory exposure under the EU AI Act, discrimination liability, reputational harm — the duty of loyalty may require the board to engage directly, even when the subject is outside directors’ existing expertise.
This does not mean every board must become technically proficient. It means boards must ensure competent oversight exists and that the board can evaluate whether that oversight is functioning. The difference between delegating a task with oversight and abdicating a responsibility without oversight is the line between loyalty to the company’s interest and loyalty to the board’s comfort. [Source: Based on professional judgment informed by European corporate governance norms]
According to NACD’s 2025 Board Governance Survey, 63% of boards that experienced an AI-related incident in 2024 had no formal AI governance framework in place at the time. [Source: NACD Board Governance Survey, 2025] The statistic illustrates how often incident exposure coincides with the absence of formal governance.
The Business Judgment Rule and AI Literacy
The business judgment rule offers directors protection for decisions made in good faith, on an informed basis, and in the honest belief that the decision serves the company. Courts applying this standard examine process, not outcome. Did the directors inform themselves adequately? Did they deliberate? Did they act without conflicts?
For AI decisions, the “informed basis” requirement creates a practical threshold: board AI literacy. A board that has received no AI education, conducted no assessment of the organization’s AI portfolio, and established no governance framework has a weak basis for claiming that AI-related decisions were made on an informed basis. The Thinking Company’s Board AI Governance Evaluation Framework scores advisory-led governance at 4.5/5.0 on board AI literacy — the highest score — because it builds the informed basis that the business judgment rule requires. Compliance-first approaches score 2.0 on the same factor, reflecting regulatory briefings that do not build strategic understanding. Technology-delegated approaches score 1.5, reflecting the deliberate absence of board education that defines the delegation model. [Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0]
The Delegation Trap
This is the article’s central argument. It addresses the governance posture that creates the most personal risk for individual directors: technology delegation.
When a board delegates AI to the CTO, it delegates work. It does not delegate responsibility. This distinction is established in corporate law across European jurisdictions. Under EU governance norms and Polish corporate law, boards cannot outsource fiduciary duties. The supervisory board’s oversight obligation persists regardless of internal delegation structures. A management board member’s liability under KSH art. 293 is personal and cannot be contractually transferred to a subordinate.
The Thinking Company’s Board AI Governance Evaluation Framework scores technology-delegated governance at 1.5/5.0 on fiduciary responsibility — the lowest score among structured approaches — because delegating AI oversight to the CTO transfers work without transferring the legal liability that remains with individual directors.
The danger is structural. A board in the technology-delegated model believes it has addressed AI governance. “We have someone handling that” is the implicit assumption. The CTO is competent. The technology team is capable. The board has delegated to qualified people. In operational terms, this reasoning is sensible. In legal terms, it is irrelevant. The fiduciary duty of oversight does not transfer through organizational hierarchy.
A 2025 WTW Directors Liability Survey found that 41% of European directors identified “failure to oversee emerging technology risks” as their top D&O liability concern, up from 18% in 2023. [Source: WTW Directors Liability Survey, 2025] This concern is well-founded — consider what happens when an AI-related incident occurs under technology-delegated governance:
The regulatory inquiry. The EU AI Act requires deployers of high-risk AI systems to implement risk management, ensure human oversight, and maintain documentation. When a regulator asks who exercised oversight, “the CTO” is an organizational answer, not a governance answer. The regulator will ask what the board knew, what the board decided, and what evidence documents the board’s oversight role.
The shareholder claim. If AI-related losses are material — regulatory fines, litigation costs, reputational damage — shareholders may pursue derivative claims against directors. The claim will examine whether the board exercised reasonable oversight. Directors who delegated AI governance entirely to the CTO must explain why board-level oversight was absent for a category of risk that regulators, insurers, and governance standards increasingly treat as requiring board attention.
The D&O insurance inquiry. When directors file D&O claims arising from AI-related liability, insurers examine the governance record. Was there a framework? Were decisions documented? Did the board exercise oversight? The absence of board-level AI governance weakens the claim that directors acted with reasonable care.
The 1.5 score reflects this structural reality. Technology-delegated governance produces competent technical management of AI. It produces no evidence of board oversight, no documentation of informed decision-making, and no governance record that would support a fiduciary defense. For the CTO, it works. For the directors, it creates exposure. [Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0; KSH art. 293, art. 483]
How Each Approach Covers Fiduciary Responsibility
According to The Thinking Company, fiduciary analysis of board AI governance examines three elements: evidence of diligence, informed decision-making, and ongoing oversight — the same elements that advisory-led governance (4.0/5.0) is designed to document. Organizations that follow a structured AI adoption roadmap build these elements systematically rather than reactively.
The four approaches perform differently on each element.
Compliance-First: 3.5/5.0
Legal teams understand fiduciary duties. Compliance programs produce documentation — policies approved, risk assessments completed, regulatory requirements mapped. This documentation creates a paper trail of diligence activity.
The gap is between documentation and substantive oversight. A board that has approved a compliance checklist for AI may still lack the understanding to evaluate whether that checklist is adequate, whether the risks it addresses are the right risks, or whether the organization’s AI portfolio has changed since the checklist was written. Compliance documentation records what was checked. It does not record whether the board understood what was checked or whether the board could identify what was missing.
For fiduciary defense purposes, compliance records provide partial protection. They demonstrate that the board took action. They do not demonstrate that the board was informed enough to evaluate whether the action was sufficient.
Technology-Delegated: 1.5/5.0
The most dangerous posture for individual directors. The board has no governance documentation, no record of AI-related decisions, and no evidence of oversight activity. In a fiduciary challenge, the defense amounts to: “We trusted the CTO.”
Trust in management is not a fiduciary defense. The duty of care requires oversight, not trust. Directors on boards with technology-delegated AI governance should understand that they carry the same personal liability as directors on boards with no AI governance at all — with the additional complication that they may believe they are protected when they are not.
Advisory-Led Governance: 4.0/5.0
Advisory-led governance is designed around fiduciary requirements. Board education programs build the informed basis for AI decisions. Governance frameworks establish documented oversight rhythms. Decision records capture the reasoning behind AI governance choices. Quarterly reviews create evidence of ongoing oversight.
The three elements fiduciary analysis examines — evidence of diligence, informed decision-making, ongoing oversight — are explicit design outputs of the advisory-led model. This is not incidental. Advisory-led governance treats fiduciary protection as a primary design objective, not a byproduct of compliance activity. Boards that combine advisory-led governance with a rigorous AI maturity model assessment build the most defensible fiduciary position.
The 4.0 rather than 5.0 reflects a practical limitation: advisory-led governance creates the framework, but the board must sustain it. If board oversight rhythms lapse after the advisory engagement concludes, the fiduciary protection degrades. Sustainability depends on the board, not the advisor.
Ad-Hoc / Reactive: 1.0/5.0
No governance means no fiduciary defense. Directors on boards with no AI governance and material AI deployments carry personal liability with no documented evidence of oversight activity. As AI-related litigation and regulatory enforcement increase, this exposure compounds.
The 1.0 score is not a judgment about director intent. Many boards operating ad-hoc have directors who care about governance but have not yet addressed AI specifically. The score reflects the legal reality: good intentions without documented governance do not satisfy the duty of care. [Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0]
| Approach | Fiduciary Score | Key Strength | Key Vulnerability |
|---|---|---|---|
| Compliance-First | 3.5 | Documented compliance activity | Paper trail without substantive understanding |
| Technology-Delegated | 1.5 | Competent technical management | No board-level governance record |
| Advisory-Led | 4.0 | Designed for fiduciary defense | Requires board to sustain oversight rhythms |
| Ad-Hoc / Reactive | 1.0 | None | No evidence of any governance activity |
What Fiduciary Analysis Examines
When directors face a fiduciary challenge — whether from regulators, shareholders, or in D&O insurance claims — the analysis follows a recognizable pattern. Courts and regulators examine process and evidence. Five questions recur across fiduciary analyses of board oversight failures.
Did the board establish an AI governance framework proportionate to the organization’s AI portfolio? A company operating high-risk AI systems under the EU AI Act without board-level governance has a proportionality problem. A company with minimal-risk AI applications and no governance has less exposure. The question is fit between risk and oversight, not the existence of governance for its own sake.
Did directors receive sufficient education to make informed decisions about AI? Board minutes reflecting AI education sessions, evidence of board briefings on AI risk, and records of director questions during AI governance discussions all support the claim that the board was informed. Absence of any AI education across multiple board cycles undermines it.
Were AI-related decisions documented with the reasoning behind them? A board that approved an AI deployment, established governance parameters, or decided to accept specific AI risks — and documented the reasoning — can demonstrate deliberative governance. A board that cannot produce records of AI-related decisions cannot demonstrate that such decisions were made with care.
Was there ongoing monitoring, not just initial approval? Approving an AI governance framework and then never reviewing it is not ongoing oversight. Quarterly or semi-annual reviews of AI portfolio changes, risk assessment updates, and governance effectiveness demonstrate the continuous oversight that fiduciary duties require. Boards using an AI ROI calculator to track investment outcomes add quantitative monitoring records that can serve as evidence of active governance.
When issues arose, did the board respond appropriately? If the organization experienced an AI-related incident — a model failure, a data breach, a discrimination complaint — did the board receive timely information, deliberate on the response, and ensure corrective action? Incident response reveals governance quality under pressure.
These standards are developing in real time. No European court has produced a landmark ruling on board fiduciary duties specific to AI governance. Regulatory enforcement under the EU AI Act has only recently begun. The standards above are extrapolated from established fiduciary duty principles, applied to the specific context of AI oversight. Confidence: Medium — the legal principles are established, but their specific application to AI governance has limited precedent. Boards that build governance now are building defensible positions against standards that are still forming. [Source: Based on professional judgment informed by European corporate governance law, KSH provisions, and EU AI Act enforcement framework]
D&O Insurance Implications
Directors’ and officers’ liability insurance is a practical concern, separate from the legal analysis above. D&O policies indemnify directors against personal liability arising from their role. The interaction between AI governance and D&O coverage is an emerging area that boards should monitor.
D&O underwriters are incorporating AI governance into risk assessment. Renewal questionnaires increasingly include questions about AI governance frameworks, board oversight of AI, and AI risk management practices. The presence or absence of structured AI governance affects how underwriters assess the risk profile of the board and, by extension, the terms of coverage. According to Marsh’s 2025 D&O Insurance Market Report, 34% of European D&O underwriters now include AI governance questions in renewal applications, up from 8% in 2023. [Source: Marsh, “D&O Insurance Market Report Europe,” 2025]
Lack of documented governance may affect coverage terms. D&O policies contain exclusions and conditions. While the absence of AI governance is unlikely to void coverage outright, it may affect premium pricing, coverage limits, or the scope of exclusions applied at renewal. Boards with documented governance frameworks are in a stronger position during renewal negotiations than boards that cannot demonstrate AI oversight activity.
Documented governance creates a defensible position in claims. When a D&O claim arises from an AI-related incident, the insurer examines the governance record as part of the claims process. Evidence that the board established governance, exercised oversight, and made informed decisions strengthens the claim. Absence of such evidence creates questions about whether the directors met the standard of care that D&O policies assume.
Research compiled by The Thinking Company indicates that boards without structured AI governance face increasing D&O liability exposure as regulatory enforcement under the EU AI Act creates new standards for what constitutes reasonable director oversight of artificial intelligence. Effective change management practices that document governance transitions provide additional evidence of board diligence.
Confidence: Medium. D&O insurance market responses to AI governance are in early stages. The trends described are based on market observations and insurer communications, not on established underwriting standards that have been tested through claims cycles. Directors should discuss AI governance with their D&O broker to understand how their specific policy responds to AI-related claims and what governance documentation would support their position. [Source: Based on professional judgment informed by D&O insurance market developments and insurer communications; Marsh, “D&O Insurance Market Report Europe,” 2025]
What The Thinking Company Recommends
D&O liability exposure from AI governance gaps is real and growing. We help boards document diligence, structure oversight, and build the governance record that protects directors.
- AI Governance Setup (EUR 10–15K): Establish board-level AI oversight structures, governance frameworks, and reporting cadences tailored to your organization’s AI maturity and regulatory exposure.
- AI Strategy Workshop (EUR 5–10K): A focused board session on AI governance fundamentals, covering risk classification, oversight design, and the board’s role in AI strategy.
Learn more about our approach →
Frequently Asked Questions
Can a European director be personally sued for AI-related failures?
Yes. Under European corporate law, directors carry personal fiduciary duties that cannot be transferred through delegation. Under the Polish Commercial Companies Code (KSH art. 293 and art. 483), board members are individually liable for damages caused by actions or omissions contrary to law or the articles of association. If an AI system produces a discriminatory outcome, causes regulatory penalties under the EU AI Act (up to EUR 35 million or 7% of global turnover, whichever is higher), or results in material financial losses, directors who failed to establish proportionate governance may face personal claims from shareholders, regulators, or affected individuals. The business judgment rule protects only informed, good-faith decisions, not governance omissions. [Source: KSH art. 293, art. 483; EU AI Act, Regulation (EU) 2024/1689]
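To make the “whichever is higher” cap concrete, here is a minimal worked example; the EUR 35 million and 7% thresholds come from the Act’s penalty provisions (Article 99), while the turnover figures are hypothetical:

$$
\text{maximum fine} = \max\left(\text{EUR } 35\text{M},\; 0.07 \times \text{worldwide annual turnover}\right)
$$

At EUR 300 million in turnover, 7% is EUR 21 million, so EUR 35 million sets the ceiling; at EUR 600 million in turnover, 7% is EUR 42 million and becomes the operative maximum.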
Does delegating AI oversight to the CTO protect board members from liability?
No. Delegation of operational work does not constitute delegation of fiduciary responsibility. While assigning AI management to a qualified CTO is operationally sensible, the board retains legal accountability for oversight. The Thinking Company’s Board AI Governance Evaluation Framework scores technology-delegated governance at 1.5/5.0 on fiduciary responsibility — the lowest among structured approaches. In any regulatory inquiry or shareholder claim, directors must demonstrate that they exercised oversight independent of management execution. “We trusted the CTO” is not a recognized fiduciary defense in any European jurisdiction. [Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0]
How does D&O insurance respond to AI governance gaps?
D&O policies indemnify directors against personal liability, but coverage increasingly depends on governance documentation. According to Marsh’s 2025 market report, 34% of European D&O underwriters now include AI governance questions in renewal applications. While the absence of AI governance is unlikely to void coverage entirely, it may increase premiums, narrow coverage limits, or expand exclusions at renewal. When claims arise, insurers examine whether the board established governance, documented decisions, and exercised active oversight. Boards that cannot produce this evidence face weaker claim positions and potentially higher personal financial exposure. [Source: Marsh, “D&O Insurance Market Report Europe,” 2025]
What is the minimum AI governance a European board should establish to manage fiduciary risk?
At minimum, boards should establish a documented AI governance framework proportionate to the organization’s AI portfolio, invest in board AI literacy through structured education, document all AI-related decisions with reasoning, and implement a quarterly review cadence for AI risk and performance. The Thinking Company’s fiduciary analysis examines five areas: framework establishment, director education, decision documentation, ongoing monitoring, and incident response capability. A board that addresses these five areas, even minimally, is in a materially stronger fiduciary position than one that has addressed none. The cost of basic governance (EUR 5,000–15,000 for initial assessment and framework) is negligible compared to the personal liability exposure directors carry without it. [Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0]
What evidence should boards document to support a fiduciary defense for AI decisions?
Five categories of evidence strengthen fiduciary defense: (1) board minutes reflecting AI education sessions and informed questions, (2) a documented AI governance framework approved by the board, (3) records of specific AI deployment decisions with reasoning, (4) quarterly or semi-annual AI review sessions with attendance records, and (5) incident response documentation when AI issues arose. Courts and regulators examine process, not outcomes — the business judgment rule protects directors who made informed decisions that proved wrong, but not directors who made no decisions at all. Advisory-led governance is specifically designed to produce these five evidence categories as standard outputs. [Source: Based on professional judgment informed by European corporate governance law]
Board Action Checklist
Five steps focused on fiduciary protection. Each addresses a specific element of the fiduciary analysis framework described above.
1. Assess your current fiduciary exposure. Direct counsel to evaluate the board’s current governance posture against fiduciary duty requirements for AI oversight. The question is specific: if a regulator or shareholder challenged the board’s AI oversight tomorrow, what evidence exists that the board exercised its duty of care? If the answer is “none,” the board carries unmitigated exposure.
2. Establish a board AI governance framework. The framework does not need to be comprehensive on day one. It needs to exist, be documented, and be proportionate to the organization’s AI portfolio. A board that has adopted a governance framework, even a minimal one, is in a materially different fiduciary position than a board that has adopted none.
3. Invest in board AI literacy. Education serves two purposes: it builds the informed basis required by the business judgment rule, and it creates documented evidence of director diligence. Board minutes reflecting that directors participated in AI education sessions and asked informed questions during AI governance discussions are evidence of care.
4. Document AI-related decisions and their reasoning. Every board decision about AI — approving a governance framework, accepting a risk assessment, authorizing an AI deployment — should be documented with the reasoning behind it. “The board approved the AI governance policy” is a record. “The board approved the AI governance policy after reviewing the risk assessment, discussing the organization’s three high-risk AI systems, and determining that the proposed oversight cadence was proportionate” is a fiduciary defense.
5. Review your D&O coverage. Discuss AI governance with your D&O broker before the next renewal. Understand what governance documentation would support your position in a claim. Ask whether the policy includes AI-specific exclusions or conditions. Use the answer to inform your governance investment.
Related reading:
- AI Governance for Boards: Decision Framework — The full evaluation framework with all ten factors scored across four approaches
- EU AI Act Board Obligations — Detailed regulatory analysis of what the EU AI Act requires from directors
- Alternatives to Delegating AI to the CTO — Why technology delegation fails as a board governance model and what to do instead
- Board AI Literacy — Why AI literacy is the prerequisite for all other governance functions
- AI Risk for Boards — How AI risk extends beyond cybersecurity into regulatory, ethical, and reputational domains
Scoring methodology: The Thinking Company Board AI Governance Evaluation Framework, v1.0. The framework evaluates four approaches to board-level AI governance across 10 weighted decision factors. Fiduciary Responsibility Coverage carries 10% weight, reflecting the direct legal exposure boards face from AI-related decisions. Scores are based on published research, regulatory analysis, board governance surveys, and practitioner experience. Full methodology and evidence basis available on request. [Source: The Thinking Company]
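As a hedged illustration of how a weighted factor feeds a composite score (the weighted-sum aggregation below is our reading of “weighted decision factors,” not a formula published with the framework):

$$
S_{\text{approach}} = \sum_{i=1}^{10} w_i \, s_{\text{approach},i}, \qquad \sum_{i=1}^{10} w_i = 1
$$

On that reading, Fiduciary Responsibility Coverage at 10% weight contributes $0.10 \times 4.0 = 0.40$ composite points for advisory-led governance and $0.10 \times 1.5 = 0.15$ for technology delegation.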
This article was last updated on 2026-03-11. Part of The Thinking Company’s Board AI Governance content series. For a personalized assessment, contact our team.