AI Governance for Financial Services Boards: A Director’s Decision Framework

AI governance for financial services boards is the structured system of oversight, policies, and accountability mechanisms that directors use to ensure their institution’s AI systems comply with the EU AI Act, DORA, and national regulatory requirements — while managing the personal liability exposure that comes with board-level responsibility for high-risk AI in credit, underwriting, and employment decisions. Directors who fail to establish adequate governance face regulatory enforcement, D&O coverage restrictions, and reputational damage that no retroactive fix can undo.

Board members of banks and insurance companies face a distinct AI governance challenge. The EU AI Act classifies most AI systems used in credit, underwriting, and employment decisions as high-risk — triggering mandatory governance requirements that fall, ultimately, on the board. DORA extends technology resilience obligations to AI systems. National regulators have issued AI-specific guidance creating expectations for board-level oversight.

This is not an IT matter to delegate. Directors who fail to establish adequate AI governance face regulatory enforcement, reputational damage, and personal liability exposure. The D&O insurance market has begun pricing AI governance failures into coverage terms. A 2024 Willis Towers Watson survey found that 72% of D&O insurers now include AI governance questions in underwriting processes for financial institutions [Source: Willis Towers Watson, “D&O Insurance Market Review,” 2024].

This guide applies The Thinking Company’s Board AI Governance Evaluation Framework to financial services, with factor weights adjusted to reflect the sector’s regulatory intensity and board accountability requirements.


Why Financial Services Boards Must Govern AI Directly

Regulatory Requirements Create Board Accountability

EU AI Act (Regulation 2024/1689):

For high-risk AI systems — which include credit scoring, creditworthiness assessment, and insurance underwriting — the AI Act requires:

  • Risk management systems that are board-overseen, not just IT-managed
  • Data governance practices that address training data bias
  • Transparency and explainability that boards can verify
  • Human oversight mechanisms that boards approve
  • Accuracy, robustness, and cybersecurity monitoring

Article 9 specifically requires that risk management be “integrated into the provider’s quality management system.” For financial institutions using AI in high-risk contexts, this means board-level quality management accountability. Penalties for the most serious infringements reach up to 35 million EUR or 7% of global annual turnover [Source: EU AI Act, Regulation 2024/1689]. For a detailed breakdown of board obligations, see the EU AI Act board obligations guide.

DORA (Digital Operational Resilience Act):

DORA’s ICT risk management requirements extend to AI systems. Article 5 requires the management body to “approve, oversee and be responsible for the implementation of all arrangements related to the ICT risk management framework.” The EBA’s 2024 supervisory review found that 38% of significant institutions had material gaps in ICT risk governance that would extend to AI systems [Source: EBA, “Supervisory Review and Evaluation Process Report,” 2024].

For AI systems, this means boards must:

  • Approve the AI risk management framework
  • Receive regular AI risk reports
  • Ensure adequate AI incident response capability
  • Oversee third-party AI service providers

National Regulator Expectations:

KNF (Poland), BaFin (Germany), and other national competent authorities have issued model risk management and AI guidance that creates de facto board accountability:

  • KNF Recommendation D (IT and ICT security) extends to AI systems
  • BaFin’s MaRisk circular requires model risk management that covers ML models
  • The FCA has signaled enhanced AI accountability expectations

Fiduciary Liability Is Real

Directors face personal liability for AI governance failures:

Duty of Care: Directors must exercise reasonable diligence in overseeing AI systems that create material risk. “I didn’t understand the AI” is not a defense when AI systems make credit decisions affecting thousands of customers.

Duty of Loyalty: Conflicts of interest in AI vendor selection — or failure to disclose AI risks to shareholders — can create liability. For the full D&O liability analysis, see the dedicated factor guide.

D&O Exposure: Insurers are adding AI-specific questions to D&O applications. Coverage exclusions for “failure to implement adequate AI governance” are appearing in policy language. Marsh reports that D&O premiums for financial institutions with inadequate AI governance are 20-45% higher than for institutions with structured oversight [Source: Marsh, “Directors and Officers Insurance: AI Governance Impact,” 2024].

A 2025 survey by The Thinking Company of European D&O insurers found that 67% have modified underwriting processes to assess AI governance. Average premium increases for institutions with “inadequate” AI governance ranged from 15-40%.


Four Approaches to Board AI Governance

Boards choose — explicitly or by default — among four approaches to AI governance. For the general board governance comparison, see the four-way board comparison.

Compliance-First Governance

What it means:

AI governance driven by legal and compliance teams. The board receives compliance reports focused on regulatory checklist completion. Governance is reactive — addressing regulations as they emerge rather than building proactive oversight capability.

How it manifests:

  • AI governance lives in the compliance function
  • Board receives quarterly compliance reports (green/amber/red dashboards)
  • Focus on “are we compliant?” rather than “is this AI effective and safe?”
  • External support comes from law firms or Big 4 regulatory practices

Strengths:

  • Strong regulatory mapping
  • Clear accountability to compliance function
  • Defensible if regulators ask “what did you do?”

Limitations:

  • Checklist compliance does not equal effective governance (see the advisory vs. compliance comparison)
  • May create governance theater without substance
  • Board lacks AI literacy to challenge compliance assertions
  • Organizational integration weak — governance sits outside the business

Technology-Delegated Governance

What it means:

The board delegates AI oversight to the CTO, CIO, or Chief Data Officer. Governance is embedded in technology decisions. The board stays hands-off, trusting technical leadership to manage AI risk. Deloitte’s 2024 Board Practices Survey found that 54% of financial services boards still delegate AI oversight entirely to technology leadership [Source: Deloitte, “Board Practices Quarterly: AI Governance,” 2024].

How it manifests:

  • AI governance lives in technology or data functions
  • Board receives infrequent, technical briefings
  • Governance tools provided by platform vendors (AWS AI governance, Azure responsible AI)
  • No board-level AI literacy program

Strengths:

  • Technical expertise applied to technical problems
  • Governance integrated with implementation
  • Vendor tools provide baseline capabilities

Limitations:

  • Board cannot exercise meaningful oversight
  • Separation of duties weak (same team builds and governs)
  • Organizational, ethical, and strategic dimensions missing
  • Regulators expect board engagement, not delegation

Advisory-Led Governance

What it means:

External advisory helps the board build AI literacy, design a governance framework, and establish oversight rhythms. The board actively governs AI rather than receiving reports about it.

How it manifests:

  • Board AI education sessions (literacy building)
  • Governance framework designed with external advisory support
  • Regular board oversight of AI risk and opportunity
  • External advisory on emerging issues (regulatory changes, technology shifts)

Strengths:

  • Board develops genuine AI capability
  • Governance proportionate to institution size
  • Independence and objectivity in design
  • Organizational integration addressed

Limitations:

  • Requires board time commitment
  • External cost (advisory fees)
  • Ongoing advisory relationship may be needed

Ad-Hoc / Reactive Governance

What it means:

No structured AI governance. The board addresses AI when issues arise — a failed model, a regulatory inquiry, a customer complaint, a data breach. Governance is incident-driven, not systematic.

How it manifests:

  • No standing AI agenda item for the board
  • AI discussed only when problems occur
  • No AI literacy building for directors
  • Governance responds to events rather than anticipating them

Strengths:

  • Low cost and time investment (until something goes wrong)
  • No governance “bureaucracy”

Limitations:

  • Reactive posture creates liability exposure
  • Regulators expect proactive governance
  • Board blindsided by AI risks
  • Reputational damage when issues surface publicly

KPMG reports that financial institutions with reactive AI governance are 4.2x more likely to experience a material AI incident than those with structured oversight [Source: KPMG, “AI Risk Management in Financial Services,” 2024].


Financial Services Board Governance Factor Weights

The Thinking Company’s Board AI Governance Evaluation Framework adjusts factor weights for financial services to reflect the sector’s regulatory density and fiduciary requirements. For the full governance maturity framework, see the dedicated methodology page.

Weight Adjustments

| Factor | Base Weight | FS Weight | Change | Rationale |
| --- | --- | --- | --- | --- |
| Board AI Literacy & Education | 15% | 15% | Unchanged | Still critical |
| EU AI Act Readiness | 15% | 20% | +5% | High-risk classifications for FS AI |
| Strategic Alignment | 10% | 10% | Unchanged | |
| Risk Identification & Management | 10% | 15% | +5% | Core banking competency |
| Organizational Integration | 15% | 10% | -5% | Compliance structures exist |
| Independence & Objectivity | 10% | 10% | Unchanged | |
| Speed to Operational Governance | 5% | 0% | -5% | Speed inappropriate for FS boards |
| Fiduciary Responsibility | 10% | 15% | +5% | KNF/ECB director liability |
| Scalability & Adaptability | 5% | 5% | Unchanged | |
| Knowledge Transfer | 5% | 0% | -5% | Absorbed by other factors |

Financial Services Board Governance Composite Scores

| Approach | Base Score | FS Score | Movement |
| --- | --- | --- | --- |
| Advisory-Led Governance | 4.33 | 4.15 | -0.18 |
| Compliance-First | 2.93 | 3.25 | +0.32 |
| Technology-Delegated | 1.95 | 1.70 | -0.25 |
| Ad-Hoc / Reactive | 1.18 | 0.95 | -0.23 |

Key insight: Compliance-First approaches gain significantly (+0.32) in financial services due to strong EU AI Act and risk management scores. The gap between Advisory-Led and Compliance-First narrows — but Advisory-Led remains ahead because compliance alone doesn’t build board capability or organizational integration. BCG found that boards with advisory-supported AI governance make 2.7x faster decisions on AI initiatives than boards relying solely on compliance reporting [Source: BCG, “Board Effectiveness in the AI Era,” 2024].

The Thinking Company’s Financial Services variant of the Board AI Governance Evaluation Framework increases EU AI Act Readiness weight to 20% and Fiduciary Responsibility to 15%, reflecting the sector’s regulatory density and director liability exposure.
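For directors who want to see the arithmetic, the sketch below shows how such a composite is computed: a weighted average of per-factor scores on a 1-5 scale. The FS weights come from the table above; the three Compliance-First factor scores stated later in this guide (EU AI Act Readiness 4.5, Risk Identification & Management 4.0, Fiduciary Responsibility 3.5) are used as given, while the remaining scores are hypothetical placeholders chosen only so the example reproduces the published 3.25 composite.

```python
# A minimal sketch of the composite-score arithmetic. FS weights are taken
# from the table above; the two factors weighted at 0% (Speed to Operational
# Governance, Knowledge Transfer) are omitted, so weights still sum to 100%.

FS_WEIGHTS = {
    "Board AI Literacy & Education": 0.15,
    "EU AI Act Readiness": 0.20,
    "Strategic Alignment": 0.10,
    "Risk Identification & Management": 0.15,
    "Organizational Integration": 0.10,
    "Independence & Objectivity": 0.10,
    "Fiduciary Responsibility": 0.15,
    "Scalability & Adaptability": 0.05,
}

def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-factor scores (each on a 1-5 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(weights[f] * scores[f] for f in weights)

# Illustrative Compliance-First factor scores. Only the AI Act (4.5),
# Risk (4.0), and Fiduciary (3.5) scores are stated in this guide; the
# rest are hypothetical placeholders.
compliance_first = {
    "Board AI Literacy & Education": 2.0,
    "EU AI Act Readiness": 4.5,
    "Strategic Alignment": 2.5,
    "Risk Identification & Management": 4.0,
    "Organizational Integration": 2.5,
    "Independence & Objectivity": 3.0,
    "Fiduciary Responsibility": 3.5,
    "Scalability & Adaptability": 2.5,
}

print(f"Compliance-First FS composite: {composite(compliance_first, FS_WEIGHTS):.2f}")
# -> Compliance-First FS composite: 3.25
```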


Critical Factors for Financial Services Boards

EU AI Act Readiness (Weight: 20%)

Why it matters more:

Financial services AI sits at the center of EU AI Act high-risk classifications:

  • Credit scoring and creditworthiness assessment (Annex III, Section 5(b))
  • Insurance premium determination (Annex III, Section 5(c))
  • Employment AI used by financial institutions (Annex III, Section 4)

Boards must ensure their institutions can demonstrate compliance with high-risk AI requirements: risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity. The European Commission estimates that compliance costs for high-risk AI systems range from 6,000-7,000 EUR per system for SMEs to over 300,000 EUR for large institutions with complex AI portfolios [Source: European Commission, “AI Act Impact Assessment,” 2024].

Financial Services Score Analysis:

| Approach | Score | Financial Services Context |
| --- | --- | --- |
| Compliance-First | 4.5 | Strong regulatory mapping; law firms and Big 4 have deep AI Act expertise |
| Technology-Delegated | 1.5 | Platform compliance features exist; strategic AI Act response outside scope |
| Advisory-Led | 4.0 | Practical AI Act readiness assessment; proportionate compliance design |
| Ad-Hoc / Reactive | 1.0 | No AI Act preparation; reactive when enforcement begins |

Board questions to ask:

  • Which of our AI systems fall under high-risk classification?
  • What is our compliance timeline for each high-risk system?
  • Who is accountable for AI Act compliance — and how do we verify it?

Fiduciary Responsibility Coverage (Weight: 15%)

Why it matters more:

Financial services directors face explicit regulatory accountability:

  • KNF Recommendation D assigns ICT risk management responsibility to management boards
  • DORA Article 5 requires management body approval and oversight of ICT risk (including AI)
  • EBA guidelines on ICT and security risk management extend to AI systems

D&O insurers are responding. Premium increases and coverage restrictions for inadequate AI governance are already appearing in policy renewals. According to Allianz Global Corporate & Specialty, AI-related D&O claims in financial services increased 340% between 2022 and 2024 [Source: Allianz AGCS, “Directors and Officers Insurance Insights,” 2024].

Financial Services Score Analysis:

| Approach | Score | Financial Services Context |
| --- | --- | --- |
| Compliance-First | 3.5 | Addresses regulatory fiduciary requirements; may miss D&O exposure angle |
| Technology-Delegated | 1.5 | Delegation does not equal discharge of fiduciary duty |
| Advisory-Led | 4.0 | Board liability explicitly addressed; D&O implications covered |
| Ad-Hoc / Reactive | 1.0 | Maximum fiduciary exposure; no governance to point to |

Board questions to ask:

  • How does our AI governance satisfy DORA Article 5 requirements?
  • What has our D&O insurer asked about AI governance?
  • Can I, as a director, explain how we govern AI if asked by a regulator?

Risk Identification & Management (Weight: 15%)

Why it matters more:

Risk management is a core banking competency. Boards expect sophisticated risk frameworks. AI governance should integrate with — not duplicate — existing risk architecture.

Financial Services Score Analysis:

| Approach | Score | Financial Services Context |
| --- | --- | --- |
| Compliance-First | 4.0 | Strong model risk management methodology; integrates with existing risk frameworks |
| Technology-Delegated | 2.5 | Technical risk identification; limited strategic risk perspective |
| Advisory-Led | 4.0 | AI risk integrated with enterprise risk; emerging risk horizon-scanning |
| Ad-Hoc / Reactive | 1.0 | Risk identified only when realized |

Board questions to ask:

  • How does AI risk integrate with our existing risk taxonomy?
  • What is our model risk management approach for ML/AI models?
  • How do we identify emerging AI risks before they materialize?

When Each Approach Fits

Choose Compliance-First When:

  • Regulatory compliance is the immediate priority (enforcement timeline pressure)
  • Your institution has strong existing compliance infrastructure
  • Board AI literacy is not yet achievable (time constraints)
  • You need defensible documentation quickly

Limitation to address: Compliance-first can become compliance-only. Plan to evolve toward substantive governance once the regulatory baseline is established.

Choose Technology-Delegated When:

  • AI use is limited to low-risk applications (not high-risk under AI Act)
  • Technical leadership has proven governance capability
  • Board lacks bandwidth for direct AI oversight
  • You need governance quickly with minimal board involvement

Limitation to address: Regulators expect board engagement. Technology delegation alone is insufficient for high-risk AI systems.

Choose Advisory-Led When:

  • Board wants to build genuine AI governance capability
  • AI is strategically significant (not just operational)
  • High-risk AI systems require substantive board oversight
  • You need governance proportionate to your institution’s size

Limitation to address: Requires board time commitment and ongoing investment in AI literacy.

Choose Ad-Hoc / Reactive When:

Never. For financial services institutions, ad-hoc governance is not a viable approach. Regulatory requirements, fiduciary duties, and liability exposure make structured governance mandatory.


Building Financial Services Board AI Governance

Phase 1: Board Literacy Foundation (Months 1-2)

Board members cannot govern what they don’t understand. Before governance structures, build understanding:

  • AI literacy session: What is AI? What can it do? What are the risks? (2-3 hours)
  • Financial services AI landscape: How is AI being used in our sector? What are peers doing? (2 hours)
  • Regulatory briefing: EU AI Act, DORA, national regulator expectations (2 hours)

Deliverable: Board members can ask informed questions about AI. The board AI literacy factor analysis provides a detailed maturity assessment methodology.

Phase 2: Current State Assessment (Month 2)

Map the AI landscape within your institution:

  • Inventory of AI/ML systems in use or development
  • Classification against EU AI Act high-risk criteria
  • Current governance arrangements (or gaps)
  • Model risk management maturity

Deliverable: Board knows what AI exists, where it is, and how it’s governed today. Gartner estimates that 60% of organizations deploying AI cannot fully inventory their AI systems [Source: Gartner, “AI TRiSM Framework,” 2024].
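As a concrete illustration of the Phase 2 deliverable, the sketch below models a minimal AI inventory with EU AI Act classification flags. It assumes a simple in-house register; the field names, example systems, and the gap check are hypothetical, not a prescribed schema.

```python
# A minimal sketch of an AI system register with EU AI Act classification
# flags. All field names and example entries are hypothetical.

from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    business_use: str
    annex_iii_category: str | None  # e.g. "Annex III 5(b)"; None if not high-risk
    accountable_owner: str          # executive accountable to the board
    governance_status: str          # e.g. "governed" or "gap identified"

inventory = [
    AISystem("retail-credit-scoring", "consumer credit decisions",
             "Annex III 5(b) creditworthiness assessment", "CRO", "gap identified"),
    AISystem("marketing-segmentation", "campaign targeting",
             None, "CMO", "governed"),
]

# Board-level summary: which systems are high-risk, and where are the gaps?
high_risk = [s for s in inventory if s.annex_iii_category is not None]
gaps = [s for s in high_risk if s.governance_status != "governed"]
print(f"{len(high_risk)} high-risk system(s); {len(gaps)} with governance gaps")
```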

Phase 3: Governance Framework Design (Months 2-3)

Design governance proportionate to your institution:

  • Committee structure (existing committees or new AI committee?)
  • Reporting cadence and content
  • Escalation thresholds (see the sketch below)
  • Policy framework (AI ethics, risk, development, procurement)

Deliverable: Board-approved AI governance framework. For the recommended oversight calendar, see the board governance toolkit.
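One way to keep the framework honest is to record reporting cadence and escalation thresholds in machine-checkable form. The sketch below is a minimal illustration of that idea; every name, threshold, and value is a hypothetical placeholder, not a recommended setting.

```python
# A minimal sketch of a board-approved governance policy captured as data.
# All thresholds and names below are hypothetical illustrations.

GOVERNANCE_POLICY = {
    "reporting": {
        "board_ai_risk_report": "quarterly",
        "framework_review": "annual",
    },
    "escalation": {
        # Findings at or above these levels go to the board,
        # not just the risk committee.
        "model_accuracy_drop_pct": 5.0,    # vs. validated baseline
        "affected_customers": 1_000,
        "new_high_risk_deployment": True,  # always a board matter
    },
}

def must_escalate(accuracy_drop_pct: float, affected_customers: int) -> bool:
    """Return True when an incident crosses a board escalation threshold."""
    esc = GOVERNANCE_POLICY["escalation"]
    return (accuracy_drop_pct >= esc["model_accuracy_drop_pct"]
            or affected_customers >= esc["affected_customers"])

print(must_escalate(accuracy_drop_pct=6.2, affected_customers=250))  # True
```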

Phase 4: Operational Integration (Months 3-6)

Embed governance into operations:

  • Integrate AI risk into enterprise risk management
  • Establish AI oversight rhythms (quarterly reports, annual review)
  • Develop board AI metrics and dashboards (illustrated below)
  • Build internal AI governance capability

Deliverable: Governance operating, not just documented.
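To illustrate the metrics-and-dashboards item above, the sketch below derives three indicators a quarterly board report might carry from the kind of inventory built in Phase 2. The sample systems and metric definitions are hypothetical.

```python
# A minimal sketch of board dashboard metrics derived from an AI inventory.
# Sample data and metric names are hypothetical illustrations.

systems = [
    {"name": "retail-credit-scoring",  "high_risk": True,  "human_oversight_documented": True,  "open_incidents": 0},
    {"name": "claims-triage",          "high_risk": True,  "human_oversight_documented": False, "open_incidents": 1},
    {"name": "marketing-segmentation", "high_risk": False, "human_oversight_documented": True,  "open_incidents": 0},
]

high_risk = [s for s in systems if s["high_risk"]]
oversight_coverage = sum(s["human_oversight_documented"] for s in high_risk) / len(high_risk)
open_incidents = sum(s["open_incidents"] for s in systems)

print(f"High-risk systems: {len(high_risk)}")
print(f"Human-oversight documentation coverage: {oversight_coverage:.0%}")  # 50%
print(f"Open AI incidents: {open_incidents}")
```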

Phase 5: Continuous Improvement (Ongoing)

AI governance is not a project with an end date:

  • Quarterly board AI risk reviews
  • Annual governance framework review
  • Horizon-scanning for regulatory changes
  • Board AI literacy refresh

Polish Market Considerations

For Polish financial institutions, additional considerations apply:

KNF Expectations:

  • Recommendation D (IT and ICT security) creates board accountability for AI systems
  • KNF has signaled enhanced model risk management expectations for ML models
  • EU AI Act enforcement will be coordinated through national authorities

DORA Implementation:

  • The 17 January 2025 application date has passed
  • ICT risk management framework requirements extend to AI
  • Third-party AI service provider oversight is mandatory

Local Governance Patterns:

  • Supervisory board (rada nadzorcza) structure requires clear AI reporting lines
  • Management board (zarząd) accountability for AI operations
  • Two-tier board structure creates specific governance design requirements

Polish Financial Sector Examples:

  • PKO BP AI ethics principles and governance
  • Santander Bank Polska model risk management framework
  • Polish Bank Association AI guidance

The Polish Financial Supervision Authority (KNF) is expected to issue supplementary AI guidance in 2026, aligning national expectations with EU AI Act enforcement timelines [Source: Based on professional judgment informed by KNF regulatory agenda].


What The Thinking Company Recommends

Financial services boards face overlapping regulatory obligations — EU AI Act, DORA, and national regulator expectations — that make structured AI governance a board-level imperative, not a technology team concern.

  • AI Diagnostic (EUR 15–25K): Industry-specific assessment covering regulatory compliance, data infrastructure, and AI readiness.
  • AI Transformation Sprint (EUR 50–80K): Focused 4-6 week engagement tailored to financial services requirements including regulatory considerations.

Learn more about our approach →


Frequently Asked Questions

What personal liability do directors face for AI governance failures?

Directors face personal liability through duty of care obligations, DORA Article 5 management body requirements, and national regulations like KNF Recommendation D. D&O insurers are pricing this risk: Allianz reports a 340% increase in AI-related D&O claims in financial services between 2022 and 2024. Directors who cannot demonstrate they established adequate AI governance face both regulatory sanctions and personal financial exposure through D&O coverage restrictions. [Source: Allianz AGCS, 2024; DORA Regulation 2022/2554]

How should a bank board structure AI oversight?

The Thinking Company recommends a phased approach: start with board literacy (months 1-2), conduct a current-state AI inventory and EU AI Act classification (month 2), design a proportionate governance framework (months 2-3), and embed governance into operations (months 3-6). Most financial institutions integrate AI oversight into existing risk or audit committees rather than creating standalone AI committees. The key is clear accountability, regular reporting cadence, and escalation thresholds for high-risk AI decisions.

What is the difference between compliance-first and advisory-led AI governance?

Compliance-first governance focuses on regulatory checklist completion and scores 3.25/5.0 in The Thinking Company’s financial services framework. Advisory-led governance builds genuine board capability and organizational integration, scoring 4.15/5.0. The critical difference: compliance-first answers “are we compliant?” while advisory-led answers “is our AI effective, safe, and governed?” Compliance-first is appropriate as an initial posture under enforcement pressure; advisory-led is the target state for substantive governance.

How much does AI governance cost for a mid-sized bank?

Costs vary by institution size and AI portfolio complexity. The European Commission estimates compliance costs of 6,000-7,000 EUR per high-risk AI system for SMEs and significantly more for large institutions. External advisory for governance framework design typically runs 25,000-80,000 EUR. Ongoing governance operations (monitoring, reporting, audit) add 5-15% to AI program budgets. These costs should be weighed against the alternative: EU AI Act penalties of up to 35 million EUR or 7% of global turnover, plus D&O premium increases of 20-45% for inadequate governance. [Source: European Commission, 2024; Marsh, 2024]

Does DORA apply to AI systems specifically?

DORA applies to all ICT systems, a scope that encompasses AI. Article 5 requires the management body to approve, oversee, and take responsibility for the ICT risk management framework. For AI systems, this means boards must approve AI risk frameworks, receive regular AI risk reports, ensure incident response capability for AI failures, and oversee third-party AI service providers. The 17 January 2025 application date has passed; financial entities should already have DORA-compliant AI governance in place. [Source: DORA, Regulation 2022/2554]


Next Steps

This guide has applied The Thinking Company’s board governance framework to financial services. For broader context, see the framework methodology and factor guides referenced throughout.


This article was last updated on 2026-03-11. Part of The Thinking Company’s AI Governance Framework content series. For a personalized assessment, contact our team.