The Thinking Company

10 Questions Every Board Should Ask About AI

Every board overseeing AI should be able to answer 10 governance questions spanning AI inventory, risk classification, performance monitoring, risk appetite, data governance, incident response, accountability, competitive positioning, independence, and governance evolution. Boards that can answer 8-10 of these questions with evidence-backed specificity operate at Stage 4+ maturity; those answering 0-1 are carrying unmitigated governance risk. With the EU AI Act imposing penalties of up to EUR 35 million, and with 72% of organizations reporting employee use of generative AI while fewer than half have policies governing it, these questions give directors a structured framework for fulfilling their fiduciary duty over AI.

At a supervisory board meeting in late 2025, a non-executive director asked a question that stopped the room: “What should we be asking about AI?” The CTO paused. The general counsel looked at the compliance officer. The chair suggested adding it to the next quarter’s agenda.

The question was reasonable. The silence that followed it was the problem. A board that does not know what to ask about AI cannot govern it, and a board that cannot govern AI is accumulating risk it cannot see. The fact that the question surfaced at all was progress. The fact that no one had a structured answer revealed a governance gap that regulation, fiduciary duty, and competitive reality no longer tolerate. [Source: Based on professional judgment, The Thinking Company advisory experience]

This article provides the structured answer. Ten questions, each covering a distinct governance dimension: AI inventory, risk classification, performance monitoring, risk appetite, data governance, incident response, accountability, competitive positioning, independence, and governance evolution. The Thinking Company recommends that every board overseeing AI be able to answer all ten.

These are not theoretical. They are drawn from The Thinking Company’s Board AI Governance Evaluation Framework, the EU AI Act’s board-level obligations, and direct advisory experience with European mid-market boards. Each question includes why it matters, what a strong answer looks like, and what the absence of an answer reveals about governance gaps. The questions are sequenced from foundational (do you know what AI you have?) to strategic (is your governance improving over time?).

For the complete governance evaluation methodology, see the Board AI Governance Decision Framework. For the maturity model that underpins the scoring references below, see The Board AI Governance Maturity Model: 5 Levels of Oversight.


The 10 Questions

1. What AI systems does our organization currently deploy, and what business decisions do they influence?

Why it matters. A board cannot govern what it has not inventoried. Shadow AI is pervasive across mid-market organizations. Employees adopt ChatGPT, Microsoft Copilot, and other consumer AI tools without informing governance functions. Vendor software increasingly embeds AI features that activate through updates, not through procurement decisions. A 2025 McKinsey survey found that 72% of organizations reported employees using generative AI tools, but fewer than half had policies governing that use. [Source: McKinsey Global Survey on AI, 2025] Confidence: Medium — survey data is directional; adoption rates vary by industry and region.

What a strong answer looks like. A complete inventory that includes three categories: formal AI projects (the ones management reports on), vendor AI embedded in existing tools (the AI capabilities inside your HR platform, CRM, or ERP that may have activated through software updates), and employee use of consumer AI (the ChatGPT conversations happening with company data). Each system is classified by the business decisions it influences and assigned a risk level. The inventory is maintained as a living document with a defined update cadence.
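As a concrete illustration, the sketch below shows one way such an inventory entry might be structured so that every system carries its category, the decisions it influences, a risk level, and a review date. The schema, field names, and enum values are hypothetical, not a prescribed format.

```python
# Hypothetical sketch of an AI inventory entry; field names and enum
# values are illustrative, not a prescribed schema.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Category(Enum):
    FORMAL_PROJECT = "formal AI project"
    VENDOR_EMBEDDED = "vendor AI embedded in existing tools"
    EMPLOYEE_CONSUMER = "employee use of consumer AI"


class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high-risk (EU AI Act Annex III)"


@dataclass
class InventoryEntry:
    system_name: str
    category: Category
    decisions_influenced: list[str]  # e.g. ["loan approval", "CV screening"]
    risk_level: RiskLevel
    owner: str                       # a named accountable role, not a team
    last_reviewed: date              # supports the living-document update cadence


entry = InventoryEntry(
    system_name="CV screening module in HR platform",
    category=Category.VENDOR_EMBEDDED,
    decisions_influenced=["shortlisting of job candidates"],
    risk_level=RiskLevel.HIGH,
    owner="Head of HR Operations",
    last_reviewed=date(2026, 3, 1),
)
```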

What a gap looks like. “The CTO can tell you” or “We have a few AI projects underway.” If the board cannot describe its organization’s AI footprint without calling someone else, governance exists in delegation, not in oversight. A board that does not know what AI systems operate across the organization cannot evaluate risk exposure, regulatory compliance, or strategic alignment. Under the EU AI Act, deployers must know what they deploy. A board that defers this awareness to management has not met that standard. [Source: EU AI Act, Regulation (EU) 2024/1689, Article 26]


2. Which of our AI systems would classify as high-risk under the EU AI Act, and what obligations does that create?

Why it matters. The EU AI Act, entering enforcement in 2025-2026, creates direct board-level obligations for organizations deploying high-risk AI systems in Europe. High-risk systems under Annex III, including AI used in employment decisions, credit scoring, and access to essential services, require conformity assessments, human oversight, transparency, and documented risk management. Penalties under the Act reach EUR 35 million or 7% of global annual turnover for prohibited practices; non-compliance with high-risk obligations carries fines of up to EUR 15 million or 3%. The high-risk system requirements become enforceable in August 2026. [Source: EU AI Act, Regulation (EU) 2024/1689, Articles 6-7, Annex III, Articles 99-101]

What a strong answer looks like. The board can identify specific AI systems that fall within Annex III categories. Each system has been mapped to its risk classification with documented reasoning. A compliance timeline exists with milestones for conformity assessment, human oversight implementation, and documentation completion before the August 2026 enforcement date. The board receives regular updates on progress against this timeline. Boards that have conducted an AI readiness assessment can map classification results against organizational preparedness across all eight dimensions.
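To make the mapping exercise concrete, the sketch below shows a first-pass keyword screen of inventoried systems against a simplified subset of Annex III categories. The category keywords are abbreviated and illustrative; actual classification requires legal review and documented reasoning.

```python
# Hypothetical first-pass screen against a simplified subset of Annex III
# categories. Keywords are illustrative; this is not legal classification.
ANNEX_III_KEYWORDS = {
    "employment": {"recruitment", "hiring", "promotion", "termination", "shortlisting"},
    "credit": {"credit", "creditworthiness", "loan", "scoring"},
    "essential_services": {"insurance", "benefits", "emergency"},
    "education": {"admission", "exam", "grading"},
}


def screen_for_high_risk(decisions_influenced: list[str]) -> set[str]:
    """Return Annex III categories a system's decisions may touch."""
    flagged = set()
    for decision in decisions_influenced:
        words = set(decision.lower().split())
        for category, keywords in ANNEX_III_KEYWORDS.items():
            if words & keywords:
                flagged.add(category)
    return flagged


print(screen_for_high_risk(["credit scoring of retail customers"]))
# {'credit'} -> flag for formal classification and conformity planning
```

Any system the screen flags goes to legal for formal classification; the screen's value is surfacing candidates that management reporting may have missed.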

What a gap looks like. “Legal is handling it.” Legal handles execution. The board provides oversight. A board that has delegated EU AI Act classification to the legal function without maintaining its own understanding of which systems are classified as high-risk, and why, cannot fulfill its fiduciary duty. When a regulator asks whether the board was aware of the organization’s high-risk AI exposure, “we trusted legal” is not an adequate answer. For a detailed classification guide, see EU AI Act High-Risk AI Systems: A Board Member’s Classification Guide.


3. How do we measure whether our AI systems are performing as intended?

Why it matters. AI models degrade over time. Data drift (the data the model encounters in production diverges from the data it was trained on) and concept drift (the relationship between inputs and outputs changes) erode model accuracy without generating visible errors. A model that achieved 94% accuracy at deployment may be operating at 78% accuracy six months later, producing outputs that look normal but no longer reflect the quality the organization approved. According to a 2024 Gartner study, 85% of AI projects that reached production experienced measurable performance degradation within 12 months of deployment, with an average accuracy decline of 11 percentage points. [Source: Gartner, “AI in Production: Performance Monitoring,” 2024] Without monitoring, degradation is invisible until a failure becomes consequential. Confidence: High — model degradation is a well-documented phenomenon in production AI systems.

What a strong answer looks like. Each AI system has defined performance KPIs appropriate to its function. A monitoring cadence exists, whether continuous, weekly, or monthly, calibrated to the system’s risk level. Threshold triggers specify when performance degradation requires human review, model retraining, or system suspension. The board receives summary reporting on AI system performance at a cadence appropriate to the number and risk level of systems deployed.
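The threshold-trigger logic can be made concrete. The sketch below assumes the organization tracks a rolling accuracy estimate against labeled samples; the threshold values and responses are illustrative placeholders that a real framework would calibrate per system and risk level.

```python
# Hypothetical threshold triggers mapping observed degradation to a
# governance response. Thresholds and actions are illustrative.
def performance_action(rolling_accuracy: float, baseline_accuracy: float) -> str:
    """Map the accuracy drop since deployment to a defined response."""
    drop = baseline_accuracy - rolling_accuracy
    if drop >= 0.10:   # e.g. 94% at deployment, below 84% now
        return "suspend system; notify board per escalation criteria"
    if drop >= 0.05:
        return "trigger retraining and human review of recent decisions"
    if drop >= 0.02:
        return "investigate for data or concept drift"
    return "continue routine monitoring"


print(performance_action(rolling_accuracy=0.78, baseline_accuracy=0.94))
# 16-point drop -> "suspend system; notify board per escalation criteria"
```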

What a gap looks like. “The data science team watches the dashboards.” The board needs to know what is measured, whether the measurements are adequate, and what happens when performance drops below acceptable thresholds. Dashboard monitoring by technical teams is operational management. Governance means the board understands the measurement framework and has confidence that it would surface problems before they produce harm. If the board cannot describe the KPIs for its highest-risk AI systems, oversight is nominal.


4. What is our AI risk appetite, and how does it compare to our broader enterprise risk framework?

Why it matters. AI creates risk categories that do not fit neatly into traditional enterprise risk management: model risk (the AI produces incorrect outputs), bias risk (the AI produces discriminatory outcomes), explainability risk (the AI produces correct outputs that cannot be explained to regulators or affected individuals), and regulatory risk (the AI’s operation violates emerging regulation). A board with no AI-specific risk appetite statement is making AI risk decisions on an ad-hoc basis, which means each decision is made without reference to an agreed standard. Ad-hoc risk management produces inconsistent outcomes and leaves the board unable to demonstrate systematic oversight. The AI governance framework provides a structured approach to defining risk appetite across these categories.

What a strong answer looks like. A board-approved AI risk appetite statement that integrates into the organization’s existing ERM framework. Clear thresholds define acceptable levels of model risk, data quality risk, and ethical risk. The risk appetite addresses both the risk of AI failure (a system producing harmful outcomes) and the risk of AI absence (competitive displacement from under-investment). Risk tolerance levels are differentiated by AI system classification, with tighter tolerances for high-risk systems under the EU AI Act.
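Encoded in even a simple form, a risk appetite statement becomes checkable rather than rhetorical. The sketch below is hypothetical: the metrics and limits are placeholders that a board would set and approve, with tolerances tightening by EU AI Act classification.

```python
# Hypothetical risk appetite encoded as per-classification tolerances.
# Metrics and limits are illustrative placeholders, not recommendations.
RISK_APPETITE = {
    "high-risk": {"max_accuracy_drop": 0.02, "max_open_bias_findings": 0},
    "limited":   {"max_accuracy_drop": 0.05, "max_open_bias_findings": 1},
    "minimal":   {"max_accuracy_drop": 0.10, "max_open_bias_findings": 3},
}


def within_appetite(classification: str, accuracy_drop: float,
                    open_bias_findings: int) -> bool:
    """Check a system's current state against the approved tolerances."""
    limits = RISK_APPETITE[classification]
    return (accuracy_drop <= limits["max_accuracy_drop"]
            and open_bias_findings <= limits["max_open_bias_findings"])


print(within_appetite("high-risk", accuracy_drop=0.03, open_bias_findings=0))
# False -> the deployment is outside appetite and requires escalation
```

The point of the encoding is consistency: the same deployment facts produce the same in-or-out-of-appetite answer regardless of which business unit asks.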

What a gap looks like. “We manage AI risk case by case.” Case-by-case risk management is a governance gap dressed as flexibility. Without an agreed risk appetite, the organization has no way to evaluate whether a specific AI deployment is within acceptable bounds. Two business units facing similar AI risk decisions may reach opposite conclusions, and neither would be wrong by the organization’s standards, because no standards exist. Boards that have not defined AI risk appetite are operating in a governance vacuum on one of the fastest-evolving risk domains they face.


5. What data do our AI systems use, and what governance exists over data quality, privacy, and bias?

Why it matters. AI output quality is bounded by data input quality. Biased training data produces biased outcomes, whether the bias is intentional or, more commonly, embedded in historical patterns the data reflects. GDPR Article 22 governs automated individual decision-making and gives individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significant effects. The EU AI Act’s data governance requirements under Article 10 require that training, validation, and testing data for high-risk systems meet quality criteria including relevance, representativeness, and freedom from errors. [Source: GDPR, Article 22; EU AI Act, Regulation (EU) 2024/1689, Article 10]

What a strong answer looks like. A data governance framework that covers provenance (where does the data come from?), quality standards (how is accuracy measured and maintained?), privacy compliance (does data processing comply with GDPR and sector-specific requirements?), and bias assessment (has the data been evaluated for representational bias and historical discrimination patterns?) for each AI system. The framework assigns data governance responsibilities to named roles and includes audit mechanisms.
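A per-system data governance record can make the four dimensions auditable. The structure below is a hypothetical sketch; the field names are illustrative.

```python
# Hypothetical per-system data governance record covering the four
# dimensions above, with responsibilities assigned to named roles.
from dataclasses import dataclass
from datetime import date


@dataclass
class DataGovernanceRecord:
    system_name: str
    provenance: str              # where the data comes from
    quality_owner: str           # named role accountable for accuracy standards
    privacy_basis: str           # GDPR lawful basis / Article 22 assessment ref
    bias_assessment_date: date   # last evaluation for representational bias
    audit_mechanism: str         # e.g. "annual independent data audit"
```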

What a gap looks like. “That’s a technical question for the data team.” Data governance becomes a board-level concern when AI systems using that data make decisions affecting customers, employees, or regulatory compliance. A board that classifies data governance as a technical matter has misidentified the risk. When an AI system denies a loan application, rejects a job candidate, or flags a customer for fraud review, the data that informed that decision is a governance matter, not a technical one.


6. If an AI system produced a discriminatory outcome tomorrow, what is our response plan?

Why it matters. AI bias incidents create legal liability under EU anti-discrimination directives, reputational damage that social media amplifies within hours, and regulatory scrutiny that extends beyond the specific incident to the organization’s entire AI governance posture. According to the AI Incident Database, reported AI bias and discrimination incidents increased by 58% between 2023 and 2025, with employment and financial services as the most affected sectors. [Source: AI Incident Database, Partnership on AI, 2025] Boards that have not prepared a response plan will create one under crisis conditions, and crisis-created plans produce poor outcomes: delayed communication, inconsistent messaging, reactive remediation, and a public record of governance unpreparedness. The preparation cost is small. The crisis cost is not.

What a strong answer looks like. A documented incident response process specific to AI bias and AI failure scenarios. The process defines who is notified (including board notification criteria), who leads the response, what technical steps are taken to investigate and contain the impact, how affected individuals are communicated with, and what remediation protocol applies. The plan has been tested through at least a tabletop exercise. Escalation thresholds specify when an AI incident becomes a board-level matter.
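Escalation thresholds are easiest to test when written down as decision logic. The sketch below is hypothetical; the criteria are placeholders that a real incident response plan would define and exercise in a tabletop setting.

```python
# Hypothetical escalation criteria deciding when an AI incident becomes
# a board-level matter. Thresholds are illustrative placeholders.
def escalation_level(affected_individuals: int, high_risk_system: bool,
                     discriminatory_outcome: bool) -> str:
    if discriminatory_outcome or (high_risk_system and affected_individuals > 0):
        return "board notified within 24 hours; response lead appointed"
    if affected_individuals >= 100:
        return "executive escalation; board informed at next scheduled update"
    return "handled operationally; logged for quarterly incident reporting"
```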

What a gap looks like. “That’s unlikely with our systems.” Confidence without evidence is not governance. Every AI system operating on historical data carries bias risk. The question is whether the bias is material, and whether the organization would detect and respond to it before external parties do. A board that believes AI bias incidents are unlikely has either conducted a rigorous bias assessment that supports that belief (in which case, say so), or is substituting assumption for analysis. The former is governance. The latter is exposure.


7. Who is accountable for AI governance across the organization, and how does that accountability reach this board?

Why it matters. Clear accountability prevents governance gaps. Without a defined chain from AI system owner to management to board committee to full board, AI governance exists in organizational charts but not in practice. When an AI governance decision requires escalation, the absence of a defined path means it either does not escalate (creating unmanaged risk) or escalates through ad-hoc channels (creating inconsistent governance). According to The Thinking Company’s Board AI Governance Evaluation Framework, the three most heavily weighted factors in board-level AI oversight are board AI literacy, EU AI Act readiness, and organizational integration of governance practices, each weighted at 15%. Accountability structures are the mechanism through which organizational integration happens.

What a strong answer looks like. Named roles at each level: AI system owners responsible for individual systems, a Chief Risk and AI Officer (CRAO) or equivalent responsible for organizational AI governance, a board committee (audit, risk, or dedicated AI/technology committee) with AI governance in its terms of reference, and a defined reporting cadence that brings AI governance matters to the board on a regular schedule. Escalation criteria are documented, specifying what types of AI decisions, incidents, or risk exposures require board notification or approval.

What a gap looks like. “The CTO oversees AI.” Single-point accountability without a board reporting chain scores 1.95/5.0 on The Thinking Company’s governance maturity assessment. This is Technology-Delegated governance, where the person being governed is the same person defining the governance framework. The structural conflict is obvious: a CTO reporting on their own AI governance performance faces the same independence problem as a CFO auditing their own financial statements. Effective governance requires separation between execution and oversight. For more on the independence dimension, see Board AI Literacy: The Foundation of Effective AI Governance.


8. How does our AI investment compare to competitors, and is our AI capability building competitive advantage?

Why it matters. Boards govern strategy, not only risk. The risk of under-investing in AI (competitive displacement as peers automate processes, personalize customer experiences, and accelerate decision cycles) may be larger than the risk of AI failure. A board focused exclusively on AI risk and compliance, without evaluating AI as a strategic asset, is governing half the picture. Compliance-only governance ensures the organization does not violate regulation; it does not ensure the organization remains competitive. McKinsey’s 2025 Global Survey on AI found that top-quartile AI adopters achieved 2.3x higher revenue growth than industry peers, with the gap widening from 1.8x in 2023. That makes AI investment a strategic governance matter, not only a technology decision. [Source: McKinsey, “The State of AI in 2025,” 2025]

What a strong answer looks like. Competitive benchmarking data showing the organization’s AI investment as a percentage of revenue compared to industry peers. Specific competitive advantages the organization is building or defending through AI, with measurable outcomes (cost reduction achieved, revenue enabled, decision speed improved). A clear articulation of where AI creates strategic differentiation and where the organization is at risk of falling behind competitors’ AI capabilities. An AI ROI calculator provides the quantitative framework for this analysis. This information should reach the board as part of strategy discussions, not only as part of risk reporting.
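The core benchmarking arithmetic is simple enough to show. The figures below are illustrative placeholders, not market data; a real peer median would come from industry benchmarking.

```python
# Hypothetical benchmarking arithmetic for Question 8. All figures are
# illustrative placeholders, not market data.
ai_spend = 1_200_000    # EUR, annual AI investment
revenue = 80_000_000    # EUR, annual revenue
peer_median_pct = 2.1   # % of revenue, from industry benchmarking

own_pct = 100 * ai_spend / revenue
print(f"AI investment: {own_pct:.1f}% of revenue vs. peer median {peer_median_pct}%")
# AI investment: 1.5% of revenue vs. peer median 2.1% -> possible under-investment
```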

What a gap looks like. “We’re not trying to be an AI company.” No organization needs to be an AI company. Every organization operating in a competitive market is becoming an AI-using organization. The question is whether AI use is governed strategically, with the board evaluating whether investment levels are adequate to maintain competitive position, or whether AI investment happens without board-level strategic context. Research compiled by The Thinking Company indicates that boards relying solely on compliance-first AI governance score 2.0/5.0 on board AI literacy and 2.0/5.0 on organizational integration, the two factors most predictive of whether AI governance translates from policy into practice. For the full governance approach comparison, see AI Governance for Boards: A Decision Framework.


9. What independent perspective does this board have on our AI strategy and governance?

Why it matters. Management presents AI information to the board. Without independent perspective, the board receives management’s narrative about AI performance, risk, and opportunity. This is not a question of management integrity. It is a question of information asymmetry: management operates AI systems daily and understands their capabilities and limitations. The board reviews AI quarterly, at best, through the lens management provides. Independent perspective gives the board a second source, one that can validate management’s assessments, identify blind spots, and ask questions that management’s framing may not invite.

What a strong answer looks like. At least one of the following: an independent AI advisory relationship that reports to the board (not to management), a board member with domain expertise in AI sufficient to challenge management’s proposals and assessments, or an annual independent governance assessment conducted by a party that is not the same firm managing the organization’s AI operations. Advisory-Led governance scores 5.0/5.0 on independence in The Thinking Company’s evaluation framework because the advisory relationship is structured to serve the board’s oversight interest, not management’s operational interest.

What a gap looks like. “We trust management’s judgment.” Trust is not governance. Trust is the outcome of governance done well, where the board has verified, through independent assessment, that management’s judgment warrants trust. Without independent perspective, the board cannot distinguish between management that is managing AI well and management that is presenting AI management favorably. Both look identical from inside the boardroom. Only independent perspective reveals the difference. See EU AI Act Board Obligations in 2026 for the regulatory context that makes independent oversight a compliance consideration, not only a governance preference.


10. How has our board’s AI governance capability changed in the past 12 months?

Why it matters. AI evolves faster than any other domain the board oversees. The EU AI Act’s phased enforcement adds new obligations each year. Foundation model capabilities shift quarterly. Industry AI adoption rates are accelerating. Governance capability that was adequate 18 months ago may be insufficient for current conditions. A board that assessed its AI governance posture in early 2025 and has not revisited it is governing based on outdated assumptions about technology, regulation, and organizational AI maturity. The AI maturity model provides a structured framework for tracking this evolution at the organizational level.

What a strong answer looks like. A documented board education program with sessions completed, topics covered, and measurable literacy improvement (self-assessed or independently evaluated). Governance framework updates that reflect new regulatory requirements (EU AI Act phases entering enforcement), new technology developments (the organization’s adoption of new AI capabilities), and lessons learned from AI incidents or near-misses. Evidence that the board’s questions about AI have become more specific and more informed over time, moving from “What is AI?” toward “What is the performance drift on our credit scoring model, and does it warrant recalibration?”

What a gap looks like. “We set up AI governance last year, so we’re covered.” Static governance in a dynamic field is declining governance. If the board’s AI governance framework, literacy level, and oversight processes are the same today as they were 12 months ago, the board has not kept pace with a domain that changed materially during that period. The governance may have been adequate when established. It is less adequate today. In six months, it will be less adequate still.


Score Your Board

Count how many of these 10 questions your board can answer with specific, evidence-backed responses, not with vague reassurances or delegation to management.

Questions Answered | Governance Level | Maturity Stage
--- | --- | ---
8-10 | Strong governance. The board exercises active, informed AI oversight with structured processes and independent perspective. | Stage 4+ (Proactive or Advisory-Led)
5-7 | Compliance-level governance. The board addresses regulatory requirements but may lack strategic AI oversight and independent challenge capability. | Stage 3 (Structured)
2-4 | Reactive governance. The board engages with AI when prompted by external triggers but lacks consistent oversight processes. | Stage 2 (Reactive)
0-1 | Absent governance. AI is not on the board’s agenda in a structured way. Risk is accumulating without board-level visibility. | Stage 1 (Unaware)
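For boards that want to automate the tally, the sketch below maps the count of well-answered questions to the stages in the table above. The mapping follows the table; the function name is our own.

```python
# Scoring helper following the table above; the function name is illustrative.
def maturity_stage(questions_answered: int) -> str:
    if not 0 <= questions_answered <= 10:
        raise ValueError("expected a count between 0 and 10")
    if questions_answered >= 8:
        return "Stage 4+ (Proactive or Advisory-Led): strong governance"
    if questions_answered >= 5:
        return "Stage 3 (Structured): compliance-level governance"
    if questions_answered >= 2:
        return "Stage 2 (Reactive): reactive governance"
    return "Stage 1 (Unaware): absent governance"


print(maturity_stage(4))  # Stage 2 (Reactive): reactive governance
```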

Most European mid-market boards score between 2 and 5 on this self-assessment. The distribution clusters around Stage 2 (Reactive) and Stage 3 (Structured), with very few boards scoring above 7. This is consistent with broader survey data showing that board-level AI governance remains early-stage across the European market. [Source: Based on professional judgment, The Thinking Company advisory experience] Confidence: Medium — based on advisory experience; formal survey data on European mid-market boards is limited.

The scoring is directional, not definitive. A board that answers 6 questions well and 4 poorly is in a different position than a board that answers 6 questions superficially. The quality of the answer matters as much as its existence. A board that can identify its AI inventory (Question 1) but has no risk classification (Question 2) and no monitoring framework (Question 3) has built the foundation without the structure. A board that has strong regulatory compliance (Question 2) but no competitive benchmarking (Question 8) and no independent perspective (Question 9) is governing risk without governing strategy.

The 10 questions are designed to be used as a working tool, not a one-time assessment. Boards that revisit these questions quarterly will find their answers improving, their governance gaps narrowing, and their capacity for informed AI oversight increasing.

What The Thinking Company Recommends

These ten questions form the foundation of board AI oversight. We help boards develop the capability to ask them, interpret the answers, and act on the findings.

  • AI Governance Setup (EUR 10–15K): Establish board-level AI oversight structures, governance frameworks, and reporting cadences tailored to your organization’s AI maturity and regulatory exposure.
  • AI Strategy Workshop (EUR 5–10K): A focused board session on AI governance fundamentals, covering risk classification, oversight design, and the board’s role in AI strategy.

Learn more about our approach →

Frequently Asked Questions

How often should a board revisit these 10 AI governance questions?

Quarterly is the recommended cadence. The AI governance landscape changes materially every quarter — new EU AI Act enforcement milestones, organizational AI deployment changes, and evolving risk profiles all require updated answers. Boards at Stage 1-2 maturity should prioritize Questions 1-3 (inventory, risk classification, performance monitoring) as their initial focus, revisiting all 10 questions once baseline governance is established. At Stage 3+, boards can distribute questions across the quarterly calendar: foundation questions (1-5) in Q1-Q2, strategic questions (6-10) in Q3-Q4. The Thinking Company’s oversight calendar framework provides a structured quarterly cadence for working through these questions systematically. [Source: Based on professional judgment, The Thinking Company advisory experience]

Which of the 10 questions should a board prioritize if it is starting from scratch?

Start with Questions 1, 2, and 7 — AI inventory, EU AI Act risk classification, and accountability. These three questions address the most immediate governance gaps: you cannot govern what you have not inventoried (Question 1), the EU AI Act creates enforceable obligations with penalties up to EUR 35 million (Question 2), and governance without clear accountability produces governance without follow-through (Question 7). Once these foundations are in place, add Questions 4 (risk appetite) and 6 (incident response) to build the risk management layer. Strategic questions (8, 9, 10) become priorities as the board matures from compliance-oriented to strategic governance. [Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0]

Can these questions serve as a formal board AI governance assessment?

These questions are designed as a structured self-assessment tool, not a replacement for independent governance evaluation. Self-assessment reveals governance gaps the board recognizes, but independent assessment identifies gaps the board does not see — including overestimation of governance maturity, which is common at Stage 2-3 boards. The Thinking Company’s Board AI Governance Session ($6,500 / 25,000 PLN) uses these questions as a starting framework, supplemented by independent evaluation against governance maturity benchmarks and cross-organizational comparison data. For boards that want formal assessment, independent facilitation adds objectivity that self-assessment cannot provide. [Source: Based on professional judgment, The Thinking Company advisory experience]

How do these questions relate to the EU AI Act’s specific requirements for deployers?

Questions 1, 2, 3, and 5 map directly to EU AI Act deployer obligations. Question 1 (AI inventory) addresses Article 26’s requirement that deployers know what they deploy. Question 2 (risk classification) maps to Articles 6-7 and Annex III classification requirements. Question 3 (performance monitoring) relates to Article 26’s requirement for monitoring system performance in production. Question 5 (data governance) connects to Article 10’s data quality requirements for high-risk systems. The remaining questions address governance infrastructure that the EU AI Act implicitly requires but does not prescribe — risk appetite, incident response, accountability chains, and independent oversight. [Source: EU AI Act, Regulation (EU) 2024/1689]

What is the difference between this checklist and the Board AI Governance Maturity Model?

This article provides 10 specific questions boards can ask today to identify governance gaps. The Board AI Governance Maturity Model provides a five-stage framework for assessing overall governance capability and planning progression over 12-36 months. The 10 questions are diagnostic — they reveal your current state. The maturity model is developmental — it maps where you should go and how to get there. A board that works through these 10 questions will naturally identify which maturity stage it occupies. A board that has mapped its maturity stage can use these questions to verify its self-assessment with evidence. The two frameworks are complementary: questions for assessment, maturity model for progression planning. [Source: The Thinking Company Board AI Governance Maturity Model, v1.0]


From Questions to Governance

These 10 questions are a starting point. Knowing the questions is not the same as governing effectively. A board that reads this article and distributes it to directors has taken a first step. A board that works through each question with structured facilitation, honest self-assessment, and a plan for closing identified gaps has taken a governance step.

The Thinking Company’s Board AI Governance Session ($6,500 / 25,000 PLN) is designed for boards that want to work through these 10 questions with independent facilitation. The session walks the board through each question, assesses current answers against governance maturity benchmarks, identifies the specific gaps, and produces a prioritized action plan for closing them. Boards that complete the session leave with a baseline governance assessment and a 90-day roadmap for the highest-priority improvements.

For boards that have already answered most of these questions and want to formalize their governance architecture, The Thinking Company offers AI Governance and Risk Framework engagements that build the committee structures, reporting cadences, and accountability chains that sustain board AI oversight beyond any single session.

The questions are public. The framework is open. What boards do with them determines whether AI governance becomes a capability or remains an aspiration.




This article was last updated on 2026-03-11. Part of The Thinking Company’s Board AI Governance content series. For a personalized assessment, contact our team.