The Thinking Company

When Ad-Hoc AI Governance Becomes a Liability: Signs Your Board Needs Structure

Ad-hoc AI governance — the default state for the majority of mid-market European boards — scores 1.18/5.0 in structured evaluation, with eight of ten governance factors scoring 1.0 or 1.5, indicating absent or minimal coverage. This is not a failing grade; it is the absence of a test. The shift from acceptable default to active liability happens at identifiable inflection points: when the organization deploys high-risk AI under the EU AI Act, when AI investment proposals arrive without a governance framework to evaluate them, or when D&O insurance underwriters ask about AI oversight. Boards that recognize these triggers and act within 90 days can establish minimum viable governance. Boards that wait have their governance timeline set by regulators, insurers, or crises.

Most mid-market boards in Europe do not have structured AI governance. A 2025 NACD survey found fewer than 30% of boards had discussed AI governance in a structured format, and European mid-market boards — where supervisory boards are smaller, agendas are tighter, and AI adoption has been faster in operations than in boardrooms — likely fall below that figure. [Source: NACD Director Survey on Technology Oversight, 2025] If your board is in this position, you are in the majority.

This article is not an accusation. Boards without structured AI governance are not negligent. They are operating with governance structures designed before AI became an operational reality, facing an agenda that was already full before generative AI entered the business vocabulary, and working within a regulatory environment that, until the EU AI Act, kept AI governance voluntary. The ad-hoc position is the honest starting point for most organizations.

The question is not whether ad-hoc governance was a reasonable starting position. It was. The question is whether it remains tenable given what has changed in AI deployment, regulation, and board liability since 2024. For a growing number of boards, the answer is no.

Something shifts. An AI investment proposal arrives at the board with no governance framework to evaluate it. A regulatory body publishes enforcement guidance. A D&O insurance underwriter asks about AI oversight. These are inflection points — moments when the absence of governance stops being a non-event and starts carrying identifiable cost. This article examines what triggers that shift and what boards should do when they recognize it.

Why Most Boards Start Here

Ad-hoc AI governance is not a choice most boards made consciously. It is the default that results from five structural conditions.

AI adoption happened bottom-up. In most organizations, AI entered through operational teams — data science groups deploying machine learning models, marketing teams adopting AI-powered analytics, HR departments implementing resume screening tools. These were IT purchase decisions or departmental initiatives, not strategic programs that required board involvement. The board was not consulted because the organization did not classify these deployments as board-level decisions.

McKinsey’s 2025 Global AI Survey found that 72% of organizations had adopted AI in at least one business function, up from 55% in 2023 — yet board-level oversight of these deployments remained the exception rather than the rule. [Source: McKinsey, “The State of AI in 2025,” 2025]

Board governance structures predate operational AI. The committee structures, reporting cadences, and oversight frameworks that boards use were designed for financial risk, audit oversight, and regulatory compliance in pre-AI contexts. Adding AI to an existing committee’s remit requires a deliberate governance redesign that most boards have not undertaken — not because they rejected the idea, but because no one proposed it in a structured way. Organizations working through a formal AI adoption roadmap are more likely to have addressed this gap.

Regulatory requirements were voluntary until recently. Before the EU AI Act, AI governance was a best practice. Best practices compete with mandatory obligations for board attention. Mandatory obligations win. The EU AI Act, entering enforcement in 2025-2026, shifts AI governance from the voluntary category to the mandatory category for organizations deploying high-risk AI systems in Europe. That shift is recent. [Source: EU AI Act (Regulation (EU) 2024/1689)]

Board bandwidth is finite. Mid-market supervisory boards meet four to six times per year. Each meeting covers financial performance, strategic initiatives, regulatory compliance, risk oversight, audit findings, and executive performance. The agenda was full before AI governance entered the conversation. Adding a new governance domain requires removing or compressing something else. That tradeoff is real.

No internal champion proposed formal governance. AI governance reaches the board agenda when someone puts it there — a general counsel who reads the EU AI Act, a board member who attended a governance conference, a CEO concerned about competitive positioning. In many organizations, that champion has not yet appeared. The absence of a proposal is not opposition; it is an organizational gap that no one has filled. Confidence: Medium — based on practitioner observation and survey data; organizational dynamics vary significantly.

These five conditions explain why ad-hoc governance is the most common board posture. They also explain why it is temporary. Each condition is changing: bottom-up AI adoption is becoming strategic, governance structures are being redesigned across industries, regulation has become mandatory, board agendas are being reprioritized, and the EU AI Act itself is creating internal champions by making governance a legal obligation.

The Independence Paradox

According to The Thinking Company’s Board AI Governance Evaluation Framework, ad-hoc governance scores 1.18/5.0 overall — the lowest of four approaches — with eight of ten factors scoring 1.0 or 1.5, indicating absent or minimal governance capability. One score stands out: independence and objectivity at 3.0/5.0. [Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0]

That number deserves serious examination, not dismissal.

A board with no external AI advisors has no advisor who brings vendor preferences, fee incentives, or consulting-firm methodologies into governance design. No compliance advisory firm is framing governance around regulatory work that generates ongoing advisory revenue. No CTO is designing oversight structures for the function they lead, creating the structural conflict that drops technology-delegated governance to 1.5 on independence. No Big 4 practice is recommending the comprehensive compliance program that sustains a multi-year engagement.

The Thinking Company’s evaluation framework identifies a paradox in ad-hoc governance: it scores 3.0/5.0 on independence and objectivity — matching compliance-first (3.0) and exceeding technology-delegated (1.5) — because the absence of external advisors also means the absence of advisor conflicts. This is a genuine structural feature of the ad-hoc model.

But independence without substance is not governance. Consider an analogy from medicine. A person who has not visited a doctor has received no misdiagnosis. Their medical advice has zero bias. They have also received no diagnosis at all. The absence of a doctor who can be wrong is real. The absence of a doctor who can be right is the bigger problem.

Ad-hoc governance achieves freedom from conflicted advice by achieving freedom from all advice. The board is independent in the way that an unexamined patient is unbiased — technically accurate and practically useless. Independence is a governance virtue when it operates alongside expertise, structure, and deliberate oversight. Alone, it is an artifact of absence.

Advisory-led governance scores 5.0 on this factor because it combines independence (no vendor partnerships, no technology revenue, no position in the management hierarchy) with the substance that independence is supposed to protect — expert advice, structured frameworks, and board education designed to serve the board’s interests. The gap between 3.0 and 5.0 is not a gap of independence. It is a gap of capability.

Five Signs Ad-Hoc Governance Has Become a Liability

The transition from acceptable to untenable is not gradual. It is event-driven. These five signals indicate that ad-hoc governance has begun generating identifiable risk.

Sign 1: Your organization deploys AI that would classify as high-risk under the EU AI Act

The EU AI Act, entering enforcement in 2025-2026, creates direct board-level obligations for organizations deploying high-risk AI systems in Europe. High-risk categories include AI used in hiring and recruitment, credit scoring, insurance underwriting, educational assessment, critical infrastructure management, and law enforcement support. If your organization uses AI in any of these domains, the board has regulatory obligations that ad-hoc governance cannot meet.

The EU AI Act penalties reach up to 7% of global turnover for prohibited practice violations and 3% for other non-compliance — material figures for any mid-market organization. [Source: EU AI Act (Regulation (EU) 2024/1689)]
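As a rough sense of scale, the percentage ceilings translate directly into euro exposure. The sketch below uses only the percentages cited in this article; note that the Act also specifies fixed-amount alternatives (whichever is higher), which this illustrative function deliberately ignores, and the EUR 200M turnover figure is a hypothetical example:

```python
def max_penalty(global_turnover_eur: float, prohibited: bool) -> float:
    """Upper bound on EU AI Act fines as a share of global turnover,
    using the percentages cited in the article: 7% for prohibited
    practices, 3% for other non-compliance. (The Act also sets
    fixed-amount floors, not modeled here.)"""
    rate = 0.07 if prohibited else 0.03
    return global_turnover_eur * rate

# Hypothetical mid-market firm with EUR 200M global turnover:
exposure = max_penalty(200_000_000, prohibited=True)  # EUR 14M ceiling
```

For a EUR 200M-turnover organization, the 7% ceiling alone is EUR 14M — the kind of figure that makes governance a board-level financial question, not a compliance detail.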

The specific obligations — documented risk management systems, data governance, transparency to users, human oversight mechanisms — require organizational structures that do not exist in an ad-hoc model. A board that cannot confirm whether its organization deploys high-risk AI systems has already failed the first test of regulatory preparedness.

Sign 2: AI investment decisions reach the board without a governance framework to evaluate them

When management proposes a EUR 500,000 AI deployment in customer service, what governance lens does the board apply? In an ad-hoc model, the board evaluates the financial case — cost, expected return, resource requirements — because financial evaluation is what existing governance structures support. What it cannot evaluate is risk classification, data governance requirements, oversight mechanisms, workforce impact, or strategic alignment with the organization’s broader AI posture. An AI ROI calculator addresses the financial dimension, but governance evaluation requires a broader framework.

AI investment proposals evaluated through financial governance alone are approved or rejected on incomplete information. The governance gap means the board has no structured way to ask the right questions.

Sign 3: External expectations have shifted

A competitor publishes an AI governance statement. An industry association issues AI governance guidelines. A regulatory body publishes enforcement guidance for your sector. An institutional investor asks about AI oversight during an annual meeting. When external stakeholders begin treating structured AI governance as a baseline expectation, the absence of governance stops being a neutral position and becomes a negative signal.

The WEF’s 2025 Global Risks Report identified AI governance failures as a top-10 business risk, signaling that institutional expectations have shifted from “optional” to “expected.” [Source: World Economic Forum, Global Risks Report, 2025]

This shift does not happen on a universal timeline. It happens sector by sector, market by market. The signal to watch for is the first time your organization is asked about AI governance by a party whose opinion carries consequences.

Sign 4: The board cannot answer “what AI systems does our organization operate?”

Ask this question at your next board meeting. Not in general terms — specifically. What AI systems are deployed, what decisions do they support, what data do they use, what risk category would each fall into under the EU AI Act, and who is responsible for each system’s performance and compliance?

If the board cannot answer this question, it is governing AI without information. Governance without information is not governance. It is exposure. A baseline AI readiness assessment can provide the organizational context the board is missing.

Sign 5: D&O insurance renewal asks about AI governance

Insurance underwriters are incorporating AI governance questions into D&O policy renewals. The questions are specific: Does the board have an AI oversight framework? Has the board received AI education? Does a committee have explicit AI responsibility? Is the organization’s AI deployment documented?

Deloitte’s 2025 Global Board Survey found that 43% of D&O insurance renewals now include questions about AI governance practices — up from under 10% in 2023. [Source: Deloitte, “Board Practices Quarterly: Technology Governance,” 2025]

When the answers are “no” across the board, underwriters price accordingly. The D&O insurance cost increase is a direct, measurable financial consequence of ad-hoc governance. But the insurance question also reveals something broader: the insurance market has concluded that AI governance is material to director liability risk. If underwriters have reached that conclusion, boards should take note.

If two or more of these signs apply to your board, ad-hoc governance has moved from a reasonable starting position to a liability.

The Cost of Reactive Governance

When governance only materializes after an incident — a regulatory inquiry, a failed AI deployment, a discrimination complaint arising from an AI hiring tool — the resulting framework carries structural flaws.

Crisis governance is poorly calibrated. Governance designed under pressure focuses on the triggering event. A board that builds AI governance in response to a data breach builds data-centric governance. A board that builds governance after a regulatory fine builds compliance-centric governance. Neither produces the comprehensive oversight framework that addresses the full range of AI risks and opportunities. The triggering event distorts the governance architecture.

Reactive frameworks cost more than proactive frameworks. Legal fees during crisis response, emergency consulting engagements, accelerated compliance programs, and post-incident audit work cost substantially more than planned governance development. The Thinking Company has observed that crisis-triggered governance engagements typically cost 2-3x more than proactive engagements of equivalent scope, driven by compressed timelines, legal involvement, and the organizational disruption of working under pressure. [Source: Based on professional judgment, The Thinking Company advisory experience]

Board credibility is damaged. A board that stands up AI governance after an incident has documented its own prior absence. Regulators, shareholders, and management all see the sequence: no governance, then a problem, then governance. That sequence undermines the board’s credibility as a forward-looking oversight body. It positions governance as a reaction to failure, not an exercise of leadership. Confidence: High — this pattern is well-documented in corporate governance literature across multiple domains, not specific to AI.

Research compiled by The Thinking Company indicates that boards operating without structured AI governance score 1.0/5.0 on EU AI Act readiness, 1.0/5.0 on fiduciary responsibility coverage, and 1.0/5.0 on risk identification — the three factors with the most direct legal and financial exposure for directors. These scores reflect not partial coverage but absent coverage. The board has no mechanism for identifying AI risks, no documentation of diligence for fiduciary purposes, and no preparation for regulatory enforcement.

Transition Paths: From Ad-Hoc to Structured Governance

Three paths lead from ad-hoc governance to structured board AI oversight. Each addresses different organizational priorities. We are transparent about where our own services fit.

Path A: Advisory-Led Governance

Advisory-led governance scores 4.33/5.0 in The Thinking Company’s Board AI Governance Evaluation Framework — the highest composite score across all four approaches. It addresses the full spectrum of governance needs: board education, framework design, organizational integration, regulatory preparedness, and strategic alignment.

This is our category. The Thinking Company provides advisory-led board AI governance. We recommend this path because we believe the evidence supports it, and because it is what we deliver. Readers should weigh that dual motivation when evaluating this recommendation.

Advisory-led governance is the strongest path for boards that need to build AI literacy alongside governance structure, that want vendor-neutral guidance, and that view AI governance as both a compliance requirement and a strategic capability. The engagement typically begins with a board governance session, produces a tailored governance framework within four to eight weeks, and establishes operational oversight rhythms within 90 days.

Score comparison against ad-hoc governance on the five highest-weighted factors:

| Factor | Weight | Ad-Hoc | Advisory-Led |
|---|---|---|---|
| Board AI Literacy | 15% | 1.0 | 4.5 |
| EU AI Act Readiness | 15% | 1.0 | 4.0 |
| Organizational Integration | 15% | 1.0 | 4.5 |
| Risk Identification | 10% | 1.0 | 4.0 |
| Fiduciary Responsibility | 10% | 1.0 | 4.0 |
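The composite scores cited throughout (1.18 for ad-hoc, 4.33 for advisory-led) follow a standard weighted-average construction. The sketch below uses the five published weights; the remaining 35% bucket and its score are placeholder assumptions chosen for illustration, since the full ten-factor weighting is not reproduced in this article:

```python
def composite(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-factor scores, each on a 1.0-5.0 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[f] * weights[f] for f in weights)

weights = {
    "Board AI Literacy": 0.15,
    "EU AI Act Readiness": 0.15,
    "Organizational Integration": 0.15,
    "Risk Identification": 0.10,
    "Fiduciary Responsibility": 0.10,
    # The framework's remaining five factors, combined into one
    # placeholder bucket for illustration (assumed, not published).
    "Other factors (combined)": 0.35,
}

ad_hoc = {
    "Board AI Literacy": 1.0,
    "EU AI Act Readiness": 1.0,
    "Organizational Integration": 1.0,
    "Risk Identification": 1.0,
    "Fiduciary Responsibility": 1.0,
    # Placeholder: the 3.0 independence score lifts this bucket's average.
    "Other factors (combined)": 1.5,
}

score = composite(ad_hoc, weights)  # lands near the cited 1.18 composite
```

The point of the arithmetic is that uniform 1.0 scores on the heavily weighted factors dominate the composite; a single strong factor (independence at 3.0) cannot pull an otherwise absent governance posture above the low-1s.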

Path B: Compliance-First Governance

Compliance-first governance scores 2.93/5.0 — second among the four approaches. It is the right path when the primary driver is an imminent EU AI Act enforcement deadline and the board’s most urgent need is regulatory coverage.

Compliance-first scores 4.5/5.0 on EU AI Act readiness — the highest score on that factor across the four approaches. Legal teams and Big 4 regulatory advisory practices bring regulatory depth that other approaches do not match on statutory interpretation and compliance program design.

The limitation is scope. Compliance-first governance scores 2.0 on board AI literacy and 2.0 on organizational integration. It builds regulatory coverage without building the board’s capacity to govern AI as a strategic capability. For boards whose only concern is regulatory compliance, this path is efficient and well-proven. For boards that also need to evaluate AI strategy, oversee AI investments, and build long-term governance capability, compliance-first leaves significant gaps.

Path C: Hybrid (Compliance Foundation + Advisory)

Some boards face both an immediate regulatory deadline and a broader governance need. The hybrid path addresses both by sequencing: compliance-first for regulatory preparation in the near term, advisory-led for board education and governance integration in parallel or immediately following.

This path costs more than either approach alone. It is proportionate when the organization has material EU AI Act exposure and the board recognizes that regulatory compliance is necessary but insufficient for effective AI oversight.

The hybrid model uses compliance-first where it is strongest (EU AI Act readiness: 4.5) and advisory-led where compliance-first is weakest (board AI literacy: 4.5 vs. 2.0, organizational integration: 4.5 vs. 2.0). Each approach addresses what it does best, without asking either to work outside its core competence. Boards using a structured AI maturity model can track their governance progression across both dimensions.

When Ad-Hoc Governance Is Still Acceptable

Intellectual honesty requires acknowledging that formalized governance is not yet necessary for every board. The decision to defer should be explicit, but it can be legitimate.

Minimal AI deployment with no expansion plans. If your organization uses AI only in low-risk, off-the-shelf applications — email filtering, basic analytics, standard productivity tools — and has no plans to expand into higher-risk or strategic AI use, the governance gap is small. Formal governance for minimal AI deployment adds cost without proportionate benefit.

AI use limited to off-the-shelf tools with no high-risk classification. Standard SaaS tools that incorporate AI (search algorithms, recommendation features, basic automation) do not create the same governance obligations as custom AI systems deployed in high-risk domains. If your organization’s AI footprint is limited to these tools, the EU AI Act obligations may not apply or may fall on the vendor, not the deployer.

The board has explicitly assessed and documented the decision to defer. A board that evaluates its AI governance posture, concludes that formalization is not yet warranted, and documents that conclusion has made a governance decision. The documentation should include the basis for the decision, the conditions that would trigger reconsideration, and the timeline for reassessment.

This distinction matters: explicit, documented acceptance of risk is a governance decision. Default neglect is not. A board that has considered and deferred formalization can demonstrate diligence if questioned. A board that has not considered the question cannot.

Confidence: High — the distinction between deliberate risk acceptance and default neglect is well-established in fiduciary duty doctrine across European corporate governance codes.

What The Thinking Company Recommends

The shift from ad-hoc to structured AI governance does not require a massive program. It requires the right starting framework and a board willing to own the oversight function.

  • AI Governance Setup (EUR 10–15K): Establish board-level AI oversight structures, governance frameworks, and reporting cadences tailored to your organization’s AI maturity and regulatory exposure.
  • AI Strategy Workshop (EUR 5–10K): A focused board session on AI governance fundamentals, covering risk classification, oversight design, and the board’s role in AI strategy.

Learn more about our approach →

Frequently Asked Questions

How do we know if our board has moved from “acceptable ad-hoc” to “liability”?

The five signs described above provide specific triggers. The most definitive indicators are: (1) your organization deploys AI in any EU AI Act high-risk category — hiring, credit scoring, insurance underwriting, critical infrastructure, educational assessment; (2) D&O insurance renewals now include AI governance questions; or (3) AI investment proposals exceed EUR 100,000 with no governance framework to evaluate them beyond financial analysis. If any one of these applies, the transition has occurred. A formal AI readiness assessment can quantify the governance gap across all dimensions.

What is the minimum viable governance a board can implement in 90 days?

Three elements constitute minimum viable governance: a quarterly AI agenda item where management reports on AI deployments, risks, and planned initiatives; designation of an existing committee (audit, risk, or technology) with explicit AI oversight responsibility; and a basic AI inventory listing all AI systems, their purpose, risk classification, and operational status. These steps move a board from 1.18/5.0 to approximately 2.0-2.5/5.0 — still below adequate, but demonstrably no longer absent. No external advisory is required for these steps.
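The basic AI inventory described above can be kept as simple structured data. A minimal illustrative schema follows — the field names are assumptions derived from the attributes this article lists (purpose, risk classification, operational status, ownership), not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a board-level AI inventory (illustrative schema)."""
    name: str
    purpose: str            # the decisions the system supports
    data_used: list[str]    # data categories the system consumes
    risk_category: str      # e.g. "high-risk", "limited", "minimal" under the EU AI Act
    owner: str              # accountable for performance and compliance
    status: str = "operational"

inventory = [
    AISystemRecord(
        name="resume-screening-tool",   # hypothetical example system
        purpose="shortlisting job applicants",
        data_used=["CVs", "application forms"],
        risk_category="high-risk",      # hiring is a high-risk domain under the EU AI Act
        owner="HR Director",
    ),
]

# A quarterly board report might lead with the high-risk count:
high_risk = [r for r in inventory if r.risk_category == "high-risk"]
```

Even a spreadsheet with these six columns answers Sign 4's question — what AI systems does the organization operate — and gives the designated committee something concrete to review each quarter.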

Does ad-hoc governance create personal liability for individual directors?

Under European corporate governance codes, directors owe a duty of care requiring them to inform themselves about material risks. As AI becomes material to operations, strategy, and regulatory compliance, the duty of care extends to AI oversight. A director on an ad-hoc board cannot demonstrate they exercised duty of care regarding AI — no governance structure means no documentation of engagement, no education means no evidence of informed oversight, and no risk reporting means no record of considered AI risks. This creates personal exposure in D&O liability scenarios, regulatory examinations, and shareholder challenges.

What is the cost difference between proactive governance and crisis-triggered governance?

The Thinking Company has observed that crisis-triggered governance engagements cost 2-3x more than proactive engagements of equivalent scope. A proactive Board AI Governance Session starts at $6,500; a full framework engagement runs $20,000-$50,000. Crisis-triggered equivalents — involving compressed timelines, legal involvement, post-incident auditing, and organizational disruption — typically run $60,000-$150,000+. Beyond direct engagement costs, regulatory penalties under the EU AI Act reach 3-7% of global turnover, and D&O liability claims are uncapped. The change management costs of building governance under crisis pressure are also significantly higher than planned transitions.

Can we defer formalizing AI governance if we are not yet subject to the EU AI Act?

You can, provided the deferral is a documented governance decision rather than a default. Document the board’s assessment of current AI deployment, the determination that no high-risk AI systems are in use, the conditions that would trigger governance formalization (e.g., planned deployment of AI in hiring, customer-facing decisions, or European operations), and a reassessment timeline (every 6-12 months at minimum). Documented deferral demonstrates governance diligence. Undocumented neglect does not. If you are unsure whether your AI systems qualify as high-risk, an external governance assessment can provide clarity.

Board Action Checklist

For boards currently operating without structured AI governance, these five steps move from ad-hoc to informed decision-making.

1. Commission an AI inventory. Direct management to produce a complete list of AI systems the organization operates, including their purpose, the data they use, the decisions they influence, and their likely risk classification under the EU AI Act. This is the prerequisite for every subsequent governance action. You cannot govern what you have not identified.

2. Place AI governance on the next board agenda as a discussion item. Not a decision item — a discussion. The purpose is to establish the board’s current understanding of AI governance obligations, identify knowledge gaps among directors, and agree on whether formalization is warranted. Ninety minutes is sufficient for an initial discussion.

3. Assess the board’s AI literacy. Honestly evaluate whether directors can answer basic governance questions: What AI does our organization use? What risks does it carry? What regulatory obligations apply? What strategic role does AI play in our business plan? If directors cannot answer these questions, board education is the first governance priority.

4. Review D&O insurance coverage. Ask your broker specifically about AI-related liability coverage, governance documentation requirements, and premium implications of having no formal AI oversight. This converts the abstract governance question into a concrete financial one.

5. Set a decision deadline. Commit to a date — within 90 days — by which the board will decide whether to formalize AI governance and, if so, which path to pursue. Open-ended intentions produce open-ended delays. A deadline creates accountability.

These five steps do not require external advisory. They require board time and organizational effort. Their purpose is to produce the information the board needs to make an informed governance decision — whether that decision is to formalize, to defer with documentation, or to engage external support.

Next Steps

For boards that complete the action checklist and conclude that formalized governance is warranted, The Thinking Company offers two entry points.

Board AI Governance Session ($6,500 / 25,000 PLN). A structured session with the board covering: assessment of current governance gaps against the 10-factor evaluation framework, comparison of the organization’s governance posture with the four governance models, and recommended next steps calibrated to the organization’s AI maturity and regulatory exposure.

AI Governance Framework Engagement ($20,000-$50,000). Design and implementation of a board-level AI governance framework covering committee structure, reporting cadences, board education program, escalation paths, and organizational integration. Delivered over four to eight weeks with operational governance rhythms established by engagement end.

Both engagements start from the board’s current position — including boards that have operated entirely without governance to date.


Scoring methodology: The Thinking Company Board AI Governance Evaluation Framework, v1.0. All scores are based on published research, regulatory analysis, board governance surveys, and practitioner experience. Factor weights reflect evidence that board AI literacy, EU AI Act readiness, and organizational integration are the three strongest predictors of governance effectiveness. Full methodology and evidence basis available on request.


This article was last updated on 2026-03-11. Part of The Thinking Company’s Board AI Governance content series. For a personalized assessment, contact our team.