EU AI Act High-Risk AI Systems: A Board Member’s Classification Guide
The EU AI Act classifies AI systems into four risk tiers — unacceptable, high-risk, limited-risk, and minimal-risk — with high-risk systems listed in Annex III triggering conformity assessment, risk management, human oversight, and transparency requirements enforceable from August 2026. For boards, the critical task is identifying which of the organization’s AI systems fall into the eight Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice), because high-risk classification creates deployer obligations under Articles 26 and 27 that require governance structures reaching the board level. Most mid-market organizations will find their primary exposure in Category 4 (employment) and Category 5 (essential services).
Board members hearing “high-risk AI” tend toward one of two reactions. Some assume the label applies to every AI system the organization touches — triggering disproportionate compliance spending on tools that carry no regulatory obligation. Others assume high-risk classification applies to surveillance states and law enforcement, not to their organization’s HR software or credit scoring module — and do nothing.
Both reactions are wrong. The EU AI Act’s risk classification system is specific, documented, and knowable. A board that understands it can direct governance resources where they are required and avoid wasting resources where they are not. A board that does not understand it will overspend on compliance for minimal-risk systems, underspend on compliance for high-risk systems, or both. The classification question — which of our AI systems are high-risk? — is the first governance question boards must answer. Everything else follows from it. Building a strong AI governance framework starts with this classification exercise.
This article explains the EU AI Act’s risk classification in terms board members can use without a law degree. It maps the eight Annex III high-risk categories, provides a practical decision tree for classifying an organization’s AI portfolio, and identifies what high-risk classification triggers in terms of board-level duties. It draws on Regulation (EU) 2024/1689 and The Thinking Company’s Board AI Governance Evaluation Framework. For the broader picture of board obligations under the EU AI Act, see EU AI Act Board Obligations: What Directors Must Know in 2026. For the complete governance evaluation methodology, see the Board AI Governance Decision Framework.
The Four Risk Tiers
The table below summarizes each tier and what it means for board AI governance.
| Risk Tier | Regulatory Treatment | Examples | Board Action |
|---|---|---|---|
| Unacceptable (Article 5) | Prohibited outright | Social scoring by public authorities; real-time remote biometric identification in public spaces (with limited law enforcement exceptions); AI deploying subliminal techniques beyond a person’s consciousness to materially distort behavior; AI exploiting vulnerabilities of specific groups; emotion recognition in workplaces and educational institutions; untargeted scraping of facial images to build facial recognition databases | Confirm the organization operates no prohibited systems. Prohibitions have been enforceable since February 2025. |
| High-Risk (Articles 6-7, Annex III) | Full compliance requirements: risk management, data governance, documentation, human oversight, transparency, accuracy and robustness | AI in HR screening, credit scoring, biometric identification, critical infrastructure management, education assessment, law enforcement tools | Primary area of board governance. Enforceable from August 2026. |
| Limited Risk (Article 50) | Transparency obligations only | Chatbots (must disclose AI interaction); AI-generated content (must be labeled); emotion recognition (must disclose where not prohibited); deepfakes (must be labeled) | Verify transparency policies exist and are implemented. |
| Minimal Risk | No specific EU AI Act requirements | Internal analytics dashboards, marketing personalization, predictive maintenance (outside critical infrastructure), document processing | Standard corporate governance applies. No AI-Act-specific board action required. |
[Source: EU AI Act, Regulation (EU) 2024/1689, Articles 5-7, 50, Annex III]
For boards, the operational weight falls on the high-risk tier. Unacceptable-risk systems are binary: they are prohibited, and the board confirms they do not exist in the organization. Limited-risk systems require transparency measures that management can implement without board-level governance architecture. Minimal-risk systems need no AI-Act-specific oversight. High-risk systems are where classification creates compliance programs, board oversight duties, and financial exposure for non-compliance. The European Commission estimates that approximately 15% of AI systems deployed in the EU will qualify as high-risk under Annex III — but those 15% concentrate the vast majority of regulatory obligations and potential penalties. [Source: European Commission Impact Assessment, SWD(2021) 84 final]
The Provider-Deployer Distinction
One distinction boards must understand before classifying: the EU AI Act assigns different obligations to providers (organizations that develop or place AI systems on the market) and deployers (organizations that use AI systems under their authority). Providers face conformity assessment, CE marking, quality management, and post-market monitoring requirements under Articles 16-25. Deployers face risk management oversight, human oversight, transparency, and record-keeping requirements under Articles 26 and 27.
Most mid-market boards sit on the deployer side. The organization bought or licensed the AI system — it did not build it. Deployer obligations are substantial: Article 26 requires technical and organizational measures, human oversight by competent individuals, monitoring of system operation, and reporting of serious incidents. Article 27 requires a fundamental rights impact assessment before deploying certain high-risk systems. These obligations require governance structures. They reach the board. Organizations at earlier stages of AI deployment benefit from an AI readiness assessment that maps existing capabilities against these deployer obligations.
Annex III: The Eight High-Risk Categories
Article 6(2) of the EU AI Act designates AI systems in specific areas listed in Annex III as high-risk. These are standalone AI systems — not components of products covered by other EU safety legislation. For each category, this section explains what qualifies, how likely a mid-market organization is to operate such a system, and what the board should watch for.
1. Biometric Identification and Categorisation
What qualifies: Remote biometric identification systems (non-real-time); biometric categorisation systems that classify individuals by sensitive attributes (race, political opinions, trade union membership, religious beliefs, sex life, sexual orientation); emotion recognition systems not covered by the Article 5 prohibitions.
Board relevance for mid-market organizations: Low to moderate. Most mid-market companies do not operate biometric identification or categorisation systems. Exceptions exist: organizations using facial recognition for facility access control, employee identity verification, or customer authentication. Note that the Act excludes pure one-to-one verification whose sole purpose is to confirm a claimed identity; any biometric processing beyond that, particularly one-to-many identification, warrants review under this category.
Watch for: Biometric capabilities embedded in third-party security systems or identity verification platforms that the organization deploys without recognizing the classification implications.
2. Critical Infrastructure Management
What qualifies: AI systems used as safety components in the management and operation of road traffic, water supply, gas supply, heating supply, electricity supply, digital infrastructure, and related critical systems.
Board relevance for mid-market organizations: Low unless sector-specific. Utilities, energy companies, transport operators, and digital infrastructure providers face direct exposure. Organizations outside these sectors are unlikely to operate AI classified under this category.
Watch for: AI-powered building management systems, energy optimization tools, or network management platforms that could qualify if the organization manages infrastructure falling within the defined scope.
3. Education and Vocational Training
What qualifies: AI systems that determine access to educational institutions; systems that evaluate learning outcomes; systems that assess the appropriate level of education an individual will receive; systems that monitor student behavior during examinations (e.g., proctoring AI).
Board relevance for mid-market organizations: Low unless education-sector. Educational institutions, training providers, and EdTech companies have direct exposure. Corporate organizations using AI in internal training programs — automated assessment of employee certifications, AI-driven learning path recommendations — should review whether their systems fall within scope.
Watch for: AI-powered Learning Management Systems (LMS) that go beyond content delivery to assess competence or determine training eligibility.
4. Employment and Worker Management
What qualifies: AI systems for recruitment and selection (CV screening, candidate filtering, application ranking); AI for promotion and termination decisions; AI for task allocation based on individual behavior, traits, or personal characteristics; AI for performance monitoring and evaluation of workers.
Board relevance for mid-market organizations: High. This is the Annex III category most likely to affect mid-market organizations. High confidence: Most mid-market organizations deploying AI in HR screening, candidate assessment, or workforce analytics operate at least one high-risk AI system under this classification. CV screening tools, automated candidate ranking, AI-assisted performance review platforms, and algorithmic task allocation systems all qualify. A 2024 study by the European Parliament Research Service found that 56% of large employers in the EU already use some form of AI in recruitment, with adoption growing at 30% year-over-year among mid-market firms. [Source: European Parliament Research Service, AI in Employment, 2024]
Watch for: AI features embedded within broader HR platforms (Workday, SAP SuccessFactors, BambooHR) that perform candidate filtering or performance scoring. The organization may not have purchased “an AI system” — it purchased HR software that contains high-risk AI components. The deployer obligation applies regardless of how the AI capability was acquired. Understanding your organization’s AI maturity model position helps boards anticipate where embedded AI features may create hidden high-risk exposure.
5. Access to Essential Private and Public Services
What qualifies: AI systems used to evaluate creditworthiness or establish credit scores; AI for risk assessment and pricing in life and health insurance; AI for evaluating and classifying emergency calls (dispatch prioritisation); AI used to assess eligibility for public assistance benefits, services, or to grant, reduce, revoke, or reclaim such benefits.
Board relevance for mid-market organizations: High for financial services and insurance. Any organization that uses AI in lending decisions, credit risk assessment, insurance underwriting, or insurance pricing operates a high-risk system under this category. Organizations outside financial services that use AI-powered credit checks on customers or suppliers should also review their classification.
Watch for: AI-driven customer risk scoring, automated credit limit decisions, and dynamic insurance premium calculations. These are common in financial services and increasingly present in B2B contexts where organizations assess counterparty risk.
6. Law Enforcement
What qualifies: AI used as polygraph-like tools or for detecting emotional states; AI for assessing the risk of offending or reoffending; AI for assessing the reliability of evidence; AI for profiling individuals in the course of detection, investigation, or prosecution of criminal offences.
Board relevance for mid-market organizations: Very low. Unless the organization provides technology to law enforcement agencies, this category is unlikely to apply. Organizations selling AI tools to police, prosecution services, or intelligence agencies should classify their products under this category.
7. Migration, Asylum, and Border Control
What qualifies: AI for assessing risk of irregular migration; AI used as polygraph-like tools for migration interviews; AI for document authenticity verification in the migration context; AI for processing and assessing applications for asylum, visa, or residence permits.
Board relevance for mid-market organizations: Very low. This category applies to government agencies and their technology suppliers. Mid-market organizations are unlikely to have exposure unless they provide services to immigration or border control authorities.
8. Administration of Justice and Democratic Processes
What qualifies: AI systems used by judicial authorities to research and interpret facts and law; AI applied to dispute resolution; AI intended to influence the outcome of elections or referenda (excluding organizational tools).
Board relevance for mid-market organizations: Very low. LegalTech companies providing AI tools for legal research, case analysis, or dispute resolution have direct exposure. Most other organizations do not.
Self-Assessment: Where Does Your Organization Have Exposure?
The Thinking Company advises boards to classify their AI portfolio before selecting a governance approach, because the risk classification determines whether the organization needs full high-risk compliance (Articles 9-15 for providers, Articles 26-27 for deployers), limited-risk transparency measures (Article 50), or minimal governance beyond standard oversight. An AI readiness assessment can accelerate this classification by mapping the organization’s full technology landscape.
Use this checklist to identify likely high-risk exposure.
| AI Application | Likely Classification | Indicator / Required Action |
|---|---|---|
| CV screening or candidate ranking | High-risk (Annex III, Category 4) | Any AI in recruitment workflow |
| Automated performance evaluation | High-risk (Annex III, Category 4) | AI scoring employee performance |
| Task allocation based on worker traits | High-risk (Annex III, Category 4) | Algorithmic scheduling or task assignment |
| Credit scoring or creditworthiness | High-risk (Annex III, Category 5) | AI in lending, credit, or risk assessment |
| Insurance risk assessment or pricing | High-risk (Annex III, Category 5) | AI in underwriting or premium calculation |
| Biometric identification or verification | High-risk (Annex III, Category 1) | Facial recognition, biometric access |
| Customer service chatbot | Limited-risk (Article 50) | Disclose AI interaction to users |
| Internal analytics dashboard | Minimal-risk | No AI Act requirements |
| Marketing personalization engine | Minimal-risk (unless targeting vulnerable groups) | Review for Article 5 if targeting by age or disability |
| Predictive maintenance | Depends on context | High-risk if safety component of critical infrastructure; minimal-risk otherwise |
| Document processing / OCR | Minimal-risk (unless in high-risk domain) | Review if used in legal, judicial, or immigration contexts |
| AI-generated content tools | Limited-risk (Article 50) | Label outputs as AI-generated |
Most mid-market organizations deploying AI in HR screening, credit assessment, or customer risk scoring operate at least one high-risk AI system under EU AI Act Annex III — a classification that triggers board-level oversight obligations regardless of whether the board has established AI governance. If your organization uses AI in employment decisions or financial risk assessment, the classification question is not whether you have a high-risk system. It is how many.
The Classification Decision Tree
Boards do not need to perform classification themselves. They need to verify that management has performed it rigorously and documented it for regulatory review. The following five-step process is what the board should expect management to execute and report on; a code sketch of the full flow appears after Step 5.
Step 1: Inventory all AI systems in use. This includes vendor-provided tools, AI capabilities embedded in enterprise software, internally developed models, and employee-adopted AI applications (shadow AI). An inventory that excludes AI features within SaaS platforms is incomplete. If the organization cannot produce this inventory, classification cannot proceed — and that gap is itself a governance finding the board must address. According to Gartner, the average enterprise uses 2.3 times more AI tools than its IT department tracks, a ratio likely to be higher in mid-market organizations with less centralized procurement. [Source: Gartner, AI in the Enterprise Survey, 2025]
Step 2: For each system, determine the classification pathway. Two routes lead to high-risk classification:
- Article 6(1): The AI system is a safety component of a product that requires a conformity assessment under existing EU product safety legislation (medical devices, aviation, automotive, machinery, toys, marine equipment, rail systems, civil aviation security). This pathway applies to product manufacturers.
- Article 6(2) + Annex III: The AI system operates in one of the eight Annex III areas listed above. This pathway applies to most deployers.
Step 3: If high-risk, identify the specific Annex III category. The category determines precisely which obligations apply. Employment AI (Category 4) and essential services AI (Category 5) are the most common classifications for mid-market deployers.
Step 4: Determine provider vs. deployer status. For each high-risk system, determine whether the organization is the provider (built or placed the system on the market) or the deployer (uses the system under its authority). Most mid-market organizations are deployers. The deployer obligations under Articles 26 and 27 are the relevant requirements.
Step 5: Map requirements and assign accountability. For each high-risk system, map the applicable requirements (see next section) and assign a named individual or function responsible for compliance. Report classification results and accountability assignments to the board or responsible board committee. Effective AI change management ensures that these accountability assignments translate into operational practice rather than remaining on paper.
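To make the five steps concrete, the sketch below shows one way management might encode the classification flow in an AI inventory register. It is a minimal illustration in Python under stated assumptions: the class names, category labels, and the classify function are invented for this example and do not constitute a legal determination. A real register would be populated and validated with legal counsel.

```python
# Minimal sketch of the five-step classification flow described above.
# All names and labels are illustrative, not a legal determination.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Role(Enum):
    PROVIDER = "provider"   # built the system or placed it on the market
    DEPLOYER = "deployer"   # uses the system under its own authority

ANNEX_III_AREAS = {
    1: "Biometrics", 2: "Critical infrastructure", 3: "Education",
    4: "Employment", 5: "Essential services", 6: "Law enforcement",
    7: "Migration", 8: "Justice and democratic processes",
}

@dataclass
class AISystem:
    name: str
    is_product_safety_component: bool        # Article 6(1) pathway
    annex_iii_area: Optional[int]            # Article 6(2) pathway: 1-8, or None
    role: Role
    accountable_owner: Optional[str] = None  # Step 5: named individual or function

def classify(system: AISystem) -> dict:
    """Steps 2-4: determine pathway, risk tier, category, and obligation set."""
    if system.is_product_safety_component:
        return {"tier": "high-risk", "pathway": "Article 6(1)",
                "category": None, "obligations": "product-safety regime"}
    if system.annex_iii_area is not None:
        obligations = ("Articles 16-25" if system.role is Role.PROVIDER
                       else "Articles 26-27")
        return {"tier": "high-risk", "pathway": "Article 6(2) + Annex III",
                "category": ANNEX_III_AREAS[system.annex_iii_area],
                "obligations": obligations}
    # Everything else still needs a limited-vs-minimal review (Article 50).
    return {"tier": "review: limited or minimal risk", "pathway": None,
            "category": None, "obligations": "transparency review (Article 50)"}

# Step 1 produces the inventory; Step 5 flags missing accountability.
inventory = [
    AISystem("CV screening module", False, 4, Role.DEPLOYER, "Head of HR"),
    AISystem("Customer chatbot", False, None, Role.DEPLOYER),
]
for s in inventory:
    result = classify(s)
    if result["tier"] == "high-risk" and s.accountable_owner is None:
        result["finding"] = "no accountable owner assigned"
    print(s.name, "->", result)
```

Running this against the inventory prints the tier, pathway, category, and applicable obligation set for each system, and flags any high-risk system without a named accountable owner — the Step 5 gap boards most often find.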
What High-Risk Classification Triggers
Once an AI system is classified as high-risk, the EU AI Act imposes a set of requirements that are organizational — not just technical. Boards need to understand what these requirements are and verify that management is implementing them. The requirements take effect in August 2026. Non-compliance penalties can reach EUR 15 million or 3% of global annual turnover, whichever is higher, for violations of deployer obligations, and up to EUR 35 million or 7% for prohibited AI practices. [Source: EU AI Act, Article 99]
For Providers (Articles 9-15)
Organizations that develop or place high-risk AI systems on the market must implement:
- Risk management system (Article 9) — continuous, iterative process identifying and mitigating risks throughout the system lifecycle
- Data governance (Article 10) — training, validation, and testing datasets must meet quality criteria; bias examination is required
- Technical documentation (Article 11) — sufficient detail for conformity assessment before the system is placed on the market
- Record-keeping (Article 12) — automatic logging of system operations enabling traceability
- Transparency and information provision (Article 13) — deployers receive information sufficient to interpret system output and use the system appropriately
- Human oversight (Article 14) — design enabling effective human oversight, including the ability to understand, monitor, and override system outputs
- Accuracy, robustness, and cybersecurity (Article 15) — appropriate levels of accuracy, resilience to errors and attacks, and security measures
For Deployers (Articles 26-27)
Organizations that use high-risk AI systems under their authority — the category most mid-market boards fall into — must do the following (a per-system tracking sketch follows this list):
- Implement technical and organizational measures appropriate to the AI system (Article 26(1))
- Assign human oversight to individuals with the necessary competence, training, authority, and support (Article 26(2))
- Ensure input data is relevant and representative for the system’s intended purpose (Article 26(4))
- Monitor the system’s operation based on the provider’s instructions of use (Article 26(5))
- Inform the provider and relevant authorities of serious incidents (Article 26(5))
- Conduct a fundamental rights impact assessment before deploying certain high-risk systems; the obligation applies to deployers that are public bodies or private entities providing public services, and to deployers of credit scoring and insurance risk-assessment AI under Annex III point 5 (Article 27)
- Keep logs generated by the system for a period appropriate to its intended purpose, and for at least six months unless applicable law provides otherwise (Article 26(6))
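This checklist can be tracked per system. The sketch below is an assumed structure for such a record, not an official template: the obligation summaries paraphrase the article text, and the owners and statuses are placeholders.

```python
# Illustrative per-system obligation tracker for Articles 26-27 (deployer side).
# Obligation summaries paraphrase the article text; owners/statuses are examples.
from dataclasses import dataclass, field

@dataclass
class Obligation:
    article: str
    summary: str
    owner: str = "UNASSIGNED"
    status: str = "not started"  # e.g. not started / in progress / evidenced

@dataclass
class DeployerComplianceRecord:
    system_name: str
    obligations: list = field(default_factory=lambda: [
        Obligation("26(1)", "Technical and organisational measures in place"),
        Obligation("26(2)", "Competent human oversight assigned"),
        Obligation("26(4)", "Input data relevant and representative"),
        Obligation("26(5)", "Operation monitored; serious incidents reported"),
        Obligation("26(6)", "Logs retained for at least six months"),
        Obligation("27", "Fundamental rights impact assessment, where applicable"),
    ])

    def open_items(self) -> list:
        """Everything not yet evidenced, for board or committee reporting."""
        return [o for o in self.obligations if o.status != "evidenced"]

record = DeployerComplianceRecord("CV screening module")
record.obligations[1].owner = "HR Operations Lead"
for item in record.open_items():
    print(f"Art. {item.article}: {item.summary} -> {item.owner} ({item.status})")
```

A record like this gives the board a per-system view of which obligations have evidence behind them and which remain unowned.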
What This Means for the Board
The deployer obligations under Articles 26 and 27 are governance obligations. They require organizational decisions about who has human oversight authority, how monitoring will be structured, what training oversight personnel receive, and how incidents escalate to senior management and the board. A board that has not verified that these structures are in place cannot claim to be overseeing compliance. Building these structures into an AI adoption roadmap ensures compliance is integrated into the deployment timeline rather than retrofitted.
According to The Thinking Company’s Board AI Governance Evaluation Framework, compliance-first governance scores 4.5/5.0 on EU AI Act compliance readiness — the highest score on EU AI Act readiness across all four governance approaches — because legal and GRC teams bring deep expertise in risk classification, gap analysis, and compliance program design. The requirement mapping described above is where compliance-first governance excels. Boards that need this mapping done rigorously and under time pressure should recognize the compliance-first approach’s strength on this specific task.
[Source: EU AI Act, Articles 9-15, 26-29; The Thinking Company Board AI Governance Evaluation Framework, v1.0]
Common Mid-Market AI Systems: A Classification Reference
The table below maps common AI applications to their EU AI Act classification. This is a reference starting point — not a substitute for formal legal classification. Organizations should validate classifications with qualified legal counsel. For a broader view of how AI systems map to organizational capability, see the AI maturity model.
| AI System | EU AI Act Classification | Applicable Annex III Category | Key Obligation |
|---|---|---|---|
| CV screening / recruitment AI | High-risk | Category 4: Employment | Articles 26-27 deployer requirements; Article 27 impact assessment where applicable |
| Automated candidate ranking | High-risk | Category 4: Employment | Human oversight with authority to override |
| AI performance monitoring | High-risk | Category 4: Employment | Data governance; transparency to workers |
| Credit scoring | High-risk | Category 5: Essential services | Articles 26-27; fundamental rights impact assessment |
| Insurance risk pricing | High-risk | Category 5: Essential services | Transparency; human oversight; data quality |
| Customer chatbot | Limited-risk | N/A (Article 50) | Disclose AI interaction to users |
| AI content generation tools | Limited-risk | N/A (Article 50) | Label AI-generated content |
| Internal analytics dashboard | Minimal-risk | N/A | No AI Act requirements |
| Marketing personalization | Minimal-risk | N/A | Review Article 5 if targeting by vulnerability |
| Predictive maintenance (non-infrastructure) | Minimal-risk | N/A | No AI Act requirements |
| Predictive maintenance (critical infrastructure) | High-risk | Category 2: Critical infrastructure | Full provider/deployer requirements |
| Document processing / OCR | Minimal-risk | N/A | Review if used in high-risk domain |
| AI-powered legal research | High-risk (if used by judicial authority) | Category 8: Justice | Full requirements if deployed in judicial context |
| Exam proctoring AI | High-risk | Category 3: Education | Human oversight; data governance |
[Source: EU AI Act, Regulation (EU) 2024/1689, Annex III]
How Different Governance Approaches Handle Classification
Risk classification is not a one-time exercise. AI systems change. New systems are acquired. Vendors update their products with new AI capabilities. The governance approach a board chooses determines how well classification is maintained over time.
| Governance Approach | EU AI Act Readiness Score | How It Handles Classification |
|---|---|---|
| Compliance-First | 4.5 / 5.0 | Systematic, thorough, legally grounded. Legal teams classify AI systems against the regulatory text with precision. Gap analysis and documentation are core competencies. Classification is maintained through compliance program cadences — quarterly reviews, annual reassessments, triggered reviews on system changes. This is the approach’s primary strength. |
| Advisory-Led | 4.0 / 5.0 | Connects classification to governance design. Translates risk tiers into board-level oversight priorities — which systems require board reporting, which require committee review, which require management-level governance only. Slightly below compliance-first on granular statutory interpretation but stronger on linking classification to governance action. |
| Technology-Delegated | 1.5 / 5.0 | Technical teams can inventory AI systems and document technical specifications. They lack the legal and regulatory expertise to interpret Annex III categories, assess borderline classifications, or advise the board on deployer obligations. Classification, if it happens, is technically focused and legally incomplete. |
| Ad-Hoc | 1.0 / 5.0 | No classification process exists. The organization does not know which of its AI systems are high-risk. This is the most common mid-market posture — and the most exposed. Medium confidence: Enforcement authorities are expected to prioritize organizations with no classification process over those with documented classification that contains minor errors. An honest attempt at classification is materially better than no attempt. |
[Source: The Thinking Company Board AI Governance Evaluation Framework, v1.0]
What The Thinking Company Recommends
High-risk AI classification under the EU AI Act affects more mid-market organizations than most boards realize. We help boards identify their exposure and build compliant governance.
- AI Governance Setup (EUR 10–15K): EU AI Act compliance framework with board-level oversight structures, risk classification processes, and regulatory documentation aligned to the August 2026 enforcement deadline.
- AI Due Diligence (EUR 15–30K): Comprehensive assessment of AI systems against EU AI Act requirements, including high-risk classification, conformity gap analysis, and remediation roadmap for board review.
Learn more about our approach →
Frequently Asked Questions
What happens if our organization misclassifies an AI system under the EU AI Act?
Misclassification creates two distinct risks. Classifying a high-risk system as minimal-risk leaves the organization non-compliant with Articles 26 and 27 deployer obligations — exposing it to penalties of up to EUR 15 million or 3% of global turnover, whichever is higher. Classifying a minimal-risk system as high-risk wastes compliance resources on obligations that do not apply. Regulatory guidance from the EU AI Office suggests that documented classification rationale — even if ultimately incorrect on a borderline case — demonstrates good-faith compliance effort, which enforcement authorities are expected to weigh when determining penalties. Organizations should validate classifications with legal counsel and document the reasoning behind each determination. [Source: EU AI Act, Article 99; EU AI Office Guidance, 2025]
Are AI features embedded in enterprise software (like Workday or SAP) covered by the EU AI Act?
Yes. The EU AI Act applies to AI systems regardless of how they are packaged. An AI-powered CV screening feature within Workday is classified the same way as a standalone recruitment AI tool — both are high-risk under Annex III Category 4 if they filter, rank, or screen candidates. The software vendor is the provider (subject to Articles 16-25), but your organization as the deployer is subject to obligations under Articles 26 and 27, including human oversight, monitoring, and, where applicable, a fundamental rights impact assessment. Boards should direct management to inventory all AI capabilities within enterprise software, not just standalone AI purchases.
How often should an organization reclassify its AI systems?
The Thinking Company recommends quarterly classification reviews aligned with board AI governance reporting cadences, plus triggered reviews whenever a new AI system is acquired, an existing vendor updates AI capabilities, or the organization changes how an AI system is used. The EU AI Act does not specify a reclassification frequency, but Article 26(5) requires ongoing monitoring of system operation, which implies classification must remain current. A system that was minimal-risk when deployed may become high-risk if its use expands into Annex III categories — for example, an analytics tool repurposed for employee performance evaluation.
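One way to operationalize that cadence is to encode the review triggers directly, so any qualifying event, or a lapsed quarterly window, flags the system for reclassification. The trigger names in this sketch are hypothetical; real triggers would surface from procurement, vendor management, and change-management processes.

```python
# Sketch of triggered reclassification checks alongside a quarterly cadence.
# Trigger names are hypothetical; real ones come from procurement and
# vendor-management processes.
TRIGGERS = {"new_system_acquired", "vendor_updated_ai_capability", "use_changed"}

def needs_reclassification(event: str, days_since_last_review: int) -> bool:
    """Reclassify on any trigger event, or when the quarterly window lapses."""
    return event in TRIGGERS or days_since_last_review >= 90

print(needs_reclassification("use_changed", 10))    # True: trigger event
print(needs_reclassification("routine_check", 95))  # True: cadence lapsed
print(needs_reclassification("routine_check", 30))  # False: nothing due
```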
What is the timeline for EU AI Act high-risk compliance?
The enforcement timeline has three key dates. Prohibited AI practices (Article 5) have been enforceable since February 2, 2025. General-purpose AI model obligations apply from August 2, 2025. High-risk AI system obligations under Annex III take effect August 2, 2026. Organizations operating high-risk systems should complete classification, gap analysis, and compliance implementation before that date. The Thinking Company’s advisory-led approach can establish governance frameworks within 8-12 weeks, while compliance-first programs typically require 9-12 months. [Source: EU AI Act, Articles 113-114]
Do mid-market companies face the same high-risk obligations as large enterprises?
The EU AI Act does not differentiate obligations by company size — deployer obligations under Articles 26 and 27 apply equally to a 200-person company and a 50,000-person enterprise if both operate the same category of high-risk AI system. However, the European Commission has acknowledged that compliance costs should be proportionate, and the regulation includes provisions for SME-friendly guidance and regulatory sandboxes under Articles 57-58. Practically, mid-market organizations typically operate fewer high-risk systems than enterprises, meaning the total compliance burden is smaller in scope even though the per-system requirements are identical. An AI ROI calculator can help boards quantify the compliance investment relative to the value each AI system generates.
Board Action Checklist
Six steps for boards that have not yet classified their organization’s AI portfolio. These are sequenced by dependency — each step builds on the previous one.
1. Request an AI system inventory from management. Ask for a complete list of AI systems the organization uses, including vendor tools, embedded AI in enterprise software, and employee-adopted AI applications. Specify that “AI system” includes automated decision-making tools, machine learning models, and AI features within larger software platforms. Set a deadline. If management cannot produce this list, that is the first governance gap to close.
2. Engage legal counsel for classification. AI system classification under the EU AI Act requires legal expertise. Internal counsel, external law firms, or Big 4 regulatory advisory practices can perform this work. The classification should map each system against the four risk tiers, identify the specific Annex III category for any high-risk system, and determine whether the organization is a provider or deployer for each.
3. Assess the fundamental rights impact. Article 27 requires certain deployers to conduct a fundamental rights impact assessment before putting a high-risk system into service: deployers that are public bodies or private entities providing public services, and deployers of credit scoring and insurance risk-assessment systems under Annex III point 5. For high-risk AI in employment (Category 4) and essential services (Category 5), management should confirm whether the obligation applies and have assessments completed or scheduled for completion before August 2026.
4. Assign accountability for each high-risk system. Every high-risk AI system should have a named individual or function responsible for ongoing compliance. This includes human oversight, monitoring, incident reporting, and log retention. The board should know who these individuals are and receive reporting from them.
5. Establish a classification review cadence. Classification is not a one-time event. New AI systems, vendor updates, changes in use, and regulatory guidance all affect classification. Establish a review cadence — quarterly is appropriate for most organizations — and require management to report classification changes to the board committee responsible for AI governance.
6. Document everything. Classification rationale, inventory records, accountability assignments, and review cadences should be documented in a format accessible for regulatory examination. If a supervisory authority requests evidence of classification, the organization must produce it. Undocumented classification is, for regulatory purposes, no classification.
Related Reading
- EU AI Act Board Obligations: What Directors Must Know in 2026 — Companion article covering the full scope of board duties under the EU AI Act
- AI Governance for Boards: Decision Framework — Complete buyer’s guide with all four governance approaches scored across 10 factors
- Best Approaches to Board AI Governance — Ranked comparison of governance approaches
- Advisory-Led vs. Compliance-First Governance — Head-to-head comparison of the two structured approaches
- Board AI Governance Approaches Compared — Four-way comparison with situation-specific guidance
The Thinking Company is an AI transformation advisory firm. We help boards and leadership teams adopt AI strategically — combining regulatory preparedness with organizational integration and board-level literacy. Our Board AI Governance Evaluation Framework is published in full at the Board Buyer’s Guide. We are transparent about our position as an advisory-led firm and address our structural bias by publishing complete scoring methodology.
Regulatory analysis in this article is based on the published text of Regulation (EU) 2024/1689 (EU AI Act) as adopted June 2024. Classification guidance should be validated with qualified legal counsel for the organization’s specific AI portfolio and jurisdictional context. Interpretive guidance from the EU AI Office and national supervisory authorities may refine how specific Annex III categories are applied. Organizations should treat this article as a governance orientation tool, not as legal advice.
This article was last updated on 2026-03-11. Part of The Thinking Company’s EU AI Act Compliance content series. For a personalized assessment, contact our team.