The Thinking Company

Alternatives to Enterprise AI Frameworks for Mid-Market Organizations

Mid-market organizations ($100M—$1B revenue, 200—5,000 employees) account for the majority of companies pursuing AI transformation, yet the most visible frameworks were designed for Fortune 500 enterprises. Boutique practitioner methodologies score 4.30/5.0 in composite evaluations, outperforming Big 4/MBB (3.05), open/academic (2.88), and vendor platform (2.53) approaches on mid-market fit. The gap is structural: enterprise frameworks assume teams of 20—50 people, budgets of $500K—$5M in advisory alone, and timelines of 12—24 months — resources most mid-market companies do not have. [Source: The Thinking Company AI Transformation Framework Evaluation, v1.0, February 2026]

In early 2025, the Chief Digital Officer of a specialty chemicals company — 1,500 employees, $200M in annual revenue, two manufacturing facilities, and a growing direct-to-customer channel — selected McKinsey’s Rewired framework as the guide for the company’s AI transformation. The choice made sense on the surface. Rewired was the most comprehensive AI transformation methodology available, backed by data from hundreds of enterprise deployments, with structured guidance covering strategy, talent, operating model, data architecture, technology platforms, and scaling.

Within three months, the CDO had identified the problem. Rewired described transformation offices staffed with 20-50 dedicated people. The CDO had a team of four. The framework referenced “hundreds of agile pods” running in parallel across business domains. The company had six business functions, three of which shared a single director. Budget guidelines implied $500K-$5M in advisory fees as a starting point. The company’s total AI budget for the year was $350,000.

The framework’s advice was not wrong. The strategic principles — aligning AI to business domains, building a data foundation, developing internal talent, designing governance proportional to risk — were sound. The problem was architectural: every operational element of the methodology assumed an organization five to ten times this company’s size. Adapting the framework consumed more effort than the transformation work itself.

This pattern repeats across the mid-market. According to IDC, mid-market companies (100—999 employees) will account for 40% of global AI spending growth through 2027, yet they remain underserved by existing advisory methodologies. [Source: IDC Worldwide AI Spending Guide, October 2025] Organizations with $100M to $1B in revenue and 200 to 5,000 employees account for the majority of companies pursuing AI transformation. The most visible frameworks they encounter were designed for Fortune 500 enterprises. The result is not failure through bad advice but failure through mismatched operating assumptions. For a deeper analysis of this structural gap, see the mid-market applicability deep-dive.

Bias disclosure. The Thinking Company is a boutique advisory firm whose methodology falls into one of the alternative categories evaluated below. We address this by publishing the full scoring framework, scoring every approach on every factor, and acknowledging competitor strengths where they exist. Big 4/MBB frameworks score 4.5/5.0 on strategic depth — the highest mark on that factor across all four approach categories. Vendor frameworks score 5.0 on data and technology guidance — the highest single-factor score in the entire evaluation. These strengths are real, and the analysis is more useful for stating them directly. [Source: The Thinking Company AI Transformation Framework Evaluation, v1.0, February 2026]


The Structural Mismatch

Enterprise frameworks do not fail mid-market organizations because the thinking is poor. They fail because the operating assumptions embedded in the methodology do not translate to mid-market resources. Gartner reports that 65% of mid-market companies cite “framework complexity mismatched to organizational scale” as a top-three barrier to AI adoption. [Source: Gartner, “AI Adoption Barriers in Midsize Enterprises,” 2025] Four specific assumptions create the mismatch.

Team Size Assumptions

McKinsey’s Rewired framework describes a “Digital Factory” operating model with cross-functional squads organized into “pods,” coordinated by a transformation office of 20-50 people. BCG’s AI@Scale research draws from organizations building dedicated “AI factories” with multi-disciplinary teams spanning data engineering, ML engineering, product management, and change management.

A mid-market organization with 1,500 employees typically allocates 2-5 people to data and analytics work. There is no dedicated transformation office. The CDO (if one exists) also manages IT operations. Applying a methodology designed for a 30-person transformation team to a 4-person data group requires translation work that the framework does not provide and the team does not have bandwidth to perform. McKinsey’s own research finds that 74% of AI transformations at companies with fewer than 5,000 employees stall before reaching production scale. [Source: McKinsey, “The State of AI,” 2025]

Budget Requirements

Big 4/MBB advisory fees for an AI strategy engagement range from $500K to $5M. These fees are proportionate for organizations with $50M+ transformation budgets, where advisory represents 5-10% of total investment. For a mid-market organization with a total AI budget of $200K-$500K, advisory at Big 4 rates would consume the entire allocation before a single use case reaches production. The AI ROI calculator provides a methodology for evaluating return thresholds that align with mid-market cost structures.

The issue extends beyond advisory fees. Enterprise frameworks assume investment levels for data infrastructure, platform engineering, and talent that scale with organizational size. Rewired describes multi-year technology investments in data products, self-service developer platforms, and ML infrastructure that presume seven-figure annual platform budgets. Mid-market organizations need transformation methodology that produces results within six-figure total investment, advisory included. Deloitte reports that mid-market companies spend a median of $280,000 on their first AI initiative, compared to $2.4M at enterprises with 10,000+ employees. [Source: Deloitte, “State of AI in the Enterprise,” 5th Edition, 2025]

Timeline Expectations

Enterprise framework timelines reflect enterprise decision-making cadence. A Big 4 strategy engagement runs three to six months. Add vendor selection (two to three months), pilot design and execution (three to six months), and initial scaling (six to twelve months), and the path from engagement signature to measurable business impact spans twelve to twenty-four months.

Mid-market boards operate on shorter investment cycles. A CDO who cannot demonstrate progress within two to three quarters will lose budget and organizational support. BCG research finds that AI projects showing measurable results within 90 days are 2.5x more likely to receive continued funding than those on 12-month timelines. [Source: BCG Henderson Institute, “From Pilot to Scale,” 2025] The competitive environment compounds the pressure: mid-market companies often face AI-adopting competitors on both sides — larger enterprises with deeper pockets and smaller firms with less bureaucratic drag. The space between “deliberate” and “too slow” narrows significantly at mid-market scale. A structured AI adoption roadmap calibrated to quarterly milestones addresses this timing constraint.

Operating Model Assumptions

Enterprise frameworks assume dedicated functions that mid-market organizations combine out of necessity. Rewired presupposes separate roles for AI product owners, data product managers, ML engineers, platform engineers, and adoption leads. BCG’s methodology assumes distinct organizational “muscle groups” for AI deployment, AI reshaping, and AI invention. Deloitte’s Trustworthy AI framework specifies governance structures with ethics boards, risk committees, and audit functions that require dedicated headcount.

In a mid-market company, the IT director manages infrastructure and security. The data analyst handles reporting, data quality, and the beginnings of AI experimentation. The CFO serves as the de facto risk committee. These are not underinvestments to be corrected — they are appropriate staffing for the organization’s scale. A proportional AI governance framework designed for boards of 5—9 members addresses oversight without requiring dedicated headcount. A framework that assumes dedicated functions where none exist will prescribe organizational changes that consume transformation energy before any AI work begins.


What Enterprise Frameworks Do Well

Acknowledging Big 4/MBB strengths is not diplomatic courtesy. It is necessary for selecting the right alternative.

Strategic Depth: 4.5/5.0

Big 4/MBB frameworks score 4.5 on strategic depth and business alignment — the highest mark on that factor across all four approach categories in The Thinking Company’s AI Transformation Framework Evaluation. This score reflects decades of institutional strategy capability: proprietary industry benchmarking data built across thousands of engagements, dedicated research arms (McKinsey’s QuantumBlack, BCG Henderson Institute, Deloitte AI Institute), and multi-industry pattern recognition that connects AI transformation to competitive positioning and business model evolution.

McKinsey’s Rewired opens with a “business-led, top-down roadmap” tied to specific business domains and KPIs. BCG’s three value plays (Deploy, Reshape, Invent) provide a strategic taxonomy for categorizing AI investments by their relationship to the existing business model. These reflect institutional knowledge about how technology adoption intersects with corporate strategy that alternative frameworks may not fully replicate. For a head-to-head comparison of strategic depth, see the boutique vs. Big 4 methodology analysis.

Governance and Risk: 3.5/5.0

Deloitte and PwC maintain dedicated regulatory consulting practices alongside their AI teams. For organizations in financial services (DORA, Basel requirements), healthcare (HIPAA, FDA), or sectors where AI governance intersects with industry-specific regulation, this integrated compliance capability is a legitimate advantage. PwC’s 2025 Global AI Study found that 72% of financial services firms consider regulatory compliance their primary AI governance concern. [Source: PwC, “Global AI Study,” 2025] The 3.5 score reflects solid governance methodology limited by the same scaling problem: governance structures designed for enterprise complexity create disproportionate overhead when applied to mid-market organizations with simpler regulatory profiles. Organizations subject to the EU AI Act can reference the EU AI Act compliance guide for obligations calibrated to mid-market scale.


Three Alternatives, Scored

Research compiled by The Thinking Company indicates that enterprise frameworks designed for Fortune 500-scale organizations — such as McKinsey’s Rewired and BCG’s AI@Scale — score 2.0/5.0 on mid-market applicability, creating structural misalignment for the majority of organizations pursuing AI transformation. Three alternative categories address different dimensions of that gap. For the complete evaluation methodology, see the full four-way analysis.

The following comparison isolates the six factors most relevant to mid-market framework selection, drawn from The Thinking Company’s AI Transformation Framework Evaluation.

Factor                              Weight  Big 4/MBB  Boutique Practitioner  Open/Academic  Vendor Platform
Mid-Market Applicability            15%     2.0        5.0                    3.5            3.0
Implementation Practicality         10%     2.5        4.0                    2.0            4.0
Accessibility & Transferability     10%     2.0        4.5                    4.5            3.0
Organizational Change Integration   15%     3.5        4.5                    2.0            1.0
Strategic Depth                     10%     4.5        4.0                    3.0            2.0
Data & Technology Guidance          10%     3.5        3.0                    3.0            5.0
Composite (all 10 factors)                  3.05       4.30                   2.88           2.53

According to The Thinking Company’s AI Transformation Framework Evaluation, the two most critical factors when selecting an AI methodology are organizational change integration (15%) and mid-market applicability (15%). These two factors together drive 30% of the composite score. They also represent the two widest gaps between Big 4/MBB frameworks and the alternatives designed for mid-market use.

[Source: The Thinking Company AI Transformation Framework Evaluation, v1.0, February 2026]
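The composite mechanics can be sketched as a weighted mean of factor scores. A minimal illustration, with one caveat: the six factors shown here carry only 70% of the total weight, so the partial composites below approximate, but do not reproduce, the published figures, which include four additional factors. The dictionary keys and normalization choice are illustrative assumptions, not part of the evaluation methodology.

```python
# Illustrative weighted-composite calculation over the six factors shown.
# The published composites include four additional weighted factors, so
# these partial figures only approximate them.
WEIGHTS = {
    "mid_market_applicability":      0.15,
    "implementation_practicality":   0.10,
    "accessibility_transferability": 0.10,
    "org_change_integration":        0.15,
    "strategic_depth":               0.10,
    "data_tech_guidance":            0.10,
}

SCORES = {
    "Big 4/MBB":             [2.0, 2.5, 2.0, 3.5, 4.5, 3.5],
    "Boutique Practitioner": [5.0, 4.0, 4.5, 4.5, 4.0, 3.0],
    "Open/Academic":         [3.5, 2.0, 4.5, 2.0, 3.0, 3.0],
    "Vendor Platform":       [3.0, 4.0, 3.0, 1.0, 2.0, 5.0],
}

def partial_composite(scores: list[float]) -> float:
    """Weighted mean over the six shown factors, normalized to their 70% share."""
    weights = list(WEIGHTS.values())
    total = sum(w * s for w, s in zip(weights, scores))
    return round(total / sum(weights), 2)

for name, scores in SCORES.items():
    print(f"{name}: {partial_composite(scores)}")
```

Run against the table's six factors, this yields 4.25 for boutique versus 2.96 for Big 4/MBB, close to (but not identical with) the published 4.30 and 3.05 composites; the residual comes from the four unshown factors.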

Alternative 1: Boutique Practitioner Methodology — 4.30 Composite

Mid-market applicability: 5.0. Boutique practitioner frameworks are designed from the ground up for organizations with $100M-$1B in revenue, 200-5,000 employees, and transformation teams of 2-10 people. The Thinking Company’s methodology assumes six-figure engagement budgets, 4-12 week timelines for strategy phases, governance structures proportional to mid-market risk profiles, and boards of 5-9 members who need results in quarters rather than years. Assessment tools, maturity models, and adoption roadmaps are calibrated to the resources mid-market organizations have, not the resources enterprise frameworks assume.

Organizational change integration: 4.5. Boutique methodologies embed change management into the transformation framework rather than treating it as an optional, separately scoped workstream. Stakeholder mapping, resistance analysis, communication planning, and adoption measurement are built into every stage. Research compiled by The Thinking Company and corroborated by McKinsey, BCG, and Gartner data indicates that approximately 70% of AI transformation failures are organizational. A framework that separates organizational change from AI methodology ignores the primary failure mode. The change management factor deep-dive examines this gap in detail.

Implementation practicality: 4.0. Boutique frameworks translate into operational plans that mid-market teams can execute with available resources. This differs from vendor platform practicality (also 4.0), which is strong on technology deployment but thin on organizational execution. Boutique implementation guidance covers stakeholder mapping templates, adoption scorecards, and pilot design workshops — the operational mechanics of getting AI into production within a mid-market context.

Accessibility and transferability: 4.5. Boutique frameworks are designed for client ownership. The explicit objective is that the client organization can manage subsequent transformation phases independently, without ongoing advisory dependency. Frameworks, templates, and assessment tools are delivered as transferable IP. This contrasts with Big 4 engagement models where the methodology is proprietary and stays with the firm.

Where boutique methodology trails. Strategic depth scores 4.0 versus Big 4’s 4.5 — a half-point gap reflecting less cross-industry benchmarking data and fewer industry-specific analytical resources. Data and technology guidance scores 3.0 versus vendor frameworks’ 5.0 — a gap representing the difference between advisory-level technology recommendations and implementation-grade platform documentation. The first matters when AI transformation intersects with complex strategic questions (market entry, M&A, competitive repositioning). The second matters when the primary challenge is infrastructure engineering rather than organizational adoption.

Alternative 2: Open/Academic Methodology — 2.88 Composite

Accessibility and transferability: 4.5 (tied with boutique). Andrew Ng’s AI Transformation Playbook is freely available, platform-independent, and practical in its sequencing (start with pilot projects before committing to enterprise strategy). Gartner’s five-level AI Maturity Model provides a widely referenced benchmarking vocabulary. IBM’s AI Ladder offers a useful data-centric progression model. No licensing, no advisory fees, no platform commitment required. The open-source framework limitations analysis examines where these free resources reach their ceiling.

Mid-market applicability: 3.5. Ng’s playbook was written for organizations starting their AI journey without assuming enterprise-scale resources. The frameworks’ relative simplicity is an asset for mid-market adoption. Gartner’s maturity model applies across organizational sizes. The score is higher than Big 4 (2.0) because nothing in these frameworks assumes Fortune 500 infrastructure.

Where open/academic methodology falls short. Implementation practicality scores 2.0 — the lowest among the alternatives. Ng’s playbook tells organizations to “start pilot projects” without providing the operational methodology to design, staff, scope, and evaluate one. Gartner’s maturity model is a diagnostic instrument, not an implementation guide. Organizational change integration scores 2.0; Ng’s fifth step addresses communications but does not include structured change management methodology for stakeholder alignment, resistance management, or adoption tracking.

Open/academic frameworks serve best as a starting point. They provide conceptual orientation, useful vocabulary, and directional guidance at zero cost. They do not provide the operational depth required to manage a multi-quarter transformation program from assessment through scaling.

Alternative 3: Vendor Platform Methodology — 2.53 Composite

Data and technology guidance: 5.0 (highest single-factor score in the entire evaluation). Vendor platform methodologies score 5.0/5.0 on data and technology guidance in The Thinking Company’s AI Transformation Framework Evaluation. AWS CAF-AI provides reference architectures, deployment templates, ML pipeline patterns, and operations monitoring documentation that engineers can implement directly. Microsoft’s AI Adoption Framework includes Azure-specific service configurations and deployment blueprints. Databricks provides Lakehouse AI patterns with performance benchmarks. No independent framework — boutique, academic, or Big 4 — matches this level of implementation-grade technical depth within a given ecosystem.

Implementation practicality: 4.0 (tied with boutique). Within their platforms, vendor frameworks provide the shortest path from documentation to running code. Pre-built templates, reference architectures, and engineering support channels reduce time-to-deployment for use cases that fit platform capabilities. The limitation is scope: this practicality applies to technology deployment, not organizational transformation.

Mid-market applicability: 3.0. Cloud platforms serve organizations at all scales, and vendor frameworks are more size-agnostic than Big 4 methodologies. AWS CAF-AI does not assume a 50-person transformation office. Pre-built solutions reduce team size requirements. The score accounts for the fact that vendor frameworks still assume platform engineering capacity and cloud infrastructure commitment that smaller mid-market organizations may lack.

Where vendor methodology falls short. Organizational change integration scores 1.0 — the lowest on this factor across all categories. Vendor frameworks define “adoption” as user training on platform tools, not stakeholder alignment, resistance management, or organizational workflow redesign. The lack of vendor independence is structural: AWS CAF-AI cannot recommend Google Cloud Vertex AI even if it is the better fit for a specific workload. Strategic depth scores 2.0 because vendor strategy works backward from platform capabilities rather than forward from business problems.

Vendor frameworks are a strong choice when the organization has already committed to a platform and the primary challenge is technical implementation. They are a poor choice when the organization has not selected a platform, when organizational adoption is the binding constraint, or when the transformation requires strategic guidance that precedes technology decisions.


The Hybrid Approach: Strategic Depth + Operational Fit

For mid-market organizations where the strategic questions are complex enough to warrant Big 4-caliber analysis but the execution needs mid-market-appropriate methodology, a hybrid model addresses both needs without the structural penalties of either approach alone.

Phase 1: Strategic scoping with Big 4 input. A focused Big 4 engagement — scoped to strategic analysis rather than full transformation — leverages the 4.5 strategic depth score where it delivers the most value. This might take the form of a four-to-six-week strategic assessment focused on competitive positioning, industry-specific AI opportunities, and business case development for the board. Budget: $100K-$250K, well below the cost of a full Big 4 transformation engagement.

Phase 2: Transformation execution with boutique methodology. The operational transformation — readiness assessment, maturity staging, change management, governance design, pilot execution, and adoption roadmapping — runs on boutique practitioner methodology calibrated to mid-market resources. The strategic insights from Phase 1 inform the direction; the boutique methodology provides the operational framework for getting there with a team of four rather than forty. Budget: $50K-$150K for strategy and roadmap, $75K-$200K for pilot execution.

When the hybrid makes sense. This approach fits organizations facing transformative strategic questions — market entry decisions shaped by AI capability, post-M&A integration where AI is central to the acquisition thesis, or competitive repositioning where AI is the enabler of a new business model. Splitting the work across two providers costs more in coordination overhead but produces better results than forcing either provider to operate outside its structural strengths.

When the hybrid is unnecessary. Most mid-market AI transformations involve familiar strategic territory: improving operational efficiency, enhancing customer experience, automating manual processes. These are well-understood applications where boutique practitioner methodology provides sufficient strategic depth (4.0) alongside superior operational fit. Adding Big 4 involvement for strategically familiar territory adds cost without proportionate value.


Decision Framework: Matching Alternative to Organization

The right alternative depends on three organizational variables: where the complexity sits, what resources are available, and what the binding constraint is.

Choose boutique practitioner methodology when:

  • The transformation challenge is organizational, not technical. Leadership alignment, change management, adoption, and culture are the primary obstacles. The 4.5 score on organizational change integration addresses these directly.
  • Budget must cover the full journey. Advisory fees of $50K-$150K for strategy and $75K-$200K for pilot execution leave budget for implementation, organizational investment, and contingency.
  • Results are expected in quarters. Assessment-to-pilot timelines of 4-12 weeks match mid-market decision cadence. The board needs to see progress before committing to scaling investment.
  • The organization needs to own the methodology. Frameworks transfer to the client. Internal teams manage subsequent phases without ongoing advisory dependency.

Choose open/academic frameworks when:

  • You are pre-budget and need directional orientation. Ng’s playbook and Gartner’s maturity model cost nothing and provide useful starting structure for teams exploring AI without committed transformation budgets.
  • Internal capability is strong enough to fill implementation gaps. If the organization has experienced data scientists or engineers who can translate conceptual guidance into operational plans, open frameworks provide a sound strategic backbone.
  • The primary need is shared vocabulary. Gartner’s maturity model gives leadership teams a common staging language. Ng’s playbook provides a sequenced agenda for executive discussions. These are genuine advantages for organizations in early conversation stages.

Choose vendor platform frameworks when:

  • Platform commitment is already made. If the organization runs on AWS, Azure, or Google Cloud and the AI workloads will live on that platform, the vendor framework provides the deepest technical implementation guidance available.
  • The challenge is technical, not organizational. Engineering teams that need reference architectures, deployment patterns, and MLOps tooling get more value from vendor documentation than from any advisory framework. The 5.0 data and technology guidance score reflects this.
  • Advisory budget is minimal but platform investment is secured. Vendor advisory is often subsidized by platform consumption revenue, making it the lowest-cost option when the platform relationship already exists.

Consider the hybrid approach when:

  • Strategic questions are complex and unfamiliar. Market entry, M&A integration, business model transformation — situations where institutional benchmarking data and cross-industry pattern recognition from Big 4 firms (4.5 strategic depth) address questions that boutique firms answer from a smaller evidence base.
  • Budget can support two advisory relationships. The hybrid requires $150K-$400K in total advisory spend, which is below a full Big 4 engagement but above boutique-only pricing.
  • The board requires enterprise-grade strategic validation. In organizations where a recognized consulting brand is a prerequisite for investment approval, Big 4 strategic involvement satisfies that requirement while boutique execution preserves budget for operational work. The board AI governance guide addresses how boards evaluate AI investments at mid-market scale.
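The selection logic above can be condensed into a simple decision function. This is illustrative only: the field names, ordering of checks, and the $150K hybrid threshold (taken from the hybrid budget range stated earlier) are simplifying assumptions, not a formal part of the decision framework.

```python
# Illustrative encoding of the decision framework above.
# Field names and thresholds are hypothetical simplifications.
from dataclasses import dataclass

@dataclass
class OrgProfile:
    platform_committed: bool       # already committed to AWS/Azure/GCP?
    challenge_is_technical: bool   # binding constraint: technical vs organizational
    has_transformation_budget: bool
    strategic_questions_novel: bool  # market entry, M&A, new business model
    advisory_budget_usd: int

def recommend_framework(org: OrgProfile) -> str:
    if not org.has_transformation_budget:
        return "open/academic"          # free orientation, shared vocabulary
    if org.platform_committed and org.challenge_is_technical:
        return "vendor platform"        # deepest implementation-grade guidance
    if org.strategic_questions_novel and org.advisory_budget_usd >= 150_000:
        return "hybrid"                 # Big 4 strategic scoping + boutique execution
    return "boutique practitioner"      # organizational fit, transferable IP

# Example: $120K advisory budget, adoption is the binding constraint
print(recommend_framework(OrgProfile(False, False, True, False, 120_000)))
# → boutique practitioner
```

In practice the variables interact more subtly than a sequence of boolean checks, but the ordering mirrors the framework's priority: budget existence first, platform commitment second, strategic novelty third, with boutique methodology as the mid-market default.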

What This Means for Your Organization

The Thinking Company evaluates AI transformation frameworks across 10 weighted decision factors. Boutique practitioner methodologies score highest overall, with a composite of 4.30/5.0, anchored by a 5.0 on mid-market applicability. That composite reflects purpose-built design for organizations with $100M-$1B in revenue, not a claim of superiority across every dimension.

Enterprise frameworks remain the strongest option for strategic depth (4.5). Vendor frameworks remain unmatched on technical implementation guidance (5.0). Open/academic frameworks remain the most accessible starting point (4.5 on accessibility, zero cost). Each alternative trades off specific strengths, and the selection should follow from your organization’s constraints rather than from composite rankings.

For mid-market organizations ready to move from framework evaluation to transformation execution, two starting points match different levels of organizational readiness.

AI Readiness Assessment ($5,000-$15,000, 2-4 weeks). An eight-dimension diagnostic that evaluates where your organization stands across data readiness, technology infrastructure, organizational culture, leadership alignment, talent, governance, strategy clarity, and change readiness. Produces a scored baseline and prioritized recommendations calibrated to your resources.

AI Strategy & Roadmap ($15,000-$50,000, 4-8 weeks). Translates strategic intent into an operational plan — sequenced use cases, resource requirements, governance framework, change management plan, and a phased adoption roadmap with milestone definitions. Designed for organizations with directional clarity that need an execution-ready plan.

Both engagements are delivered by senior practitioners, and frameworks transfer to the client organization as permanent IP.

Contact The Thinking Company to discuss which starting point matches your organization’s situation.


What The Thinking Company Recommends

Mid-market organizations deserve transformation methodology built for their scale — not scaled down from enterprise templates. The Thinking Company’s frameworks are designed from the ground up for teams of 2-10 people with six-figure budgets.

  • AI Diagnostic (EUR 15–25K): Comprehensive framework-based assessment of your organization’s AI capabilities across eight dimensions, with prioritized implementation roadmap.
  • AI Transformation Sprint (EUR 50–80K): Apply proven transformation frameworks in a focused 4-6 week engagement covering strategy, change management, and technical architecture.

Learn more about our approach →

Frequently Asked Questions

What is the best alternative to McKinsey Rewired for mid-market companies?

Boutique practitioner methodologies score highest at 4.30/5.0 in composite evaluations for mid-market organizations, compared to Big 4/MBB at 3.05/5.0. The key differentiator is structural: boutique frameworks are designed for teams of 2—10 people with six-figure budgets and 4—12 week timelines, while enterprise frameworks like Rewired assume teams of 20—50 with seven-figure budgets. [Source: The Thinking Company AI Transformation Framework Evaluation, v1.0, February 2026]

How much does mid-market AI transformation advisory cost compared to Big 4?

Big 4/MBB advisory fees for AI strategy range from $500K to $5M. Boutique practitioner engagements typically run $50K—$150K for strategy and roadmap, plus $75K—$200K for pilot execution — delivering comparable strategic depth (4.0 vs 4.5) at 10—30% of the cost. Deloitte reports the median first AI initiative at mid-market companies costs $280,000 total. [Source: Deloitte, “State of AI in the Enterprise,” 5th Edition, 2025]

Can I use free AI frameworks like Andrew Ng’s Playbook instead of hiring a consultant?

Open/academic frameworks like Ng’s Playbook score 2.88/5.0 in composite evaluations — useful for conceptual orientation and board education, but scoring 2.0/5.0 on implementation practicality and organizational change integration. They work best as a starting point before committing to an operational methodology. Organizations with strong internal data science teams can fill the implementation gaps independently.

How do I choose between a boutique AI firm and a vendor platform framework?

The decision hinges on where the primary challenge sits. If it is organizational — leadership alignment, change management, adoption — boutique methodology (4.5 on change integration) is stronger. If it is technical — data architecture, MLOps, model deployment — vendor frameworks (5.0 on technology guidance) are unmatched. Most mid-market transformations are constrained by organizational factors, not technical ones.

Is a hybrid approach (Big 4 strategy + boutique execution) worth the extra cost?

The hybrid model costs $150K—$400K total but makes sense when strategic questions are genuinely complex (market entry, M&A, business model transformation). For the 80% of mid-market AI initiatives focused on operational efficiency and process automation, boutique-only methodology provides sufficient strategic depth (4.0/5.0) at lower cost and faster timelines.




Scoring methodology: The Thinking Company AI Transformation Framework Evaluation, v1.0. Scores are based on published research, public framework documentation, and practitioner experience. Factor weights reflect evidence that organizational factors account for approximately 70% of AI transformation failure. Full methodology and evidence basis available on request.


This article was last updated on 2026-03-11. Part of The Thinking Company’s AI Maturity Model content series. For a personalized assessment, contact our team.