pith. machine review for the scientific record.

arxiv: 2604.16369 · v2 · submitted 2026-03-22 · 💻 cs.CY · cs.AI · cs.CL

Recognition: no theorem link

Why AI Readiness Is an Organizational Learning Problem, Not a Technology Purchase


Pith reviewed 2026-05-15 00:48 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.CL

keywords AI readiness · organizational learning · AI project failure · SIO model · enterprise AI · capability development · AI governance · leadership alignment

The pith

AI project failures arise because companies treat AI as a technology purchase instead of building organizational learning capabilities.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper claims that despite $252.3 billion in global AI spending, only six percent of firms see major earnings gains because most failures trace to organizational issues like misaligned leadership, weak culture, poor governance, and limited human-AI collaboration rather than missing software or hardware. A review of nineteen large industry and academic sources covering nearly ten thousand leaders separates these organizational barriers from narrower technical problems such as semantic bottlenecks. The authors introduce the Siloed-Integrated-Orchestrated progression model, which tracks AI maturity across five pillars and gives concrete steps for moving between stages. The central implication is that organizations must reframe AI spending as an internal capability-building effort rather than an external procurement decision.

Core claim

The paper establishes that AI readiness is achieved by advancing through the Siloed, Integrated, and Orchestrated stages while systematically closing gaps in Culture & Leadership, Human Capital & Operations, Data Architecture, Systems Infrastructure, and Governance & Regulatory Compliance. Organizations remain siloed when efforts stay fragmented and learning stays limited; they reach orchestrated capability only when leadership alignment, cross-functional processes, and integrated data systems allow coordinated, value-generating AI use across the enterprise.

What carries the argument

The Siloed-Integrated-Orchestrated (SIO) progression model, a staged framework that maps enterprise AI capability across five pillars and prescribes the specific organizational and technical moves required to advance from one stage to the next.

If this is right

  • Organizations should first diagnose their current SIO stage across the five pillars before approving additional AI purchases.
  • Progress from Siloed to Orchestrated requires coordinated changes in leadership alignment and human-AI learning practices rather than isolated tool rollouts.
  • Technical problems such as semantic bottlenecks become solvable only after the organizational pillars reach at least the Integrated stage.
  • Governance and regulatory compliance must be built into the progression rather than added after technology deployment.
  • Sustained earnings impact from AI follows only when all five pillars advance together instead of in isolation.
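The first bullet above describes a diagnostic step: locate the weakest pillar before buying anything. A minimal sketch of such a diagnostic, assuming hypothetical 1–3 stage scores per pillar and the conservative rule that the enterprise stage equals the minimum across pillars — both the scoring and the min rule are our illustrative assumptions, not procedures from the paper:

```python
# Hypothetical SIO stage diagnostic across the paper's five pillars.
# Stage scores (1=Siloed, 2=Integrated, 3=Orchestrated) and the
# min-across-pillars rule are illustrative assumptions.

STAGES = {1: "Siloed", 2: "Integrated", 3: "Orchestrated"}

PILLARS = [
    "Culture & Leadership",
    "Human Capital & Operations",
    "Data Architecture",
    "Systems Infrastructure",
    "Governance & Regulatory Compliance",
]

def diagnose(scores: dict[str, int]) -> tuple[str, list[str]]:
    """Return the enterprise stage (set by the weakest pillar) and the lagging pillars."""
    missing = [p for p in PILLARS if p not in scores]
    if missing:
        raise ValueError(f"missing pillar scores: {missing}")
    floor = min(scores.values())
    lagging = [p for p in PILLARS if scores[p] == floor]
    return STAGES[floor], lagging

# Example: strong infrastructure cannot compensate for siloed leadership.
stage, lagging = diagnose({
    "Culture & Leadership": 1,
    "Human Capital & Operations": 2,
    "Data Architecture": 2,
    "Systems Infrastructure": 3,
    "Governance & Regulatory Compliance": 2,
})
print(stage, lagging)  # → Siloed ['Culture & Leadership']
```

The min rule encodes the paper's "advance together" claim: any single lagging pillar caps the enterprise stage.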

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The SIO lens implies that internal AI training budgets should shift from technical certification toward cross-functional problem-solving exercises.
  • Consulting and system-integrator offerings may need to emphasize diagnostic assessments of organizational stage before recommending new platforms.
  • Regulators could adapt the five-pillar structure when designing AI governance requirements for large enterprises.
  • The model suggests a testable prediction: firms that move from Siloed to Integrated within two years will show measurable increases in AI project completion rates.
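The last bullet names a testable prediction. One conventional way to test it would be a two-proportion z-test on project completion rates between advancing and non-advancing cohorts; the cohort counts below are invented for illustration, and the test choice is ours, not the paper's:

```python
# Hypothetical test of the prediction above: compare AI project completion
# rates for firms that moved Siloed -> Integrated vs. firms that did not.
# All counts are invented; the two-proportion z-test is a standard method,
# not one prescribed by the paper.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(success_a, n_a, success_b, n_b):
    """One-sided z-test that group A's completion rate exceeds group B's."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 1 - NormalDist().cdf(z)
    return z, p_value

# Invented cohorts: 120 advancing firms, 140 non-advancing firms.
z, p = two_proportion_z(success_a=78, n_a=120, success_b=63, n_b=140)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

With these invented numbers the advancing cohort's 65% completion rate versus 45% yields a clearly significant z score, which is the shape of evidence the prediction calls for.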

Load-bearing premise

A synthesis of existing industry surveys and academic reports is enough to reveal the root causes of AI failure and to define a general progression model that applies across firms.

What would settle it

A multi-year study that measures AI project ROI and earnings impact in firms that deliberately follow the SIO stage-advancement steps versus matched firms that continue to focus on technology acquisitions alone.

Figures

Figures reproduced from arXiv: 2604.16369 by Gregg Gerdau, Jeanne McClure.

Figure 1. Global AI investment versus organizational value realization.
Figure 2. The Structural Evolution of Enterprise AI Capability.
Original abstract

Global corporate AI investment reached $252.3 billion in 2024, yet only 6% of firms report significant earnings impact. This article argues that AI project failure is fundamentally an organizational learning problem rather than a technology deficit. Drawing on a systematic synthesis of 19 large-scale industry and academic sources, including surveys of nearly 10,000 organizational leaders, we identify two categories of failure: organizational (culture, leadership alignment, governance, and human-AI learning deficits) and technical (semantic bottlenecks and output management challenges). We introduce the Siloed-Integrated-Orchestrated (SIO) progression model, which maps enterprise AI capability across five pillars -- Culture & Leadership, Human Capital & Operations, Data Architecture, Systems Infrastructure, and Governance & Regulatory Compliance -- and provides prescriptive guidance for advancing between stages. The implications challenge organizations to reframe AI investment as capability development rather than technology procurement.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that despite $252.3 billion in global corporate AI investment in 2024, only 6% of firms report significant earnings impact because AI project failure is fundamentally an organizational learning problem rather than a technology deficit. Drawing on a synthesis of 19 large-scale industry and academic sources (including surveys of nearly 10,000 leaders), it distinguishes organizational failures (culture, leadership alignment, governance, human-AI learning deficits) from technical ones (semantic bottlenecks, output management). It introduces the Siloed-Integrated-Orchestrated (SIO) progression model that maps AI capability across five pillars—Culture & Leadership, Human Capital & Operations, Data Architecture, Systems Infrastructure, and Governance & Regulatory Compliance—and supplies prescriptive guidance for advancing between stages, urging organizations to treat AI investment as capability development instead of procurement.

Significance. If the SIO framework holds, the paper would offer organizations a structured diagnostic and progression tool for AI readiness that shifts emphasis from technology acquisition to measurable organizational learning across the five pillars. This could influence consulting practice and enterprise road-mapping by providing stage-based prescriptions grounded in aggregated survey evidence. The synthesis of nearly 10,000 leader responses adds scale to the diagnosis, though the absence of new empirical tests limits immediate falsifiability.

major comments (2)
  1. [SIO progression model] SIO progression model section: the prescriptive claim that organizations should advance from Siloed to Integrated to Orchestrated stages to achieve earnings impact rests on re-analysis of secondary sources without any new data collection, outcome correlation, or validation that SIO stage predicts project success or earnings gains; this is load-bearing for the move from diagnosis to actionable guidance.
  2. [Literature synthesis] Literature synthesis paragraph (abstract and methods): the identification of the two failure categories and the five pillars draws on 19 sources, yet no selection criteria, inclusion/exclusion rules, or weighting scheme are stated, leaving open the possibility that the root-cause diagnosis reflects source availability rather than comprehensive coverage.
minor comments (2)
  1. [Methods] The abstract states 'systematic synthesis' but the full text does not provide a PRISMA-style flow diagram or explicit search terms; adding this would improve reproducibility.
  2. [SIO model description] The five pillars are listed without an accompanying table showing how each pillar maps to the three SIO stages; a matrix would clarify the progression claims.

Simulated Authors' Rebuttal

2 responses · 1 unresolved

We thank the referee for the constructive comments, which help clarify the scope and evidentiary basis of our synthesis. We address each major point below and will incorporate revisions to strengthen transparency and temper claims.

Point-by-point responses
  1. Referee: [SIO progression model] SIO progression model section: the prescriptive claim that organizations should advance from Siloed to Integrated to Orchestrated stages to achieve earnings impact rests on re-analysis of secondary sources without any new data collection, outcome correlation, or validation that SIO stage predicts project success or earnings gains; this is load-bearing for the move from diagnosis to actionable guidance.

    Authors: We agree the SIO model is derived from pattern synthesis across secondary sources rather than new primary data or statistical validation of stage-to-outcome links. The framework aggregates observed failure patterns and capability descriptions from the cited surveys to propose a logical progression. In revision we will (1) add an explicit 'Limitations' section stating that SIO is a conceptual diagnostic tool grounded in aggregated evidence, not a validated predictive model; (2) revise prescriptive language to 'organizations may consider advancing through these stages based on patterns in the reviewed studies' rather than implying guaranteed earnings impact; and (3) note that direct correlation testing remains an open empirical question for future work. These changes address the load-bearing concern without overstating the current evidence base. revision: yes

  2. Referee: [Literature synthesis] Literature synthesis paragraph (abstract and methods): the identification of the two failure categories and the five pillars draws on 19 sources, yet no selection criteria, inclusion/exclusion rules, or weighting scheme are stated, leaving open the possibility that the root-cause diagnosis reflects source availability rather than comprehensive coverage.

    Authors: We accept that the current manuscript lacks a transparent methods description for source selection. We will add a dedicated 'Methods' subsection that specifies: search strategy (Google Scholar, industry reports from McKinsey, Deloitte, Gartner, MIT Sloan, etc., 2020–2024), inclusion criteria (large-scale surveys or studies with sample sizes >500 respondents, explicit focus on AI adoption barriers or readiness factors), exclusion criteria (purely technical papers without organizational analysis, small-N case studies), and weighting (by sample size and recency, with larger leader surveys given higher influence in pillar derivation). This will demonstrate that the two failure categories and five pillars emerged from the most prominent, high-sample sources rather than convenience sampling. revision: yes
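The weighting the rebuttal describes (by sample size and recency) can be sketched concretely. The log-of-sample-size and linear recency-decay forms below are our assumptions; the rebuttal names the factors but not a formula, and all example sources are invented:

```python
# Illustrative sketch of the source weighting the rebuttal describes.
# The log-size and linear recency-decay forms are assumptions of this
# sketch, not a scheme specified by the authors.
from math import log

def source_weight(sample_size: int, year: int, current_year: int = 2026,
                  window: int = 6) -> float:
    """Log-scaled sample size, discounted linearly by age within a window."""
    recency = max(0, window - (current_year - year)) / window
    return log(sample_size, 10) * recency

# Invented example sources (name, sample size, year):
sources = [
    ("leader survey A", 9000, 2025),
    ("industry report B", 1500, 2023),
    ("academic study C", 600, 2021),
]
for name, n, year in sources:
    print(f"{name}: weight = {source_weight(n, year):.2f}")
```

Under this sketch a recent 9,000-respondent survey dominates an older 600-respondent study, matching the rebuttal's stated intent that larger, newer leader surveys carry more influence in pillar derivation.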

standing simulated objections (unresolved)
  • Conducting new primary data collection or longitudinal validation studies to test whether SIO stage directly predicts earnings impact, as the manuscript is scoped as a secondary synthesis and conceptual framework rather than an empirical validation paper.

Circularity Check

0 steps flagged

No circularity: SIO model derived from external synthesis of 19 sources

Full rationale

The paper's derivation chain consists of a systematic synthesis of 19 external industry and academic sources (surveys of ~10,000 leaders) to identify organizational vs. technical failure categories, followed by introduction of the SIO progression model across five pillars. No equations, fitted parameters, or predictions are present that reduce by construction to the paper's own inputs. No self-citations are load-bearing; the central claim and prescriptive stages rest on re-analysis of independent secondary sources rather than self-definition, renaming, or imported uniqueness theorems. The argument is self-contained against external benchmarks and does not exhibit any of the enumerated circularity patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

Only the abstract is available, so the ledger is limited to elements explicitly named there. The SIO model is introduced without independent evidence of its stages.

axioms (1)
  • domain assumption The 19 sources provide a representative and unbiased sample of AI project outcomes across industries.
    The paper states it draws on a systematic synthesis of these sources to identify failure categories.
invented entities (1)
  • Siloed-Integrated-Orchestrated (SIO) progression model (no independent evidence)
    purpose: Maps enterprise AI capability across five pillars and stages to provide prescriptive guidance.
    The model is introduced as a new framework for advancing AI readiness.

pith-pipeline@v0.9.0 · 5451 in / 1185 out tokens · 32020 ms · 2026-05-15T00:48:34.791246+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

20 extracted references · 20 canonical work pages

  1. N. Maslej, L. Fattorini, R. Perrault, et al. The AI Index 2025 Annual Report. Technical report, AI Index Steering Committee, Stanford University, 2025.
  2. S. Ransbotham, D. Kiron, S. Khodabandeh, et al. The Emerging Agentic Enterprise: How Leaders Must Navigate a New Age of AI. Technical report, MIT Sloan Management Review and Boston Consulting Group, November 2025.
  3. A. Singla, A. Sukharevsky, B. Hall, et al. The State of AI in 2025: Agents, Innovation, and Transformation. Technical report, McKinsey & Company/QuantumBlack, November 2025.
  4. Cisco. Cisco AI Readiness Index: Hype Meets Reality. Technical report, Cisco, 2024.
  5. H. Mayer, L. Yee, M. Chui, et al. Superagency in the Workplace: Empowering People to Unlock AI’s Full Potential. Technical report, McKinsey & Company, January 2025.
  6. J. Ryseff, B. De Bruhl, and S.J. Newberry. The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed. Technical Report RR-A2680-1, RAND Corporation, Santa Monica, CA, 2024.
  7. W. Ali and A.Z. Khan. Factors Influencing Readiness for Artificial Intelligence: A Systematic Literature Review. Data Science and Management, 8:224–236, 2025.
  8. A. Johnston. Generative AI Shows Rapid Growth but Yields Mixed Results. Technical report, S&P Global Market Intelligence, 451 Research, October 2025.
  9. N. de Bellefonds, T. Charanya, M.R. Franke, et al. Where’s the Value in AI? Technical report, Boston Consulting Group, October 2024.
  10. A. Israeli and E. Ascarza. Most AI Initiatives Fail. This 5-Part Framework Can Help. Harvard Business Review, November 2025.
  11. R.C. Ångström, M. Björn, L. Dahlander, et al. Getting AI Implementation Right: Insights From a Global Survey. California Management Review, 66(1):5–22, 2023.
  12. F. Hoque, T.H. Davenport, and E. Nelson. Why AI Demands a New Breed of Leaders. MIT Sloan Management Review, March 2025.
  13. S. Woerner, I. Sebastian, P. Weill, et al. Grow Enterprise AI Maturity for Bottom-Line Impact. Technical report, MIT Center for Information Systems Research, August 2025.
  14. The Modern Data Company. AI Starts With Data: Are You Ready? A Scorecard for Enterprise AI Readiness. Technical report, The Modern Data Company, 2025.
  15. M. Wade, J. Lagodny, A.-C. Andersen, et al. Do You Really Need a Chief AI Officer? MIT Sloan Management Review, 66(1):62–65, Fall 2024.
  16. R.B. Sadiq, N. Safie, A.H. Abd Rahman, et al. Artificial Intelligence Maturity Model: A Systematic Literature Review. PeerJ Computer Science, 7, 2021.
  17. E.E. Bloedorn, D.M. Kotras, P.J. Schwartz, et al. The MITRE AI Maturity Model and Organizational Assessment Tool Guide. Technical report, MITRE Corporation, 2022.
  18. Nemko Digital. AI Maturity Model Framework: Your Strategic Roadmap to Enterprise AI Success. Technical report, Nemko Digital, July 2025.
  19. M. Pinski, M. Adam, and A. Benlian. AI Knowledge: Improving AI Delegation Through Human Enablement. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–17, April 2023.
  20. A. Ranganathan and X.M. Ye. AI Doesn’t Reduce Work — It Intensifies It. Harvard Business Review, February 9, 2026.