AI as Consumer and Participant: A Co-Design Agenda for MBSE Substrates and Methodology
Pith reviewed 2026-05-07 16:13 UTC · model grok-4.3
The pith
MBSE models must be co-designed with their construction methods to serve as machine-queryable knowledge substrates for AI tools.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
AI tools deployed over MBSE models produce reasoning drawn from training rather than retrieved from the model itself, and different tools over the same model produce different results with nothing in the record to adjudicate between them. The model, in other words, is functioning as a prompt rather than as a knowledge base. Attaching better tools to the same model does not resolve this. The model and the methodology that governs its construction need to be designed together for AI participation, treating the model as a machine-queryable knowledge substrate rather than a structured artefact for human navigation, and that co-design has not yet happened in any systematic way.
What carries the argument
Co-design of MBSE models as machine-queryable knowledge substrates together with the methodologies that govern their construction
If this is right
- Attaching better AI tools to existing models will not eliminate inconsistent reasoning drawn from training data.
- Systematic co-design of models and methodologies is required to make models reliable sources for AI consumption.
- A workflow scenario reveals the concrete gaps in treating current models as AI inputs.
- Architectural decisions on AI integration in MBSE require a methodological foundation that current practices lack.
Where Pith is reading between the lines
- Successful co-design could allow AI to move from passive consumer to active participant in MBSE processes such as validation and evolution of models.
- The same joint-design principle could be tested in adjacent modeling approaches outside SysML-based MBSE.
- A practical test would involve building a prototype substrate optimized for machine queries and measuring whether AI outputs become consistent and traceable to model content.
- Without this shift, reliance on AI over MBSE risks embedding unverifiable reasoning into safety-critical systems engineering.
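The "practical test" suggested above can be made concrete: given one shared substrate, independently querying consumers should return identical answers, and every answer should cite element IDs that actually exist in the model. A minimal sketch in Python — the model layout, function names, and IDs are all hypothetical illustrations, not anything proposed in the paper:

```python
# Hypothetical toy "substrate": elements keyed by stable ID, with
# explicit typed relations instead of free text.
MODEL = {
    "blk-pump":   {"kind": "block", "name": "Pump", "satisfies": ["req-001"]},
    "req-001":    {"kind": "requirement", "text": "Flow rate >= 5 L/min"},
    "blk-filter": {"kind": "block", "name": "Filter", "satisfies": []},
}

def query_satisfying_blocks(model, req_id):
    """Return (answer, provenance): blocks satisfying a requirement."""
    hits = [eid for eid, e in model.items()
            if e["kind"] == "block" and req_id in e.get("satisfies", [])]
    return hits, hits + [req_id]  # provenance: every element consulted

def consistent_and_traceable(model, req_id, consumers):
    """The test: all consumers agree on the answer, and every cited
    element ID actually exists in the model."""
    results = [consumer(model, req_id) for consumer in consumers]
    answers = [tuple(sorted(a)) for a, _ in results]
    traceable = all(pid in model for _, prov in results for pid in prov)
    return len(set(answers)) == 1 and traceable

# Two "tools" sharing one retrieval path stand in for independent AI
# consumers that ground answers in the substrate rather than training data.
ok = consistent_and_traceable(MODEL, "req-001",
                              [query_satisfying_blocks, query_satisfying_blocks])
print(ok)  # True: identical, model-grounded answers
```

The point of the sketch is the shape of the check, not the mechanism: consistency across consumers plus provenance resolvable against the model is exactly what the paper says current deployments cannot demonstrate.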
Load-bearing premise
Redesigning MBSE models and their construction methodology specifically for AI consumption is feasible and will produce more reliable results than current practices without sacrificing utility for human engineers.
What would settle it
A demonstration that unmodified MBSE models with advanced prompting enable multiple AI tools to produce consistent, model-grounded reasoning that matches the model's explicit content across sessions without relying on external training data would falsify the need for co-design.
Original abstract
AI tools are being deployed over MBSE models today, and those models were not designed for this kind of consumption. The problem is not simply that tools hallucinate: well-prompted frontier models produce competent, useful output over a conformant SysML model, but the reasoning they produce is drawn from training rather than retrieved from the model itself, and different tools over the same model produce different results with nothing in the record to adjudicate between them. The model, in other words, is functioning as a prompt rather than as a knowledge base. Attaching better tools to the same model does not resolve this. The model and the methodology that governs its construction need to be designed together for AI participation, treating the model as a machine-queryable knowledge substrate rather than a structured artefact for human navigation, and that co-design has not yet happened in any systematic way. This paper works through a concrete workflow scenario to show what that gap looks like in practice, proposes three principles that jointly characterise what model and methodology must achieve together, and closes with a call to the community to begin this work before the architectural decisions about AI integration settle without the methodological foundation they require.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper argues that MBSE models were not designed for AI consumption and currently function as prompts rather than knowledge bases: even well-prompted frontier models draw reasoning from training data instead of retrieving from the model, and different tools produce inconsistent results over the same model with no adjudication mechanism. It illustrates this gap via a concrete workflow scenario, asserts that attaching improved tools will not suffice, proposes three principles for jointly co-designing the model (as a machine-queryable knowledge substrate) and its construction methodology, and issues a community call to action before AI integration architectures are finalized without this foundation.
Significance. If the co-design agenda can be realized, the work could improve reliability and consistency of AI-assisted MBSE by shifting models from prompt-like artefacts to verifiable, queryable substrates, potentially reducing hallucinations and tool disagreements while preserving utility for human engineers. The paper's observational and forward-looking character means its significance hinges on subsequent operationalization and validation of the three principles.
major comments (2)
- [Abstract and workflow scenario] Abstract and workflow scenario section: the central claim that 'attaching better tools to the same model does not resolve this' is load-bearing but rests on an observational assertion without a detailed breakdown of the scenario showing why tool-level fixes (e.g., better retrieval-augmented generation) cannot enforce model-based retrieval over training-data generation.
- [Three principles section] Section proposing the three principles: the principles are presented as jointly characterizing what model and methodology must achieve, yet no concrete mapping, example application, or pseudocode is supplied demonstrating how they would enforce retrieval from the model while preserving structured human navigation; this leaves the feasibility of avoiding the noted trade-off unaddressed.
minor comments (1)
- [Abstract and conclusion] The abstract and closing call to action could more explicitly reference related work on knowledge representation in MBSE or AI-augmented modeling to situate the novelty of the co-design agenda.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed review. The comments correctly identify areas where the manuscript's observational character leaves key claims in need of further elaboration. We address each point below and will incorporate targeted revisions to strengthen the argument while preserving the paper's scope as a position piece.
Point-by-point responses
Referee: Abstract and workflow scenario section: the central claim that 'attaching better tools to the same model does not resolve this' is load-bearing but rests on an observational assertion without a detailed breakdown of the scenario showing why tool-level fixes (e.g., better retrieval-augmented generation) cannot enforce model-based retrieval over training-data generation.
Authors: We agree that the claim is central and that the current workflow scenario presents the inconsistency and training-data reliance observationally rather than through an exhaustive analysis of tool-level alternatives. The scenario is meant to show that frontier models, even when prompted competently over a conformant SysML artefact, still generate rather than retrieve. To address the referee's point, we will expand the scenario section with a short breakdown of why approaches such as RAG do not resolve the underlying issue: such techniques still require the MBSE model to be treated as an unstructured or loosely indexed corpus whose semantics must be reconstructed at query time, rather than as a substrate whose structure itself encodes the necessary relationships for consistent, model-grounded reasoning. This elaboration will be added without changing the paper's forward-looking stance. revision: yes
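The distinction the response draws — a loosely indexed corpus reconstructed at query time versus a substrate whose structure encodes the relationships — can be illustrated with a deliberately crude sketch. Everything here (the chunk strings, edge tuples, and function names) is a hypothetical illustration, not the paper's proposal:

```python
# RAG-style access: the model is flattened to text chunks and matched by
# surface similarity; relationships must be reconstructed at query time.
chunks = [
    "block Pump satisfies requirement REQ-1",
    "block Filter connects to Pump via port p1",
]

def rag_lookup(question, chunks):
    """Crude similarity stand-in: return chunks sharing words with the
    question. Nothing guarantees the answer reflects model structure."""
    qwords = set(question.lower().split())
    return [c for c in chunks if qwords & set(c.lower().split())]

# Substrate-style access: the same facts as explicit, typed edges that a
# query traverses deterministically, with provenance built in.
edges = [("Pump", "satisfies", "REQ-1"), ("Filter", "connects_to", "Pump")]

def substrate_lookup(subject, relation, edges):
    return [(s, r, o) for (s, r, o) in edges if s == subject and r == relation]

print(rag_lookup("what satisfies req-1", chunks))    # surface word matches only
print(substrate_lookup("Pump", "satisfies", edges))  # [('Pump', 'satisfies', 'REQ-1')]
```

The asymmetry is the argument in miniature: the first lookup returns text that a language model must still interpret, while the second returns the relationship itself, so the answer is determined by the model's structure rather than by whatever the consumer generates around the retrieved text.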
Referee: Section proposing the three principles: the principles are presented as jointly characterizing what model and methodology must achieve, yet no concrete mapping, example application, or pseudocode is supplied demonstrating how they would enforce retrieval from the model while preserving structured human navigation; this leaves the feasibility of avoiding the noted trade-off unaddressed.
Authors: The principles are offered at a conceptual level to characterise the joint requirements on model and methodology. We accept that the absence of a concrete mapping or illustrative application leaves the feasibility of avoiding the human-navigation versus machine-retrieval trade-off insufficiently demonstrated. In revision we will add a concise worked example applying the three principles to a representative MBSE element (e.g., a block with ports and constraints), showing the minimal structural annotations required for queryability alongside the corresponding methodological steps that preserve diagrammatic readability for engineers. A brief pseudocode sketch of a retrieval interface consistent with the principles will also be included to illustrate how model-based retrieval can be enforced. These additions will remain short so as not to shift the paper from its position-paper character. revision: yes
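A worked example of the kind the response promises — a block with ports and constraints, annotated for queryability while keeping human-readable names — might look roughly as follows. This is a hedged sketch under assumed structures; the class names, IDs, and the `retrieve` interface are invented for illustration and are not the authors' design:

```python
from dataclasses import dataclass, field

# Hypothetical minimal annotation of one MBSE element: stable IDs and
# typed relations for machine queries, human-readable names preserved
# for diagrammatic use by engineers.
@dataclass
class Port:
    id: str
    name: str
    direction: str  # "in" or "out"

@dataclass
class Block:
    id: str
    name: str
    ports: list = field(default_factory=list)
    constraints: list = field(default_factory=list)  # (constraint_id, expr)

pump = Block(
    id="blk-001", name="Pump",
    ports=[Port("prt-001", "inlet", "in"), Port("prt-002", "outlet", "out")],
    constraints=[("con-001", "flow_out <= flow_in")],
)

def retrieve(block, question_kind):
    """Sketch of a retrieval interface: every answer is paired with the
    IDs of the model elements it came from, so a consumer can cite them."""
    if question_kind == "outputs":
        hits = [p for p in block.ports if p.direction == "out"]
        return [p.name for p in hits], [p.id for p in hits]
    if question_kind == "constraints":
        return ([expr for _, expr in block.constraints],
                [cid for cid, _ in block.constraints])
    return [], []

answer, provenance = retrieve(pump, "outputs")
print(answer, provenance)  # ['outlet'] ['prt-002']
```

The design choice worth noting is that provenance is part of the return type, not an afterthought: an AI consumer answering through this interface cannot produce a claim without an element ID to ground it, which is the enforcement property the referee asked to see demonstrated.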
Circularity Check
No circularity: observational proposal with no derivations or self-referential reductions
Full rationale
The paper contains no equations, parameters, derivations, or formal chains that could reduce to inputs by construction. It is a forward-looking position paper that identifies a gap in MBSE-AI integration through concrete workflow examples, proposes three high-level principles for co-design, and issues a community call to action. These elements are presented as recommendations rather than results derived from prior claims within the paper itself. No self-citations appear as load-bearing justifications for uniqueness or ansatzes, and the argument does not rename known results or smuggle in fitted inputs as predictions. The central thesis—that models and methodologies must be co-designed for machine-queryable use—rests on described practical inconsistencies rather than any self-referential loop.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption: Well-prompted frontier models produce competent but training-derived output over conformant SysML models.
- domain assumption: Different tools over the same model produce different results with no record to adjudicate between them.
Reference graph
Works this paper leans on
- [1] Kira X. Campo, Thomas Teper, Cassandra E. Eaton, Ashley M. Shipman, Gaurav Bhatia, and Bryan Mesmer. Model-based systems engineering: Evaluating perceived value, metrics, and evidence through literature. Systems Engineering, 26(1):104–129. doi: 10.1002/sys.21644.
- [2] Maroun Chami and Jean-Michel Bruel. A survey on MBSE adoption challenges. In INCOSE EMEA Sector Systems Engineering Conference (EMEASEC 2018), Berlin, Germany. doi: 10.1002/sys.70032.
- [3] Karissa Henderson and Alejandro Salado. Value and benefits of model-based systems engineering (MBSE): Evidence from the literature. Systems Engineering, 24(1):51–66. doi: 10.1002/sys.21566.
- [4] Karissa Henderson, Thomas McDermott, and Alejandro Salado. MBSE adoption experiences in organizations: Lessons learned. Systems Engineering, 27:214–239. doi: 10.1002/sys.21717.
- [5] International Council on Systems Engineering. Systems engineering vision 2035. URL: https://www.incose.org/publications/se-vision-2035.
- [6] Object Management Group. OMG systems modeling language (SysML) version 2.0. Technical report, Object Management Group, September. URL: https://www.omg.org/spec/SysML/2.0/.
- [7] Tong Duy Son, Zhihao Liu, Piero Brigida, Yerlan Akhmetov, Gurudevan Devarajan, Kai Liu, and Ajinkya Bhave. Automotive engineering-centric agentic AI workflow framework. arXiv preprint arXiv:2604.07784.
- [8] Caihua Zhu et al. MBSE 2.0: Toward more integrated, comprehensive, and intelligent MBSE. Systems, 13(7):584. doi: 10.3390/systems13070584.