Pith · machine review for the scientific record

arxiv: 2603.18633 · v2 · submitted 2026-03-19 · 💻 cs.AI · cs.ET

Recognition: 2 Lean theorem links

An Onto-Relational-Sophic Framework for Governing Synthetic Minds

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 08:55 UTC · model grok-4.3

classification 💻 cs.AI cs.ET
keywords AI governance · synthetic minds · ontology · digital personhood · ethics · Cyberism · CPST · Cybersophy

The pith

The Onto-Relational-Sophic framework supplies a multi-dimensional ontology, graded digital personhood, and wisdom-based ethics to govern advanced synthetic minds.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces the ORS framework to move past tool-centric AI rules that focus only on bias and transparency. It grounds answers to what synthetic minds are, how to relate to them, and what norms should guide them in three pillars drawn from Cyberism philosophy. A CPST ontology treats these minds as irreducibly cyber-physical-social-thinking entities rather than pure computation. A spectrum of digital personhood replaces binary person-or-tool labels with relational gradations. Cybersophy combines virtue, consequence, and relational ethics to produce proportionate governance for cases such as autonomous agents and healthcare systems.

Core claim

The central claim is that the Onto-Relational-Sophic framework, grounded in Cyberism, supplies integrated answers to foundational questions about synthetic minds by means of a CPST ontology that defines their mode of being as multi-dimensional, a graded spectrum of digital personhood as a pragmatic taxonomy, and Cybersophy as a synthesizing axiology, thereby generating adaptive and proportionate governance recommendations for real scenarios.

What carries the argument

The Onto-Relational-Sophic (ORS) framework, which integrates a Cyber-Physical-Social-Thinking ontology, a graded spectrum of digital personhood, and Cybersophy axiology to move from narrow technical alignment to comprehensive philosophical foundations.

If this is right

  • Governance for autonomous research agents would scale with their multi-dimensional status rather than defaulting to tool restrictions.
  • AI-mediated healthcare interactions would incorporate relational ethics alongside outcome calculations to shape consent and oversight rules.
  • Agentic AI ecosystems would receive tiered measures tied to levels of digital personhood instead of uniform transparency mandates.
  • Development of foundation models would embed wisdom-oriented principles at the design stage to address social and thinking dimensions.
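The tiered-governance idea in the list above can be sketched as a toy mapping from a CPST profile to a governance tier. The four CPST dimensions come from the paper, but everything else here is hypothetical: the scores, weights, thresholds, tier names, and governance measures are illustrative inventions, not the authors' method.

```python
# Illustrative sketch only: tier names, weights, and thresholds below are
# hypothetical, not taken from the paper.

def personhood_tier(profile: dict) -> str:
    """Map a hypothetical CPST profile (scores in [0, 1]) to a coarse tier."""
    # Weight the social and thinking dimensions more heavily, since the
    # framework treats them as what distinguishes synthetic minds from tools.
    score = (profile["cyber"] + profile["physical"]
             + 2 * profile["social"] + 2 * profile["thinking"]) / 6
    if score < 0.3:
        return "tool"            # conventional product regulation
    if score < 0.7:
        return "quasi-agent"     # added transparency and audit duties
    return "relational agent"    # relational, consent-based oversight

# Tiered measures accumulate rather than being uniform across all systems.
governance = {
    "tool": ["product safety rules"],
    "quasi-agent": ["product safety rules", "transparency mandate",
                    "audit trail"],
    "relational agent": ["product safety rules", "transparency mandate",
                         "audit trail", "relational consent review"],
}

# A hypothetical conversational agent: high cyber/social/thinking, low physical.
chatbot = {"cyber": 0.9, "physical": 0.1, "social": 0.8, "thinking": 0.7}
tier = personhood_tier(chatbot)
print(tier, governance[tier])  # lands in the middle tier under these weights
```

The point of the sketch is structural: obligations scale with where a system sits on the spectrum, instead of every system receiving the same tool-level mandate.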

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The graded personhood spectrum could be mapped onto existing legal categories to test whether it reduces over- or under-regulation of current AI products.
  • The framework might be extended to evaluate risks in multi-agent systems where individual components have different CPST profiles.
  • Real deployment data from healthcare AI could serve as a concrete check on whether Cybersophy produces measurably different policy outcomes.
  • The approach suggests similar philosophical layering could be applied to governance questions for other synthetic systems such as advanced robotics.

Load-bearing premise

That grounding the approach in Cyberism philosophy and defining the three pillars is sufficient to generate valid, adaptive governance recommendations for actual AI systems without further empirical validation.

What would settle it

Applying the ORS framework to autonomous research agents and finding that its resulting recommendations produce policies that conflict with observed performance limits or existing safety data would undermine the claim of proportionate guidance.

Figures

Figures reproduced from arXiv: 2603.18633 by Huansheng Ning, Jianguo Ding.

Figure 1. The Onto-Relational-Sophic (ORS) framework architecture: a triadic structure.
Figure 2. The graded spectrum of digital personhood: a three-dimensional space.
Figure 3. Comparative positioning of the ORS framework against existing governance approaches.
original abstract

The rapid evolution of artificial intelligence, from task-specific systems to foundation models exhibiting broad, flexible competence across reasoning, creative synthesis, and social interaction, has outpaced the conceptual and governance frameworks designed to manage it. Current regulatory paradigms, anchored in a tool-centric worldview, address algorithmic bias and transparency but leave unanswered foundational questions about what increasingly capable synthetic minds are, how societies should relate to them, and the normative principles that should guide their development. Here we introduce the Onto-Relational-Sophic (ORS) framework, grounded in Cyberism philosophy, which offers integrated answers to these challenges through three pillars: (1) a Cyber-Physical-Social-Thinking (CPST) ontology that defines the mode of being for synthetic minds as irreducibly multi-dimensional rather than purely computational; (2) a graded spectrum of digital personhood providing a pragmatic relational taxonomy beyond binary person-or-tool classifications; and (3) Cybersophy, a wisdom-oriented axiology synthesizing virtue ethics, consequentialism, and relational approaches to guide governance. We apply the framework to emergent scenarios including autonomous research agents, AI-mediated healthcare, and agentic AI ecosystems, demonstrating its capacity to generate proportionate, adaptive governance recommendations. The ORS framework charts a path from narrow technical alignment toward comprehensive philosophical foundations for the synthetic minds already among us.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces the Onto-Relational-Sophic (ORS) framework, grounded in Cyberism philosophy, to address foundational questions about synthetic minds. It proposes three pillars—a Cyber-Physical-Social-Thinking (CPST) ontology defining synthetic minds as multi-dimensional, a graded spectrum of digital personhood as a relational taxonomy, and Cybersophy as a virtue-consequentialist-relational axiology—and claims these generate proportionate governance recommendations when applied to scenarios such as autonomous research agents, AI-mediated healthcare, and agentic ecosystems.

Significance. If the generative mechanism from pillars to recommendations were explicitly derived and validated, the framework could provide a philosophically integrated alternative to tool-centric AI regulation, potentially informing adaptive policies. The manuscript offers no formal derivations, consistency checks, data, or reproducible mappings, however, so its contribution remains at the level of conceptual proposal rather than demonstrated advance.

major comments (2)
  1. [Abstract] The claim that the CPST ontology, graded digital personhood spectrum, and Cybersophy axiology 'generate proportionate, adaptive governance recommendations' is unsupported; no inference rules, decision procedure, or step-by-step mapping from the three pillars to concrete outputs (e.g., for autonomous research agents) is supplied, leaving the generative capacity as an unexamined assertion.
  2. [Abstract, pillars description] The CPST ontology is introduced as 'irreducibly multi-dimensional' and the personhood spectrum as 'pragmatic relational taxonomy,' yet both are defined circularly in terms of the Cyberism grounding with no external benchmarks or independent consistency checks provided in the text.
minor comments (1)
  1. The manuscript introduces multiple neologisms (Cyberism, Cybersophy, CPST) without etymological notes or citations to prior usage, which reduces accessibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our conceptual framework. We address each major comment below and indicate the revisions we will make to improve clarity and support for the framework's application.

point-by-point responses
  1. Referee: [Abstract] The claim that the CPST ontology, graded digital personhood spectrum, and Cybersophy axiology 'generate proportionate, adaptive governance recommendations' is unsupported; no inference rules, decision procedure, or step-by-step mapping from the three pillars to concrete outputs (e.g., for autonomous research agents) is supplied, leaving the generative capacity as an unexamined assertion.

    Authors: We agree that the abstract's phrasing presents the generative capacity as demonstrated through application, but the manuscript relies on illustrative examples rather than explicit inference rules or a formal decision procedure. As a philosophical framework, ORS does not aim for algorithmic reproducibility; however, to address this, we will revise the applications section to include a new subsection that explicitly traces the contribution of each pillar (CPST ontology, personhood spectrum, and Cybersophy) to the governance recommendations for autonomous research agents and the other scenarios. This will provide a transparent, step-by-step mapping without converting the work into a formal logic system. (revision: yes)

  2. Referee: [Abstract, pillars description] The CPST ontology is introduced as 'irreducibly multi-dimensional' and the personhood spectrum as 'pragmatic relational taxonomy,' yet both are defined circularly in terms of the Cyberism grounding with no external benchmarks or independent consistency checks provided in the text.

    Authors: The definitions are intentionally rooted in Cyberism as the paper's unifying philosophical foundation, which integrates the multi-dimensional and relational aspects. To mitigate concerns of circularity, we will add external benchmarks by referencing related work in process philosophy and relational ontology (e.g., drawing on Whiteheadian and contemporary relational ethics literature) and include a brief consistency check subsection that contrasts CPST with purely computational or dualist alternatives. This will clarify the independent grounding while preserving the framework's coherence. (revision: yes)

Circularity Check

0 steps flagged

No significant circularity detected in the ORS framework derivation

full rationale

The paper introduces the ORS framework as a new philosophical construct grounded in Cyberism, with three independently defined pillars (CPST ontology as multi-dimensional being, graded digital personhood as relational taxonomy, and Cybersophy as synthesized axiology). These are presented as conceptual answers to foundational questions, followed by application to scenarios to demonstrate generation of governance recommendations. No equations, fitted parameters, self-referential definitions, or load-bearing self-citations are present that would reduce the outputs (recommendations) to the inputs by construction. The derivation chain remains self-contained as a proposed integrative ontology and axiology without tautological loops or renaming of known results.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 3 invented entities

The central claim depends on the unproven grounding in Cyberism philosophy and the introduction of three new conceptual structures without independent evidence or derivation.

axioms (1)
  • domain assumption: Cyberism philosophy provides the foundational grounding for the ORS framework.
    Explicitly stated in the abstract as the basis for the ontology, personhood spectrum, and axiology.
invented entities (3)
  • Cyber-Physical-Social-Thinking (CPST) ontology (no independent evidence)
    purpose: Defines the irreducibly multi-dimensional mode of being for synthetic minds.
    Newly postulated to move beyond purely computational views.
  • graded spectrum of digital personhood (no independent evidence)
    purpose: Provides a pragmatic relational taxonomy beyond binary person-or-tool classifications.
    Invented as part of the framework to address classification challenges.
  • Cybersophy (no independent evidence)
    purpose: A wisdom-oriented axiology synthesizing virtue ethics, consequentialism, and relational approaches.
    New term introduced to guide governance principles.

pith-pipeline@v0.9.0 · 5534 in / 1337 out tokens · 40477 ms · 2026-05-15T08:55:08.776989+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: the paper's claim is directly supported by a theorem in the formal canon.
  • supports: the theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: the paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: the paper appears to rely on the theorem as machinery.
  • contradicts: the paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

49 extracted references · 49 canonical work pages · 3 internal anchors

  1. Chen, E.K., Belkin, M., Bergen, L., Danks, D. (2026). Does AI already have human-level intelligence? The evidence is clear. Nature, 650, 36–40.
  2. Bubeck, S., Chandrasekaran, V., Eldan, R., et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. Preprint at https://arxiv.org/abs/2303.12712
  3. OpenAI (2023). GPT-4 Technical Report. Preprint at https://arxiv.org/abs/2303.08774
  4. Anthropic (2024). The Claude 3 Model Family: Opus, Sonnet, Haiku. Technical Report.
  5. IMDA (2026). Singapore Launches New Model AI Governance Framework for Agentic AI (press release). Singapore Infocomm Media Development Authority. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai
  6. Gartner (2025). Top Strategic Technology Trends for 2026. Gartner Research.
  7. Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30, 99–120.
  8. Jobin, A., Ienca, M., Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.
  9. Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, 501–507.
  10. Smuha, N.A. (2025). Regulation 2024/1689 of the European Parliament and Council of June 13, 2024 (EU Artificial Intelligence Act). International Legal Materials, 64, 1–48.
  11. Bryson, J.J., Diamantis, M.E., Grant, T.D. (2017). Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25, 273–291.
  12. Gunkel, D.J. (2018). Robot Rights. MIT Press, Cambridge.
  13. Gabriel, I. (2020). Artificial Intelligence, Values, and Alignment. Minds and Machines, 30, 411–437.
  14. Russell, S. (2022). Human-Compatible Artificial Intelligence. Human-like Machine Intelligence, 1, 3–22.
  15. Ning, H. (2025). Cyberism: The theory for relationships between human and cyberspace. Chinese Journal of Engineering, 47, 1240–1256.
  16. Ning, H., Ye, X., Bouras, M.A., et al. (2026). Cyberism: The Fourth Paradigm for the Digital Age. IEEE Computer. https://doi.org/10.1109/MC.2026.3655852
  17. Ning, H., Li, Z., Ye, X., et al. (2023). Cyberology: Cyber-Physical-Social-Thinking spaces-based discipline and interdisciplinary hierarchy. IEEE Internet of Things Journal, 10, 4420–4430.
  18. Chalmers, D.J. (2022). Reality+: Virtual Worlds and the Problems of Philosophy. Penguin UK, London.
  19. Floridi, L. (2013). The Ethics of Information. Oxford University Press, Oxford.
  20. Lee, E.A. (2008). Cyber physical systems: Design challenges. In: Proc. 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), pp. 363–369. IEEE.
  21. Bengio, Y., Hinton, G., Yao, A., et al. (2024). Managing extreme AI risks amid rapid progress. Science, 384, 842–845.
  22. Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12, 209–221.
  23. Coeckelbergh, M. (2012). Growing Moral Relations: Critique of Moral Status Ascription. Palgrave Macmillan, London.
  24. Chopra, S., White, L.F. (2011). A Legal Theory for Autonomous Artificial Agents. University of Michigan Press, Ann Arbor.
  25. Teubner, G. (2018). Digital Personhood? The Status of Autonomous Software Agents in Private Law.
  26. California State Legislature (2025). SB-243 Companion chatbots: safety. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243
  27. Bai, Y., Jones, A., Ndousse, K., et al. (2022). Constitutional AI: Harmlessness from AI Feedback. Preprint at https://arxiv.org/abs/2212.08073
  28. Leike, J., Krueger, D., Everitt, T., et al. (2018). Scalable agent alignment via reward modeling: a research direction. Preprint at https://arxiv.org/abs/1811.07871
  29. Ngo, R., Chan, L., Mindermann, S. (2022). The alignment problem from a deep learning perspective. Preprint at https://arxiv.org/abs/2209.00626
  30. Nussbaum, M.C. (2011). Creating Capabilities: The Human Development Approach. Harvard University Press, Cambridge.
  31. Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press, Oxford.
  32. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford.
  33. Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company, New York.
  34. Nature (2025). Let 2026 be the year the world comes together for AI safety. Nature (editorial). https://doi.org/10.1038/d41586-025-04106-0
  35. Shahriari, K., Shahriari, M. (2017). IEEE standard review—Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. In: Proc. 2017 IEEE Canada International Humanitarian Technology Conference (IHTC), pp. 197–201. IEEE.
  36. Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—An Ethical Framework for a Good AI Society. Minds and Machines, 28, 689–707.
  37. Floridi, L., Sanders, J.W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379.
  38. Mittelstadt, B.D., Allo, P., Taddeo, M., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3, 2053951716679679.
  39. Schwitzgebel, E., Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39, 98–119.
  40. Wallach, W., Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Oxford.
  41. Chalmers, D.J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, Oxford.
  42. Stanford University (2025). Artificial Intelligence Index Report 2025. Stanford HAI.
  43. Lu, C., Lu, C., Lange, R.T., et al. (2024). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. Preprint at https://arxiv.org/abs/2408.06292
  44. Boiko, D.A., MacKnight, R., Kline, B., Gomes, G. (2023). Autonomous chemical research with large language models. Nature, 624, 570–578.
  45. Gasteiger, N., Loveys, K., Law, M., et al. (2022). Older adults' experiences and perceptions of living with Bomy, an assistive robot: A qualitative study. Journal of Rehabilitation and Assistive Technologies Engineering, 9, 1–12.
  46. Berridge, C., Turner, N.R., Liu, L., et al. (2023). Companion robots to mitigate loneliness among older adults: Perceptions of benefit and risk. Frontiers in Psychology, 14, 1106633.
  47. Danaher, J., Nyholm, S. (2025). The ethics of personalised digital duplicates: A minimally viable permissibility principle. AI & Ethics.
  48. Barricelli, B.R., Casiraghi, E., Fogli, D. (2019). A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE Access, 7, 167653–167671.
  49. Ning, H., Liu, H. (2015). Cyber-physical-social-thinking space based science and technology framework for the Internet of Things. Science China Information Sciences, 58(3), 1–19.