An Onto-Relational-Sophic Framework for Governing Synthetic Minds
Recognition: 2 Lean theorem links
Pith reviewed 2026-05-15 08:55 UTC · model grok-4.3
The pith
The Onto-Relational-Sophic framework supplies a multi-dimensional ontology, graded digital personhood, and wisdom-based ethics to govern advanced synthetic minds.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that the Onto-Relational-Sophic framework, grounded in Cyberism, supplies integrated answers to foundational questions about synthetic minds. It does so through a CPST ontology that defines their mode of being as multi-dimensional, a graded spectrum of digital personhood that serves as a pragmatic taxonomy, and Cybersophy as a synthesizing axiology, thereby generating adaptive, proportionate governance recommendations for real scenarios.
What carries the argument
The Onto-Relational-Sophic (ORS) framework carries the argument: it integrates a Cyber-Physical-Social-Thinking ontology, a graded spectrum of digital personhood, and a Cybersophy axiology to move from narrow technical alignment toward comprehensive philosophical foundations.
If this is right
- Governance for autonomous research agents would scale with their multi-dimensional status rather than defaulting to tool restrictions.
- AI-mediated healthcare interactions would incorporate relational ethics alongside outcome calculations to shape consent and oversight rules.
- Agentic AI ecosystems would receive tiered measures tied to levels of digital personhood instead of uniform transparency mandates.
- Development of foundation models would embed wisdom-oriented principles at the design stage to address social and thinking dimensions.
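The tiered measures mentioned above can be made concrete with a small sketch. The paper does not enumerate personhood levels or their corresponding measures, so every tier name and governance profile below is an illustrative assumption, not the authors' scheme:

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical tiers on the graded personhood spectrum; the paper
# proposes a spectrum but does not enumerate concrete levels.
class PersonhoodTier(IntEnum):
    TOOL = 0
    ASSISTANT = 1
    AGENT = 2
    QUASI_PERSON = 3

@dataclass
class GovernanceProfile:
    transparency: str
    human_oversight: str
    liability: str

# Illustrative tier-to-measure mapping; in the ORS framework the
# measures would be derived from the CPST profile and Cybersophy,
# not looked up from a fixed table.
MEASURES = {
    PersonhoodTier.TOOL: GovernanceProfile(
        "basic model card", "none required", "operator"),
    PersonhoodTier.ASSISTANT: GovernanceProfile(
        "usage logging", "human-in-the-loop", "operator"),
    PersonhoodTier.AGENT: GovernanceProfile(
        "decision audit trail", "human-on-the-loop", "shared"),
    PersonhoodTier.QUASI_PERSON: GovernanceProfile(
        "full behavioural audit", "standing review board", "graded legal status"),
}

def measures_for(tier: PersonhoodTier) -> GovernanceProfile:
    """Return tiered measures instead of a uniform mandate."""
    return MEASURES[tier]
```

The point of the sketch is the shape of the policy, not its content: oversight scales with the assigned tier rather than applying one transparency mandate to every system.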
Where Pith is reading between the lines
- The graded personhood spectrum could be mapped onto existing legal categories to test whether it reduces over- or under-regulation of current AI products.
- The framework might be extended to evaluate risks in multi-agent systems where individual components have different CPST profiles.
- Real deployment data from healthcare AI could serve as a concrete check on whether Cybersophy produces measurably different policy outcomes.
- The approach suggests similar philosophical layering could be applied to governance questions for other synthetic systems such as advanced robotics.
Load-bearing premise
That grounding the approach in Cyberism philosophy and defining the three pillars is sufficient to generate valid, adaptive governance recommendations for actual AI systems without further empirical validation.
What would settle it
Applying the ORS framework to autonomous research agents and finding that the resulting recommendations conflict with observed performance limits or existing safety data would undermine the claim of proportionate guidance.
Figures
Original abstract
The rapid evolution of artificial intelligence, from task-specific systems to foundation models exhibiting broad, flexible competence across reasoning, creative synthesis, and social interaction, has outpaced the conceptual and governance frameworks designed to manage it. Current regulatory paradigms, anchored in a tool-centric worldview, address algorithmic bias and transparency but leave unanswered foundational questions about what increasingly capable synthetic minds are, how societies should relate to them, and the normative principles that should guide their development. Here we introduce the Onto-Relational-Sophic (ORS) framework, grounded in Cyberism philosophy, which offers integrated answers to these challenges through three pillars: (1) a Cyber-Physical-Social-Thinking (CPST) ontology that defines the mode of being for synthetic minds as irreducibly multi-dimensional rather than purely computational; (2) a graded spectrum of digital personhood providing a pragmatic relational taxonomy beyond binary person-or-tool classifications; and (3) Cybersophy, a wisdom-oriented axiology synthesizing virtue ethics, consequentialism, and relational approaches to guide governance. We apply the framework to emergent scenarios including autonomous research agents, AI-mediated healthcare, and agentic AI ecosystems, demonstrating its capacity to generate proportionate, adaptive governance recommendations. The ORS framework charts a path from narrow technical alignment toward comprehensive philosophical foundations for the synthetic minds already among us.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces the Onto-Relational-Sophic (ORS) framework, grounded in Cyberism philosophy, to address foundational questions about synthetic minds. It proposes three pillars—a Cyber-Physical-Social-Thinking (CPST) ontology defining synthetic minds as multi-dimensional, a graded spectrum of digital personhood as a relational taxonomy, and Cybersophy as a virtue-consequentialist-relational axiology—and claims these generate proportionate governance recommendations when applied to scenarios such as autonomous research agents, AI-mediated healthcare, and agentic ecosystems.
Significance. If the generative mechanism from pillars to recommendations were explicitly derived and validated, the framework could provide a philosophically integrated alternative to tool-centric AI regulation, potentially informing adaptive policies. However, the manuscript offers no formal derivations, consistency checks, data, or reproducible mappings, so its contribution remains at the level of conceptual proposal rather than demonstrated advance.
major comments (2)
- [Abstract] Abstract: The claim that the CPST ontology, graded digital personhood spectrum, and Cybersophy axiology 'generate proportionate, adaptive governance recommendations' is unsupported; no inference rules, decision procedure, or step-by-step mapping from the three pillars to concrete outputs (e.g., for autonomous research agents) is supplied, leaving the generative capacity as an unexamined assertion.
- [Abstract] Abstract (pillars description): The CPST ontology is introduced as 'irreducibly multi-dimensional' and the personhood spectrum as 'pragmatic relational taxonomy,' yet both are defined circularly in terms of the Cyberism grounding with no external benchmarks or independent consistency checks provided in the text.
minor comments (1)
- The manuscript introduces multiple neologisms (Cyberism, Cybersophy, CPST) without etymological notes or citations to prior usage, which reduces accessibility.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our conceptual framework. We address each major comment below and indicate the revisions we will make to improve clarity and support for the framework's application.
Point-by-point responses
-
Referee: [Abstract] Abstract: The claim that the CPST ontology, graded digital personhood spectrum, and Cybersophy axiology 'generate proportionate, adaptive governance recommendations' is unsupported; no inference rules, decision procedure, or step-by-step mapping from the three pillars to concrete outputs (e.g., for autonomous research agents) is supplied, leaving the generative capacity as an unexamined assertion.
Authors: We agree that the abstract's phrasing presents the generative capacity as demonstrated through application, but the manuscript relies on illustrative examples rather than explicit inference rules or a formal decision procedure. As a philosophical framework, ORS does not aim for algorithmic reproducibility; however, to address this, we will revise the applications section to include a new subsection that explicitly traces the contribution of each pillar (CPST ontology, personhood spectrum, and Cybersophy) to the governance recommendations for autonomous research agents and the other scenarios. This will provide a transparent, step-by-step mapping without converting the work into a formal logic system. revision: yes
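The "transparent, step-by-step mapping" the authors promise could, in pseudocode form, look like the following sketch. The paper supplies no such procedure, so every threshold, rule, and recommendation string here is an assumption introduced purely to show what an explicit pillar-by-pillar trace would involve:

```python
# Hypothetical trace of how each ORS pillar could contribute to a
# governance recommendation. All rules below are illustrative
# assumptions, not the authors' actual procedure.

def recommend(cpst_profile: dict, personhood_level: int) -> list:
    """Derive recommendations from a CPST profile (dimension ->
    salience in [0, 1]) and an assigned personhood level."""
    recs = []
    # Pillar 1: CPST ontology - which dimensions of being are salient?
    if cpst_profile.get("social", 0) > 0.5:
        recs.append("relational-consent safeguards")
    if cpst_profile.get("thinking", 0) > 0.5:
        recs.append("reasoning-transparency audit")
    # Pillar 2: graded personhood - scale oversight with the level.
    recs.append(f"tier-{personhood_level} oversight regime")
    # Pillar 3: Cybersophy - weigh virtue, consequence, and relation.
    recs.append("wisdom review: virtue/consequence/relational balance")
    return recs

# An autonomous research agent: strongly cyber and thinking,
# moderately social, barely physical (salience values invented).
agent = {"cyber": 0.9, "physical": 0.1, "social": 0.6, "thinking": 0.8}
print(recommend(agent, personhood_level=2))
```

Even this toy version makes the referee's demand testable: each output line is attributable to exactly one pillar, which is what the promised revised subsection would need to provide in prose.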
-
Referee: [Abstract] Abstract (pillars description): The CPST ontology is introduced as 'irreducibly multi-dimensional' and the personhood spectrum as 'pragmatic relational taxonomy,' yet both are defined circularly in terms of the Cyberism grounding with no external benchmarks or independent consistency checks provided in the text.
Authors: The definitions are intentionally rooted in Cyberism as the paper's unifying philosophical foundation, which integrates the multi-dimensional and relational aspects. To mitigate concerns of circularity, we will add external benchmarks by referencing related work in process philosophy and relational ontology (e.g., drawing on Whiteheadian and contemporary relational ethics literature) and include a brief consistency check subsection that contrasts CPST with purely computational or dualist alternatives. This will clarify the independent grounding while preserving the framework's coherence. revision: yes
Circularity Check
No significant circularity detected in the ORS framework derivation
Full rationale
The paper introduces the ORS framework as a new philosophical construct grounded in Cyberism, with three independently defined pillars (CPST ontology as multi-dimensional being, graded digital personhood as relational taxonomy, and Cybersophy as synthesized axiology). These are presented as conceptual answers to foundational questions, followed by application to scenarios to demonstrate generation of governance recommendations. No equations, fitted parameters, self-referential definitions, or load-bearing self-citations are present that would reduce the outputs (recommendations) to the inputs by construction. The derivation chain remains self-contained as a proposed integrative ontology and axiology without tautological loops or renaming of known results.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Cyberism philosophy provides the foundational grounding for the ORS framework
invented entities (3)
- Cyber-Physical-Social-Thinking (CPST) ontology: no independent evidence
- graded spectrum of digital personhood: no independent evidence
- Cybersophy: no independent evidence
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/RealityFromDistinction: reality_from_one_distinction (unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "three pillars: (1) a Cyber-Physical-Social-Thinking (CPST) ontology... (2) a graded spectrum of digital personhood... (3) Cybersophy, a wisdom-oriented axiology"
- IndisputableMonolith/Cost/FunctionalEquation: washburn_uniqueness_aczel (unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Cybersophy... synthesizing virtue ethics, consequentialism, and relational approaches"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Chen, E.K., Belkin, M., Bergen, L., Danks, D. (2026). Does AI already have human-level intelligence? The evidence is clear. Nature, 650, 36–40.
- [2] Bubeck, S., Chandrasekaran, V., Eldan, R., et al. (2023). Sparks of Artificial General Intelligence: Early experiments with GPT-4. Preprint at https://arxiv.org/abs/2303.12712
- [3] OpenAI (2023). GPT-4 Technical Report. Preprint at https://arxiv.org/abs/2303.08774
- [4] Anthropic (2024). The Claude 3 Model Family: Opus, Sonnet, Haiku. Technical Report.
- [5] IMDA (2026). Singapore Launches New Model AI Governance Framework for Agentic AI (press release). Singapore Infocomm Media Development Authority. https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2026/new-model-ai-governance-framework-for-agentic-ai
- [6] Gartner (2025). Top Strategic Technology Trends for 2026. Gartner Research.
- [7] Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines, 30, 99–120.
- [8] Jobin, A., Ienca, M., Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.
- [9] Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, 501–507.
- [10] Smuha, N.A. (2025). Regulation 2024/1689 of the European Parliament and Council of June 13, 2024 (EU Artificial Intelligence Act). International Legal Materials, 64, 1–48.
- [11] Bryson, J.J., Diamantis, M.E., Grant, T.D. (2017). Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law, 25, 273–291.
- [12]
- [13] Gabriel, I. (2020). Artificial Intelligence, Values, and Alignment. Minds and Machines, 30, 411–437.
- [14] Russell, S. (2022). Human-Compatible Artificial Intelligence. Human-like Machine Intelligence, 1, 3–22.
- [15] Ning, H. (2025). Cyberism: The theory for relationships between human and cyberspace. Chinese Journal of Engineering, 47, 1240–1256.
- [16] Ning, H., Ye, X., Bouras, M.A., et al. (2026). Cyberism: The Fourth Paradigm for the Digital Age. IEEE Computer. https://doi.org/10.1109/MC.2026.3655852
- [17] Ning, H., Li, Z., Ye, X., et al. (2023). Cyberology: Cyber-Physical-Social-Thinking spaces-based discipline and interdisciplinary hierarchy. IEEE Internet of Things Journal, 10, 4420–4430.
- [18] Chalmers, D.J. (2022). Reality+: Virtual Worlds and the Problems of Philosophy. Penguin UK, London.
- [19] Floridi, L. (2013). The Ethics of Information. Oxford University Press, Oxford.
- [20] Lee, E.A. (2008). Cyber physical systems: Design challenges. In: Proc. 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), pp. 363–369. IEEE.
- [21] Bengio, Y., Hinton, G., Yao, A., et al. (2024). Managing extreme AI risks amid rapid progress. Science, 384, 842–845.
- [22] Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12, 209–221.
- [23] Coeckelbergh, M. (2012). Growing Moral Relations: Critique of Moral Status Ascription. Palgrave Macmillan, London.
- [24] Chopra, S., White, L.F. (2011). A Legal Theory for Autonomous Artificial Agents. University of Michigan Press, Ann Arbor.
- [25] Teubner, G. (2018). Digital Personhood? The Status of Autonomous Software Agents in Private Law.
- [26] California State Legislature (2025). SB-243 Companion chatbots: safety. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB243
- [27] Bai, Y., Jones, A., Ndousse, K., et al. (2022). Constitutional AI: Harmlessness from AI Feedback. Preprint at https://arxiv.org/abs/2212.08073
- [28] Leike, J., Krueger, D., Everitt, T., et al. (2018). Scalable agent alignment via reward modeling: a research direction. Preprint at https://arxiv.org/abs/1811.07871
- [29]
- [30] Nussbaum, M.C. (2011). Creating Capabilities: The Human Development Approach. Harvard University Press, Cambridge.
- [31] Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press, Oxford.
- [32] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford.
- [33] Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company, New York.
- [34] Nature (2025). Let 2026 be the year the world comes together for AI safety. Nature (editorial). https://doi.org/10.1038/d41586-025-04106-0
- [35] Shahriari, K., Shahriari, M. (2017). IEEE standard review: Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. In: Proc. 2017 IEEE Canada International Humanitarian Technology Conference (IHTC), pp. 197–201. IEEE.
- [36] Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People: An Ethical Framework for a Good AI Society. Minds and Machines, 28, 689–707.
- [37] Floridi, L., Sanders, J.W. (2004). On the morality of artificial agents. Minds and Machines, 14, 349–379.
- [38] Mittelstadt, B.D., Allo, P., Taddeo, M., et al. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3, 2053951716679679.
- [39] Schwitzgebel, E., Garza, M. (2015). A defense of the rights of artificial intelligences. Midwest Studies in Philosophy, 39, 98–119.
- [40] Wallach, W., Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, Oxford.
- [41] Chalmers, D.J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press, Oxford.
- [42] Stanford University (2025). Artificial Intelligence Index Report 2025. Stanford HAI.
- [43] Lu, C., Lu, C., Lange, R.T., et al. (2024). The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. Preprint at https://arxiv.org/abs/2408.06292
- [44] Boiko, D.A., MacKnight, R., Kline, B., Gomes, G. (2023). Autonomous chemical research with large language models. Nature, 624, 570–578.
- [45] Gasteiger, N., Loveys, K., Law, M., et al. (2022). Older adults' experiences and perceptions of living with Bomy, an assistive robot: A qualitative study. Journal of Rehabilitation and Assistive Technologies Engineering, 9, 1–12.
- [46] Berridge, C., Turner, N.R., Liu, L., et al. (2023). Companion robots to mitigate loneliness among older adults: Perceptions of benefit and risk. Frontiers in Psychology, 14, 1106633.
- [47] Danaher, J., Nyholm, S. (2025). The ethics of personalised digital duplicates: A minimally viable permissibility principle. AI & Ethics.
- [48] Barricelli, B.R., Casiraghi, E., Fogli, D. (2019). A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE Access, 7, 167653–167671.
- [49] Ning, H., Liu, H. (2015). Cyber-physical-social-thinking space based science and technology framework for the Internet of Things. Science China Information Sciences, 58(3), 1–19.