Recognition: no theorem link
AI Trust OS -- A Continuous Governance Framework for Autonomous AI Observability and Zero-Trust Compliance in Enterprise Environments
Pith reviewed 2026-05-10 18:43 UTC · model grok-4.3
The pith
A telemetry-first operating layer discovers undocumented AI systems and synthesizes ongoing compliance evidence from existing observability tools.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
AI Trust OS reconceptualizes compliance as an always-on, telemetry-driven operating layer in which AI systems are discovered through observability signals, control assertions are collected by automated probes, and trust artifacts are synthesized continuously. The framework rests on four principles: proactive discovery, telemetry evidence over manual attestation, continuous posture over point-in-time audit, and architecture-backed proof over policy-document trust. It operates through a zero-trust telemetry boundary using ephemeral read-only probes that validate structural metadata without ingressing source code or payload-level PII, with an AI Observability Extractor Agent that scans LangSmith and Datadog LLM telemetry, automatically registering undocumented AI systems and shifting governance from organizational self-report to empirical machine observation.
What carries the argument
The zero-trust telemetry boundary using ephemeral read-only probes and the AI Observability Extractor Agent that scans existing platforms to register undocumented systems automatically.
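To make the mechanism concrete, here is a minimal sketch of the discovery loop described above. It is an illustration under stated assumptions, not the paper's implementation: `TelemetrySource` is a hypothetical read-only adapter standing in for platforms such as LangSmith or Datadog, and the metadata field names are illustrative rather than the paper's schema.

```python
from dataclasses import dataclass, field
from typing import Iterable, Protocol


@dataclass(frozen=True)
class TraceMetadata:
    """Structural metadata only: no payloads, no source code, no PII."""
    system_id: str   # stable identifier derived from the trace stream
    model: str       # model name as reported by the telemetry platform
    endpoint: str    # service emitting the traces
    team: str        # owning team, if the platform tags it


class TelemetrySource(Protocol):
    """Read-only view of one observability platform (e.g., LangSmith, Datadog)."""
    def fetch_trace_metadata(self) -> Iterable[TraceMetadata]:
        ...


@dataclass
class AIInventory:
    """Registry that the ephemeral probes populate."""
    registered: dict[str, TraceMetadata] = field(default_factory=dict)

    def discover(self, sources: Iterable[TelemetrySource]) -> list[TraceMetadata]:
        """Register any system seen in telemetry but absent from the inventory."""
        newly_found: list[TraceMetadata] = []
        for source in sources:
            for meta in source.fetch_trace_metadata():
                if meta.system_id not in self.registered:
                    self.registered[meta.system_id] = meta
                    newly_found.append(meta)
        return newly_found
```

The key design point the sketch captures is that discovery needs nothing from the system's owners: anything that emits traces to a monitored platform is registered, and anything that does not remains invisible.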
If this is right
- Undocumented AI systems are registered automatically from observability signals rather than requiring self-reporting.
- Trust artifacts for multiple regulatory standards are synthesized continuously instead of at discrete audit points.
- Governance evidence shifts from policy documents and attestations to empirical machine observation.
- The same infrastructure supports compliance across ISO 42001, EU AI Act, SOC 2, GDPR, and HIPAA simultaneously.
- Enterprise trust in AI is produced through architectural mechanisms instead of administrative processes.
Where Pith is reading between the lines
- Organizations could respond to new AI deployments with near-real-time governance actions rather than delayed reviews.
- The approach could be tested by seeding an environment with known AI instances and measuring registration accuracy and artifact completeness; a scoring sketch follows this list.
- Over time the framework might encourage auditors to treat continuous telemetry streams as primary evidence.
- Extending probe coverage to additional observability sources would increase the fraction of AI systems that can be governed.
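A hedged sketch of the seeding experiment from the second bullet above: plant known AI systems, run discovery, and score how many were registered. The function and identifiers are illustrative; the `sys-*` labels are hypothetical seeded systems, not names from the paper.

```python
def registration_scores(seeded_ids: set[str], registered_ids: set[str]) -> dict[str, float]:
    """Precision and recall of a discovery pass against seeded ground truth."""
    true_pos = len(seeded_ids & registered_ids)
    precision = true_pos / len(registered_ids) if registered_ids else 0.0
    recall = true_pos / len(seeded_ids) if seeded_ids else 0.0
    return {"precision": precision, "recall": recall}


# Example: 10 seeded systems, 8 discovered, plus one spurious registration.
seeded = {f"sys-{i}" for i in range(10)}
registered = {f"sys-{i}" for i in range(8)} | {"ghost-1"}
print(registration_scores(seeded, registered))  # precision ~0.889, recall 0.8
```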
Load-bearing premise
Ephemeral read-only probes on standard telemetry platforms can discover every undocumented AI system and generate sufficient compliance artifacts for the listed standards without source code access or exposure of payload data.
What would settle it
Introduce a functional AI system into a test environment that produces no telemetry in the monitored platforms and observe whether the framework registers it or produces incomplete compliance documentation.
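The settling experiment above can be phrased as a negative control against the discovery sketch given earlier (reusing its hypothetical `AIInventory` type): a functional system that emits no traces to the monitored platforms is, by construction, invisible to the probe.

```python
class SilentSource:
    """A monitored platform through which the test system emits no traces."""
    def fetch_trace_metadata(self):
        return []  # the system is running, but invisible to telemetry


inventory = AIInventory()  # hypothetical registry from the earlier sketch
found = inventory.discover([SilentSource()])
assert found == []  # nothing registered: compliance documentation stays incomplete
```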
Original abstract
The accelerating adoption of large language models, retrieval-augmented generation pipelines, and multi-agent AI workflows has created a structural governance crisis. Organizations cannot govern what they cannot see, and existing compliance methodologies built for deterministic web applications provide no mechanism for discovering or continuously validating AI systems that emerge across engineering teams without formal oversight. The result is a widening trust gap between what regulators demand as proof of AI governance maturity and what organizations can demonstrate. This paper proposes AI Trust OS, a governance architecture for continuous, autonomous AI observability and zero-trust compliance. AI Trust OS reconceptualizes compliance as an always-on, telemetry-driven operating layer in which AI systems are discovered through observability signals, control assertions are collected by automated probes, and trust artifacts are synthesized continuously. The framework rests on four principles: proactive discovery, telemetry evidence over manual attestation, continuous posture over point-in-time audit, and architecture-backed proof over policy-document trust. The framework operates through a zero-trust telemetry boundary in which ephemeral read-only probes validate structural metadata without ingressing source code or payload-level PII. An AI Observability Extractor Agent scans LangSmith and Datadog LLM telemetry, automatically registering undocumented AI systems and shifting governance from organizational self-report to empirical machine observation. Evaluated across ISO 42001, the EU AI Act, SOC 2, GDPR, and HIPAA, the paper argues that telemetry-first AI governance represents a categorical architectural shift in how enterprise trust is produced and demonstrated.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes AI Trust OS, a conceptual governance architecture for continuous, autonomous AI observability and zero-trust compliance in enterprise settings. It identifies a gap in existing compliance methods for emerging AI systems (LLMs, RAG pipelines, multi-agent workflows) and introduces four principles—proactive discovery, telemetry evidence over manual attestation, continuous posture over point-in-time audit, and architecture-backed proof over policy-document trust—implemented via ephemeral read-only probes on platforms such as LangSmith and Datadog. An AI Observability Extractor Agent is described as automatically registering undocumented AI systems and synthesizing compliance artifacts for ISO 42001, EU AI Act, SOC 2, GDPR, and HIPAA without source-code access or payload PII. The central claim is that this telemetry-first approach constitutes a categorical architectural shift from self-report to empirical machine observation.
Significance. If the completeness and reliability assumptions hold, the proposal could stimulate discussion on shifting AI governance from periodic audits to always-on observability layers, particularly for organizations already using LLM telemetry platforms. The explicit mapping to five regulatory frameworks and the zero-trust boundary design are constructive elements. However, as a purely conceptual proposal with no empirical evaluation, no comparison to existing tools (e.g., model registries, policy-as-code systems, or observability extensions), no implementation details, and no coverage or failure-mode analysis, the work's immediate significance is limited to framing a research direction rather than delivering a validated framework.
major comments (2)
- [Abstract] Abstract and framework description: The central claim that telemetry-first governance via the AI Observability Extractor Agent represents a 'categorical architectural shift' rests on the unargued assumption that scanning LangSmith/Datadog telemetry will discover every undocumented AI system, extract structural metadata sufficient for the five listed regulations, and synthesize compliance artifacts. No mechanism, coverage argument, or failure-mode analysis is provided for systems that emit no telemetry to these platforms, use custom runtimes, or are deliberately isolated. This completeness assumption is load-bearing for the shift from self-report to empirical observation.
- [Abstract] Abstract and evaluation section: The paper asserts that the framework has been 'evaluated across ISO 42001, the EU AI Act, SOC 2, GDPR, and HIPAA' yet supplies no mapping, checklist, or artifact examples showing how ephemeral read-only probes produce the required evidence. Without such specification, the regulatory coverage claim cannot be assessed and remains circular with the four principles.
minor comments (1)
- [Abstract] The abstract introduces the term 'AI Trust OS' and 'AI Observability Extractor Agent' without a dedicated nomenclature or acronym table, which would aid readability in a conceptual paper.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our conceptual framework for AI Trust OS. We address the major comments point by point below, acknowledging areas where the manuscript requires clarification and expansion.
Point-by-point responses
- Referee: [Abstract] Abstract and framework description: The central claim that telemetry-first governance via the AI Observability Extractor Agent represents a 'categorical architectural shift' rests on the unargued assumption that scanning LangSmith/Datadog telemetry will discover every undocumented AI system, extract structural metadata sufficient for the five listed regulations, and synthesize compliance artifacts. No mechanism, coverage argument, or failure-mode analysis is provided for systems that emit no telemetry to these platforms, use custom runtimes, or are deliberately isolated. This completeness assumption is load-bearing for the shift from self-report to empirical observation.
Authors: We agree that the completeness assumption requires explicit treatment and that the manuscript would be strengthened by a dedicated discussion of scope. The framework targets enterprise environments that deploy observability platforms such as LangSmith and Datadog for LLM workloads, which is a common pattern in production AI adoption. It does not claim to discover or govern AI systems that emit no telemetry or are intentionally isolated. The 'categorical architectural shift' is framed as a move from manual self-attestation to automated empirical observation where telemetry is available, rather than universal coverage. We will revise the abstract to qualify the claim and add a new section titled 'Scope, Assumptions, and Limitations' that provides a coverage argument for telemetry-emitting systems, describes the registration mechanism of the Extractor Agent, and analyzes failure modes for non-observable or custom-runtime systems. revision: yes
- Referee: [Abstract] Abstract and evaluation section: The paper asserts that the framework has been 'evaluated across ISO 42001, the EU AI Act, SOC 2, GDPR, and HIPAA' yet supplies no mapping, checklist, or artifact examples showing how ephemeral read-only probes produce the required evidence. Without such specification, the regulatory coverage claim cannot be assessed and remains circular with the four principles.
Authors: The evaluation in the current manuscript is conceptual, showing how the four principles and zero-trust telemetry boundary align with the governance requirements of each regulation. We concur that explicit mappings and illustrative artifacts are needed to allow independent assessment. We will expand the evaluation section with tables that map key requirements from each of the five frameworks to the structural metadata and compliance artifacts generated by the ephemeral read-only probes. We will also include example synthesized artifacts (e.g., high-level compliance summaries derived from telemetry without payload access) to demonstrate the process concretely. revision: yes
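One plausible shape for the mapping tables the authors promise, sketched as data. The clause labels are paraphrased stand-ins rather than official control identifiers, and the telemetry field names are hypothetical.

```python
# Clause labels are paraphrased stand-ins, not official control identifiers;
# the telemetry field names are hypothetical.
REQUIREMENT_MAP: dict[tuple[str, str], list[str]] = {
    ("ISO 42001", "AI system inventory"):     ["system_id", "model", "team"],
    ("EU AI Act", "technical documentation"): ["model", "endpoint", "trace_volume"],
    ("SOC 2", "change-management evidence"):  ["deploy_events", "config_hash"],
    ("GDPR", "records of processing"):        ["endpoint", "data_category_tags"],
    ("HIPAA", "access audit controls"):       ["caller_identity", "access_timestamps"],
}


def artifact_completeness(available_fields: set[str]) -> dict[tuple[str, str], bool]:
    """Which requirements can be evidenced from fields the probes actually saw."""
    return {req: set(fields) <= available_fields
            for req, fields in REQUIREMENT_MAP.items()}
```

A table of this shape would let readers check the regulatory coverage claim requirement by requirement, which is exactly what the referee asks for.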
Circularity Check
No significant circularity in derivation chain
Full rationale
The paper is a descriptive proposal of an architectural framework defined by four explicit principles (proactive discovery, telemetry evidence, continuous posture, architecture-backed proof) and an observability agent. No equations, fitted parameters, predictions, or self-citations appear in the provided text. The claim of a 'categorical architectural shift' is an assertion about the proposed system rather than a derivation that reduces by construction to its inputs. The argument is self-contained as a high-level governance model without load-bearing steps that equate outputs to definitions or prior fits.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: Telemetry signals from LangSmith, Datadog, and similar platforms contain enough structural metadata to discover undocumented AI systems and validate compliance without source code or payload PII.
- Domain assumption: Continuous collection of control assertions by automated probes can replace point-in-time manual attestation for the listed regulations.
invented entities (2)
- AI Trust OS (no independent evidence)
- AI Observability Extractor Agent (no independent evidence)
Forward citations
Cited by 1 Pith paper
- Decision Evidence Maturity Model for Agentic AI: A Property-Level Method Specification. DEMM defines four executable evidence-sufficiency categories plus a conflicting category for agentic AI decisions and rolls per-property verdicts into a five-level maturity rubric.