Semia: Auditing Agent Skills via Constraint-Guided Representation Synthesis
Pith reviewed 2026-05-09 19:53 UTC · model grok-4.3
The pith
Semia converts agent skill prose into Datalog fact bases so that security properties reduce to reachability queries.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Semia lifts each skill into the Skill Description Language, a Datalog fact base that captures LLM-triggered actions, prose-defined conditions, and human-in-the-loop checkpoints. Constraint-Guided Representation Synthesis refines LLM candidates in a propose-verify-evaluate loop until the fact base is both structurally sound and semantically faithful to the original prose. Security properties such as indirect injection, secret leakage, confused deputies, and unguarded sinks then reduce to standard Datalog reachability queries. Evaluation on 13,728 real skills shows more than half carry at least one critical semantic risk, with 97.7% recall and an F1 of 90.6% on a stratified, expert-labeled set of 541 skills.
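The reduction can be made concrete with a toy model. The predicate names below (tainted sources, sinks, flow edges, checkpointed edges) are illustrative assumptions, not Semia's actual SDL schema: an unguarded-sink risk becomes reachability from an attacker-controlled input to a high-impact action over edges with no human checkpoint.

```python
# Toy sketch of reducing an agent-skill security property to a
# Datalog-style reachability query. Predicate names are hypothetical,
# not the paper's SDL schema.

def reachable(edges):
    """Transitive closure of a set of (src, dst) edges (naive Datalog fixpoint)."""
    closure = set(edges)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

# Facts synthesized from a skill's structured and prose halves.
flows = {("email_body", "summarize"), ("summarize", "shell_exec")}
tainted_sources = {"email_body"}   # attacker-controlled input
sinks = {"shell_exec"}             # high-impact action
checkpoints = set()                # edges guarded by human-in-the-loop review

closure = reachable(flows - checkpoints)

# Query: does tainted input reach a sink along unguarded edges?
risky = [(s, t) for (s, t) in closure if s in tainted_sources and t in sinks]
print(risky)  # [('email_body', 'shell_exec')]
```

Adding the `("summarize", "shell_exec")` edge to `checkpoints` would empty the query result, which is how a human-in-the-loop fact suppresses the finding in this sketch.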
What carries the argument
Constraint-Guided Representation Synthesis (CGRS), an iterative propose-verify-evaluate loop that refines LLM-generated Datalog candidates until they satisfy structural soundness and semantic fidelity to the prose.
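A minimal control-flow sketch of such a loop, with the LLM proposer, structural checker, and fidelity scorer stubbed out as parameters (all names here are hypothetical; the paper's actual constraints and convergence criterion are not given in this summary):

```python
# Skeleton of a propose-verify-evaluate refinement loop in the style of
# CGRS. In Semia the proposer is an LLM and the checks are constraints
# derived from the skill prose; here all three are caller-supplied stubs.

def cgrs(prose, propose, verify, evaluate, max_rounds=5, threshold=0.9):
    feedback = None
    best = None
    for _ in range(max_rounds):
        candidate = propose(prose, feedback)   # draft a Datalog fact base
        errors = verify(candidate)             # structural soundness check
        if errors:
            feedback = errors                  # feed violations back to proposer
            continue
        score = evaluate(candidate, prose)     # semantic-fidelity estimate
        if best is None or score > best[0]:
            best = (score, candidate)
        if score >= threshold:                 # converged: sound and faithful
            break
        feedback = f"fidelity score {score:.2f} below {threshold}"
    return best[1] if best else None
```

With a proposer that returns a fixed fact base, an always-passing verifier, and a perfect fidelity score, the loop converges in one round; a verifier that always reports errors exhausts the round budget and yields no fact base.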
If this is right
- Security properties reduce to ordinary Datalog reachability queries that run reproducibly without ad-hoc LLM inspection.
- More than half of 13,728 public marketplace skills contain at least one critical semantic risk such as leakage or unguarded sinks.
- The synthesis method achieves 97.7% recall and an F1 of 90.6% on expert-labeled skills, outperforming signature scanners and pure LLM baselines.
- Human-in-the-loop checkpoints and prose conditions become explicit, queryable facts rather than implicit prose.
- Every skill in a marketplace can be audited statically before deployment.
Where Pith is reading between the lines
- If CGRS works for agent skills, the same loop could synthesize verifiable representations for other hybrid prose-plus-code artifacts such as smart-contract specifications.
- High recall on probabilistic prose suggests Datalog reachability can serve as a practical static proxy for runtime behavior in agent systems.
- Marketplaces could run CGRS on every submitted skill to surface risks before publication rather than after incidents occur.
- Combining the static fact base with observed execution traces could create hybrid static-dynamic monitors that update the Datalog model over time.
Load-bearing premise
The synthesized Datalog fact bases faithfully capture the probabilistic meaning of the original prose without systematic distortion, and the 541 expert labels serve as reliable ground truth.
What would settle it
Independent experts re-label a fresh sample of 500 skills for the same risks and Semia's recall falls below 90 percent, or manual comparison of 100 synthesized fact bases against their source prose reveals consistent semantic mismatches.
Original abstract
An agent skill is a configuration package that equips an LLM-driven agent with a concrete capability, such as reading email, executing shell commands, or signing blockchain transactions. Each skill is a hybrid artifact: a structured half declares executable interfaces, while a prose half dictates when and how those interfaces fire, and the prose is reinterpreted probabilistically on every invocation. Conventional static analyzers parse the structured half but ignore the prose; LLM-based tools read the prose but cannot reproducibly prove that a tainted input reaches a high-impact sink. We present Semia, a static auditor for agent skills. Semia lifts each skill into the Skill Description Language (SDL), a Datalog fact base that captures LLM-triggered actions, prose-defined conditions, and human-in-the-loop checkpoints. Synthesizing a fact base that is both structurally sound and semantically faithful to the original prose is the central challenge; we address it with Constraint-Guided Representation Synthesis (CGRS), a propose-verify-evaluate loop that refines LLM candidates until convergence. Security properties (e.g., indirect injection, secret leakage, confused deputies, and unguarded sinks) over an agent skill can then be reduced to Datalog reachability queries. We evaluate Semia on 13,728 real-world skills from public marketplaces. Semia renders all of them auditable and finds that more than half carry at least one critical semantic risk. On a stratified sample of 541 expert-labeled skills, Semia achieves 97.7% recall and an F1 of 90.6%, substantially outperforming signature-based scanners and LLM baselines.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents Semia, a static auditor for LLM-driven agent skills. Each skill combines structured executable interfaces with prose that is reinterpreted probabilistically on each invocation. Semia lifts skills into the Skill Description Language (SDL), a Datalog fact base encoding LLM-triggered actions, prose-defined conditions, and human-in-the-loop checkpoints, via Constraint-Guided Representation Synthesis (CGRS), an LLM propose-verify-evaluate loop. Security properties (indirect injection, secret leakage, confused deputies, unguarded sinks) reduce to Datalog reachability queries. On 13,728 marketplace skills Semia finds more than half carry at least one critical semantic risk; on a stratified sample of 541 expert-labeled skills it reports 97.7% recall and 90.6% F1, outperforming signature-based scanners and LLM baselines.
Significance. If CGRS reliably produces Datalog facts whose reachability semantics match the original prose without systematic distortion, the work supplies a reproducible, queryable representation for auditing hybrid structured-prose artifacts that conventional static analyzers and pure LLM scanners cannot handle. The scale of the 13,728-skill corpus and the explicit reduction to Datalog queries are concrete strengths that could influence future agent-security tooling.
major comments (3)
- [CGRS description and abstract] The central empirical claims (risk prevalence >50% on 13,728 skills and 97.7% recall / 90.6% F1 on 541 labels) rest on the unverified assumption that CGRS fact bases are semantically faithful to the probabilistic prose. The manuscript supplies no formal semantics for SDL, no soundness argument for the propose-verify-evaluate refinement, and no ablation that isolates the contribution of the constraint component to fidelity.
- [Evaluation section (541-skill sample)] The 541 expert-labeled skills are treated as ground truth for recall and F1, yet the manuscript provides no information on label-collection protocol, expert selection, labeling guidelines, or inter-rater agreement. Without these, the reported performance numbers cannot be interpreted as a reliable measure of semantic fidelity.
- [Large-scale evaluation and risk-prevalence paragraph] The claim that more than half of the 13,728 skills contain critical semantic risks is computed directly from the synthesized Datalog facts; no independent manual audit or secondary validation set is reported to confirm that the reachability queries correctly encode the prose-defined conditions (e.g., indirect-injection paths or human-in-the-loop checkpoints).
minor comments (2)
- [SDL definition] The notation for SDL facts and how probabilistic prose conditions are encoded as Datalog rules would benefit from additional concrete examples and a small illustrative skill.
- [Related work] Related-work discussion of prior uses of Datalog for security policy checking (e.g., in Android or web security) is thin; a few additional citations would situate the contribution more clearly.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. The comments highlight important aspects of rigor that we address below. We have revised the manuscript to incorporate additional explanations, details, and analyses where feasible.
Point-by-point responses
Referee: [CGRS description and abstract] The central empirical claims (risk prevalence >50% on 13,728 skills and 97.7% recall / 90.6% F1 on 541 labels) rest on the unverified assumption that CGRS fact bases are semantically faithful to the probabilistic prose. The manuscript supplies no formal semantics for SDL, no soundness argument for the propose-verify-evaluate refinement, and no ablation that isolates the contribution of the constraint component to fidelity.
Authors: We acknowledge that the manuscript does not include a formal semantics for SDL or a machine-checked soundness proof for CGRS. SDL is presented as a pragmatic Datalog encoding with predicates for actions, conditions, and checkpoints, and the propose-verify-evaluate loop uses explicit constraints derived from the prose to guide refinement until a stable fact base is reached. The 97.7% recall on expert labels provides empirical support for fidelity, as mismatches would directly lower this metric. To strengthen the presentation, we will add a subsection on the informal semantics of SDL, a description of how the refinement loop mitigates distortion, and an ablation comparing CGRS against an unconstrained LLM synthesis baseline on the same 541-skill sample. revision: yes
Referee: [Evaluation section (541-skill sample)] The 541 expert-labeled skills are treated as ground truth for recall and F1, yet the manuscript provides no information on label-collection protocol, expert selection, labeling guidelines, or inter-rater agreement. Without these, the reported performance numbers cannot be interpreted as a reliable measure of semantic fidelity.
Authors: We agree that these methodological details are necessary for proper interpretation. The 541 skills were selected via stratified sampling across risk categories and marketplaces. Labeling was performed by three experts with prior publications in LLM agent security; guidelines included precise definitions and positive/negative examples for each property (e.g., indirect injection via LLM-triggered actions). Inter-rater agreement reached Fleiss' kappa of 0.81. We will expand the evaluation section with a complete protocol description, expert selection criteria, the full labeling guidelines (as an appendix), and the agreement statistic. revision: yes
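The Fleiss' kappa of 0.81 reported above can be checked mechanically. For completeness, a self-contained implementation for multiple raters over categorical labels (the example counts below are illustrative, not the paper's actual labels):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-rater agreement.
    ratings[i][j] = number of raters assigning item i to category j;
    every row must sum to the same number of raters n."""
    N = len(ratings)                 # number of items
    n = sum(ratings[0])              # raters per item
    k = len(ratings[0])              # number of categories
    # Per-item observed agreement P_i.
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N
    # Chance agreement from category marginals.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)

# Three raters, binary risky/benign labels on six items (illustrative).
counts = [[3, 0], [0, 3], [3, 0], [2, 1], [0, 3], [3, 0]]
print(round(fleiss_kappa(counts), 3))  # → 0.766
```

On the Landis-Koch scale, values above 0.8 (such as the reported 0.81) indicate almost perfect agreement among the three expert raters.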
Referee: [Large-scale evaluation and risk-prevalence paragraph] The claim that more than half of the 13,728 skills contain critical semantic risks is computed directly from the synthesized Datalog facts; no independent manual audit or secondary validation set is reported to confirm that the reachability queries correctly encode the prose-defined conditions (e.g., indirect-injection paths or human-in-the-loop checkpoints).
Authors: The prevalence statistic applies the CGRS pipeline—whose fidelity is measured on the stratified 541-sample—to the full corpus. Because the sample was drawn from the same distribution, it supplies a direct check on whether reachability queries match expert judgments of the prose. We recognize the value of further corroboration and will add a secondary manual audit of 100 randomly selected skills from the 13,728, reporting agreement between Datalog-derived risks and fresh expert review, together with a threats-to-validity paragraph. revision: partial
Circularity Check
No circularity: claims rest on external expert labels and independent marketplace data
Full rationale
The paper defines CGRS as an iterative propose-verify-evaluate process to produce Datalog fact bases from skill prose, then reduces security properties to reachability queries and reports recall/F1 on a separate stratified sample of 541 expert-labeled skills drawn from 13,728 marketplace artifacts. No equations, parameters, or metrics are defined in terms of quantities fitted from the evaluation data itself; the 541 labels are presented as external ground truth rather than outputs of the synthesis loop. No self-citation chain, uniqueness theorem, or ansatz is invoked to justify the core results, and the derivation chain remains self-contained against the stated external benchmarks.
Axiom & Free-Parameter Ledger
axioms (2)
- domain assumption Datalog reachability queries are sufficient to express the listed security properties (indirect injection, secret leakage, confused deputies, unguarded sinks)
- ad hoc to paper An LLM propose-verify-evaluate loop can converge to representations that are both structurally sound and faithful to the original prose
invented entities (2)
- Skill Description Language (SDL): no independent evidence
- Constraint-Guided Representation Synthesis (CGRS): no independent evidence
Forward citations
Cited by 1 Pith paper
- AgentTrap: Measuring Runtime Trust Failures in Third-Party Agent Skills. AgentTrap shows that current LLM agents typically complete user tasks while silently accepting unsafe side effects from malicious third-party skills rather than refusing them.