Sketch-based Access Control: A Multimodal Interface for Translating User Preferences into Intent-Aligned Policies
Pith reviewed 2026-05-12 03:46 UTC · model grok-4.3
The pith
A sketch-and-AI system lets users turn rough access preferences into complete, validated policies by iteratively specifying, analyzing, and testing them.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The SBAC system and its Specify-Analyze-Test workflow enabled participants to progressively refine initially underspecified preferences into more complete and precise policies by surfacing unanticipated gaps, resolving ambiguities through dialogue, and validating policy behavior through concrete scenarios.
What carries the argument
The three-stage human-AI workflow (Specify, Analyze, Test) that maintains and interprets an evolving access-control specification expressed through sketches and natural-language dialogue.
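The loop can be pictured as a small state machine over an evolving specification. The sketch below is purely illustrative — the `Policy` and `Specification` classes and the gap check are invented stand-ins (a policy here is a subject–action–resource–context tuple, matching the structure the paper's prompts describe; the real system delegates analysis to an MLLM).

```python
# Hypothetical sketch of the Specify-Analyze-Test loop; not the authors' API.
from dataclasses import dataclass, field

@dataclass
class Policy:
    subject: str
    action: str
    resource: str
    context: str = ""  # e.g. "weekdays 9-5"

@dataclass
class Specification:
    policies: list = field(default_factory=list)
    insights: list = field(default_factory=list)  # gaps/ambiguities surfaced by analysis

    def specify(self, policy: Policy) -> None:
        """Specify: add or refine a policy from a sketch or dialogue turn."""
        self.policies.append(policy)

    def analyze(self) -> None:
        """Analyze: one simple gap check -- subject/resource pairs with no rule."""
        subjects = {p.subject for p in self.policies}
        resources = {p.resource for p in self.policies}
        for s in subjects:
            covered = {p.resource for p in self.policies if p.subject == s}
            for r in resources - covered:
                self.insights.append(f"gap: no rule for {s} on {r}")

    def test(self, subject: str, action: str, resource: str) -> bool:
        """Test: evaluate a concrete scenario; deny by default."""
        return any(p.subject == subject and p.action == action
                   and p.resource == resource for p in self.policies)

spec = Specification()
spec.specify(Policy("Alice", "view", "Camera"))
spec.specify(Policy("Bob", "control", "Thermostat"))
spec.analyze()
print(spec.insights)                         # two gaps surfaced
print(spec.test("Alice", "view", "Camera"))  # True
print(spec.test("Bob", "view", "Camera"))    # False
```

The deny-by-default `test` step is what lets a concrete scenario expose an unstated rule: a denial the user did not expect becomes the next thing to specify.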
If this is right
- Users can discover policy gaps they did not anticipate when they began sketching.
- Dialogue with the system can resolve ambiguities that remain after an initial sketch.
- Concrete scenario testing allows users to check whether the generated policy behaves as intended.
- The resulting policies are both more complete and more precise than the starting preferences.
Where Pith is reading between the lines
- The same sketch-plus-dialogue pattern could be applied to other domains that require users to state conditional rules, such as privacy settings or smart-home automation.
- If the AI interpretation step is reliable, organizations could reduce the number of access-control misconfigurations that arise from users' difficulty in articulating precise conditions.
- A next step would be to measure whether policies created this way produce fewer security incidents when deployed in real systems.
Load-bearing premise
The multimodal models will correctly interpret changing sketches and conversation turns without systematically misunderstanding or inventing user preferences.
What would settle it
A controlled test in which users repeatedly draw the same access rule: the claim fails if the system produces policies that contradict the users' stated intent in more than a small fraction of cases.
Original abstract
Developing simple and expressive access controls -- interfaces to specify policies that define who should have access to resources and under what circumstances -- is a longstanding challenge in usable security. We present Sketch-based Access Control (SBAC), a sketch-based, AI-assisted access control authoring system that combines the expressive power of sketching with the interpretive capabilities of multimodal large language models (MLLMs) to support the interpretation and validation of policy specifications as they are iteratively refined. Through a formative study with 14 participants, we identified three design requirements and developed a human-AI collaborative workflow composed of three stages -- Specify, Analyze, and Test -- enabled by the system's ability to maintain and interpret evolving access control specifications. In a user evaluation with 14 participants grounded in their real-world access control scenarios, we found the system and the workflow helped participants progressively refine initially underspecified preferences into more complete and precise policies -- surfacing gaps they had not anticipated, resolving ambiguities through dialogue, and validating policy behavior through concrete scenarios.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Sketch-based Access Control (SBAC), a multimodal system that integrates user sketching with MLLM interpretation to support iterative policy authoring via a Specify-Analyze-Test workflow. A formative study (n=14) derives three design requirements; a subsequent evaluation (n=14) using participants' real-world scenarios reports that the system helps refine underspecified preferences into more complete policies by surfacing gaps, resolving ambiguities through dialogue, and validating behaviors via concrete scenarios.
Significance. If the central claim holds, the work advances usable security and human-AI collaboration by demonstrating how sketching plus MLLM dialogue can bridge the gap between vague user intent and precise access-control policies. The grounding in participants' actual scenarios and the emphasis on progressive refinement provide ecological validity and practical insight for AI-assisted authoring tools. The approach is novel in combining expressive sketching with iterative validation, though its impact would be strengthened by more rigorous verification of alignment.
major comments (2)
- [User Evaluation] User Evaluation section: the claim that the workflow produces intent-aligned policies rests on qualitative participant feedback from n=14 without quantitative metrics (e.g., policy correctness rates, LLM interpretation accuracy, or inter-rater agreement on final policies). This is load-bearing because subjective validation can miss subtle logical deviations in access-control rules, leaving the weakest assumption (reliable MLLM translation without systematic hallucination) untested.
- [Specify-Analyze-Test Workflow] The description of the Specify-Analyze-Test workflow does not include logged error analysis or independent checks that separate genuine user underspecification from MLLM misinterpretation of sketches or dialogue; without this, progressive refinement cannot be confidently attributed to the system rather than participant self-correction.
minor comments (1)
- [Abstract] Abstract: states positive qualitative outcomes but omits any mention of the specific analysis methods or limitations of the small-scale studies, which would better contextualize the results for readers.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed review. The comments identify important opportunities to strengthen the empirical grounding of our claims. We address each major comment below and commit to major revisions that incorporate additional quantitative analysis and error logging.
Point-by-point responses
Referee: [User Evaluation] User Evaluation section: the claim that the workflow produces intent-aligned policies rests on qualitative participant feedback from n=14 without quantitative metrics (e.g., policy correctness rates, LLM interpretation accuracy, or inter-rater agreement on final policies). This is load-bearing because subjective validation can miss subtle logical deviations in access-control rules, leaving the weakest assumption (reliable MLLM translation without systematic hallucination) untested.
Authors: We agree that the absence of quantitative metrics leaves the alignment claims more vulnerable to the critique that subjective feedback may overlook logical inconsistencies. The evaluation was intentionally qualitative to prioritize ecological validity with participants' own scenarios, but this does not substitute for objective verification. In the revised manuscript we will add expert-rated policy correctness scores (two independent raters scoring final policies against participant intent), report inter-rater agreement (Cohen's kappa), and include a post-hoc analysis of MLLM interpretation accuracy derived from the session logs. We will also expand the limitations section to discuss the risk of undetected hallucinations. revision: yes
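The agreement statistic the authors commit to is straightforward to compute. A minimal two-rater Cohen's kappa over binary correctness labels — the ratings below are invented for illustration, not data from the study:

```python
# Two-rater Cohen's kappa for policy-correctness labels (illustrative data).
def cohens_kappa(r1, r2):
    assert len(r1) == len(r2) and r1
    n = len(r1)
    labels = set(r1) | set(r2)
    # Observed agreement: fraction of items where the raters match.
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    # Expected agreement under independent rater marginals.
    p_exp = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = ["correct", "correct", "incorrect", "correct", "incorrect", "correct"]
rater2 = ["correct", "incorrect", "incorrect", "correct", "incorrect", "correct"]
print(round(cohens_kappa(rater1, rater2), 3))  # → 0.667
```

Kappa corrects raw agreement (5/6 here) for the agreement two independent raters would reach by chance given their label frequencies, which matters when most policies are rated correct.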
Referee: [Specify-Analyze-Test Workflow] The description of the Specify-Analyze-Test workflow does not include logged error analysis or independent checks that separate genuine user underspecification from MLLM misinterpretation of sketches or dialogue; without this, progressive refinement cannot be confidently attributed to the system rather than participant self-correction.
Authors: We concur that the current workflow description attributes refinements to the system without disambiguating sources of change. The manuscript reports observed participant behavior but does not present a systematic error log breakdown. In the revision we will add a dedicated subsection that categorizes logged events into (a) user-initiated clarifications of underspecification and (b) MLLM misinterpretations of sketches or dialogue, with counts and examples. We will also include independent verification: a second researcher will review a random sample of 20% of sessions to confirm the categorization, with agreement statistics reported. revision: yes
Circularity Check
No circularity: empirical claims rest on independent user studies
Full rationale
The paper describes an HCI system (SBAC) and a Specify-Analyze-Test workflow, then reports outcomes from two separate user studies (formative n=14 and evaluation n=14) using participants' real-world scenarios. Central claims concern observed progressive refinement, gap-surfacing, and ambiguity resolution; these are presented as empirical findings from participant feedback and behavior, not as derivations, predictions, or fitted quantities that reduce to the inputs by construction. No equations, parameters, uniqueness theorems, or self-citations are invoked as load-bearing steps in any derivation chain. The evaluation design is independent of the system implementation in the required sense: results are not forced by re-labeling study inputs as outputs. This is the normal non-circular outcome for an empirical usability paper.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
[1] Ruba Abu-Salma, Kat Krol, Simon Parkin, Victoria Koh, Kevin Kwan, Jazib Mahboob, Zahra Traboulsi, and M Angela Sasse. 2017. The security blanket of the chat world: An analytic evaluation and a user study of Telegram. Internet Society.
[2] Christine Alvarado and Randall Davis. 2007. SketchREAD: a multi-domain sketch recognition engine. In ACM SIGGRAPH 2007 courses. 34–es.
[3] Lujo Bauer, Lorrie Faith Cranor, Robert W Reeder, Michael K Reiter, and Kami Vaniea. 2009. Real life challenges in access-control management. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 899–908.
[4] Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3, 2 (2006), 77–101.
[5] Virginia Braun and Victoria Clarke. 2019. Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health 11, 4 (2019), 589–597.
[6] Marion Buchenau and Jane Fulton Suri. 2000. Experience prototyping. In Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques. 424–433.
[7] Bill Buxton. 2010. Sketching User Experiences: Getting the Design Right and the Right Design. Morgan Kaufmann.
[8] Giovanni Campagna, Silei Xu, Rakesh Ramesh, Michael Fischer, and Monica S Lam. 2018. Controlling fine-grain sharing in natural language with a virtual assistant. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 3 (2018), 1–28.
[9] Yi Chen, Mingming Zha, Nan Zhang, Dandan Xu, Qianqian Zhao, Xuan Feng, Kan Yuan, Fnu Suya, Yuan Tian, Kai Chen, et al. 2019. Demystifying hidden privacy settings in mobile apps. In 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 570–586.
[10]
[11] John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. TaleBrush: Sketching stories with generative pretrained language models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 1–19.
[12] Steven Davy, Brendan Jennings, and John Strassner. 2008. The policy continuum – Policy authoring and conflict analysis. Computer Communications 31, 13 (2008), 2981–2995.
[13] Mathias Eitz, James Hays, and Marc Alexa. 2012. How do humans sketch objects? ACM Transactions on Graphics (TOG) 31, 4 (2012), 1–10.
[14] Kathi Fisler, Shriram Krishnamurthi, Leo A Meyerovich, and Michael Carl Tschantz. 2005. Verification and change-impact analysis of access-control policies. In Proceedings of the 27th International Conference on Software Engineering. 196–205.
[15] William W Gaver, Jacob Beaver, and Steve Benford. 2003. Ambiguity as a resource for design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 233–240.
[16] Frederic Gmeiner, Nicolai Marquardt, Michael Bentley, Hugo Romat, Michel Pahud, David Brown, Asta Roseway, Nikolas Martelaro, Kenneth Holstein, Ken Hinckley, et al. 2025. Intent tagging: Exploring micro-prompting interactions for supporting granular human-GenAI co-creation workflows. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems.
[17] Weijia He, Maximilian Golla, Roshni Padhi, Jordan Ofek, Markus Dürmuth, Earlence Fernandes, and Blase Ur. 2018. Rethinking Access Control and Authentication for the Home Internet of Things (IoT). In 27th USENIX Security Symposium (USENIX Security 18). 255–272.
[18] Vincent C Hu, David Ferraiolo, Rick Kuhn, Arthur R Friedman, Alan J Lang, Margaret M Cogdell, Adam Schnitzer, Kenneth Sandlin, Robert Miller, Karen Scarfone, et al. 2013. Guide to attribute based access control (ABAC) definition and considerations (draft). NIST Special Publication 800, 162 (2013), 1–54.
[19] Vincent C Hu, D Richard Kuhn, David F Ferraiolo, and Jeffrey Voas. 2015. Attribute-based access control. Computer 48, 2 (2015), 85–88.
[20] Vincent C Hu, Rick Kuhn, Dylan Yaga, et al. 2017. Verification and test methods for access control policies/models. NIST Special Publication 800, 192 (2017), 800–192.
[21] Graham Hughes and Tevfik Bultan. 2008. Automated verification of access control policies using a SAT solver. International Journal on Software Tools for Technology Transfer 10, 6 (2008), 503–520.
[22] Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276 (2024).
[23] Hilary Hutchinson, Wendy Mackay, Bo Westerlund, Benjamin B Bederson, Allison Druin, Catherine Plaisant, Michel Beaudouin-Lafon, Stéphane Conversy, Helen Evans, Heiko Hansen, et al. 2003. Technology probes: inspiring design for and with families. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 17–24.
[24] Jane Im, Ruiyi Wang, Weikun Lyu, Nick Cook, Hana Habib, Lorrie Faith Cranor, Nikola Banovic, and Florian Schaub. 2023. Less is not more: Improving findability and actionability of privacy controls for online behavioral advertising. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–33.
[25] Maritza Johnson, John Karat, Clare-Marie Karat, and Keith Grueneberg. 2010. Usable policy template authoring for iterative policy refinement. In 2010 IEEE International Symposium on Policies for Distributed Systems and Networks. IEEE, 18–21.
[26] Ruogu Kang, Laura Dabbish, Nathaniel Fruchter, and Sara Kiesler. 2015. "My data just goes everywhere:" user mental models of the internet and implications for privacy and security. In Eleventh Symposium on Usable Privacy and Security (SOUPS 2015). 39–52.
[27] Clare-Marie Karat, John Karat, Carolyn Brodie, and Jinjuan Feng. 2006. Evaluating interfaces for privacy policy rule authoring. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 83–92.
[28]
[29]
[30] Karen Li, Kopo M Ramokapane, and Awais Rashid. 2022. "Yeah, it does have a... Windows98 Vibe": Usability Study of Security Features in Programmable Logic Controllers. arXiv preprint arXiv:2208.02500 (2022).
[31] Michelle Madejski, Maritza Johnson, and Steven M Bellovin. 2012. A study of privacy settings errors in an online social network. In 2012 IEEE International Conference on Pervasive Computing and Communications Workshops. IEEE, 340–345.
[32] Nicolai Marquardt, Asta Roseway, Hugo Romat, Payod Panda, Michel Pahud, Gonzalo Ramos, Steven M Drucker, Andrew D Wilson, Ken Hinckley, and Nathalie Riche. 2025. ImaginationVellum: Generative-AI Ideation Canvas with Spatial Prompts, Generative Strokes, and Ideation History. In Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology.
[33] Kirsten Martin and Helen Nissenbaum. 2016. Measuring privacy: An empirical test using context to expose confounding variables. Colum. Sci. & Tech. L. Rev. 18 (2016), 176.
[34] Michelle L Mazurek, JP Arsenault, Joanna Bresee, Nitin Gupta, Iulia Ion, Christina Johns, Daniel Lee, Yuan Liang, Jenny Olsen, Brandon Salmon, et al. 2010. Access control for home data sharing: Attitudes, needs and practices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 645–654.
[35] Luis Rene Montana Gonzalez. 2021. Sketching for real-time control of crowd simulations. Ph.D. Dissertation. University of Sheffield.
[36] Charles Morisset and David Sanchez. 2018. On building a visualisation tool for access control policies. In International Conference on Information Systems Security and Privacy. Springer, 215–239.
[37] Pardis Emami Naeini, Sruti Bhagavatula, Hana Habib, Martin Degeling, Lujo Bauer, Lorrie Faith Cranor, and Norman Sadeh. 2017. Privacy expectations and preferences in an IoT world. In Thirteenth Symposium on Usable Privacy and Security (SOUPS 2017). 399–412.
[38]
[39] Helen Nissenbaum. 2004. Privacy as contextual integrity. Wash. L. Rev. 79 (2004), 119.
[40] Maggie Oates, Yama Ahmadullah, Abigail Marsh, Chelse Swoopes, Shikun Zhang, Rebecca Balebako, and Lorrie Faith Cranor. 2018. Turtles, locks, and bathrooms: Understanding mental models of privacy through illustration. Proceedings on Privacy Enhancing Technologies (2018).
[41] OWASP Foundation. 2021. A01:2021 – Broken Access Control. https://owasp.org/Top10/2021/A01_2021-Broken_Access_Control/. Accessed: 2026-03-26.
[42] Robert W Reeder, Lujo Bauer, Lorrie Faith Cranor, Michael K Reiter, Kelli Bacon, Keisha How, and Heather Strong. 2008. Expandable grids for visualizing and authoring computer security policies. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 1473–1482.
[43] Robert W Reeder, Lujo Bauer, Lorrie Faith Cranor, Michael K Reiter, and Kami Vaniea. 2009. Effects of access-control policy conflict-resolution methods on policy-authoring usability. (2009).
[44] Robert W Reeder, Clare-Marie Karat, John Karat, and Carolyn Brodie. 2007. Usability challenges in security and privacy policy-authoring interfaces. In IFIP Conference on Human-Computer Interaction. Springer, 141–155.
[45] Franziska Roesner, David Molnar, Alexander Moshchuk, Tadayoshi Kohno, and Helen J Wang. 2014. World-driven access control for continuous sensing. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. 1169–1181.
[46] Ravi S Sandhu. 1998. Role-based access control. In Advances in Computers. Vol. 46. Elsevier, 237–286.
[47] Danelle Shah, Joseph Schneider, and Mark Campbell. 2010. A robust sketch interface for natural robot control. In 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 4458–4463.
[48] Marjorie Skubic, Derek Anderson, Samuel Blisard, Dennis Perzanowski, and Alan Schultz. 2007. Using a hand-drawn sketch to control a team of robots. Autonomous Robots 22 (2007), 399–410.
[49] John Slankas and Laurie Williams. 2013. Access control policy extraction from unconstrained natural language text. In 2013 International Conference on Social Computing. IEEE, 435–440.
[50] Hari Subramonyam, Roy Pea, Christopher Pondoc, Maneesh Agrawala, and Colleen Seifert. 2024. Bridging the gulf of envisioning: Cognitive challenges in prompt based interactions with LLMs. In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems. 1–19.
[51] Kentaro Taninaka, Rahul Jain, Jingyu Shi, Kazunori Takashio, and Karthik Ramani. Transparent barriers: natural language access control policies for XR-enhanced everyday objects. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–20.
[53] tldraw. 2024. tldraw: A tiny little drawing app. https://github.com/tldraw/tldraw. Accessed: 2026-03-25.
[54] Jinhe Wen, Yingxi Zhao, Wenqian Xu, Yaxing Yao, and Haojian Jin. 2025. Teaching Data Science Students to Sketch Privacy Designs Through Heuristics. In 2025 IEEE Symposium on Security and Privacy (SP). IEEE, 1251–1269.
[55] Yuxi Wu, Jacob Logas, Devansh Ponda, Julia Haines, Jiaming Li, Jeffrey Nichols, W Keith Edwards, and Sauvik Das. [n. d.]. Modeling End-User Affective Discomfort With Mobile App Permissions Across Physical Contexts.
[56] Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. 2023. Set-of-mark prompting unleashes extraordinary visual grounding in GPT-4V. arXiv preprint arXiv:2310.11441 (2023).
[58] Ryan Yen, Jian Zhao, and Daniel Vogel. 2025. Code shaping: Iterative code editing with free-form AI-interpreted sketching. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–17.
A Appendix
A.1 Formative Study Details
We conducted the formative study in a simulated smart office environment populated with devices such as cam...
System prompt excerpts (from the paper's appendix)
[59] An UNANNOTATED image showing the canvas as the user sees it
[60] An ANNOTATED image with numbered marks [N] overlaid on each element
[61] A JSON mapping of mark numbers to shape metadata. Your task is to classify each marked element and identify relationships. IMPORTANT -- Reading text from the sketch: - Carefully read ALL text visible in the images, including handwritten, hand-drawn, typed, and label text. - Users often hand-draw names, roles, and labels. Look closely at the unannotated ima...
[62] Policy Extraction & Updates: - Identify access control policies from conversation and sketches. A policy is made of a subject, resource, action and context. - Each distinct Subject-Action-Resource combination is a separate policy, even if they share parameters. "Alice can view Camera" and "Alice can control Thermostat" are two policies, not one. - Keep po...
[63] Insight Generation & Tracking: - Detect risks, ambiguities, and conflicts in policies and sketches - Maintain an ongoing insights list - Address insights appropriately when users respond - Track which elements (shapes) are related to each insight
[64] On Completion: - When sufficient analysis has been done and insights addressed: * Ask if they would like to continue analyzing their access control policies? * If yes, ask them to continue looking at all the insights and policies. * If no or if all insights seem to be accepted, ask "Would you like to move on to testing your access control policies?" - Onl...
[65] Policy Format: IMPORTANT: In all text fields (description, explanation, subject, resource, action, context), use the ACTUAL names of elements from the Canvas Element Map -- NOT mark numbers like [1] or [3]. Mark numbers [N] must ONLY appear in the elements array. { policyNumber: "policy#", description: "A plain one-line access control policy statement con...
[66] STRUCTURED REASONING (CRITICAL -- complete BEFORE generating insights): Before generating insights, enumerate in your internal reasoning: a. ALL SUBJECTS: Every subject across all policies b. ALL RESOURCES: Every resource across all policies c. ALL PERMISSIONS: Each as Subject -> Action -> Resource [+ Context] d. GAPS: Which subjects have no access define...
[67] "Does 'control thermostat' include changing the schedule vs. just adjusting the temperature?" INSIGHT IDENTIFICATION: GOAL: Identify insights that are substantively grounded in the policies and sketch. Every insight must be traceable to actual policies or sketch elements. Omit any category that lacks sufficient evidence -- quality and traceability take precedence over coverage. Before generating insights, assess the available information: - What p...
[68] INSIGHT RESPONSES: IMPORTANT: Always maintain all previously generated insights in your responses unless explicitly instructed by the user to remove them. Keep track of insight states through user interactions: a. ACCEPTING INSIGHTS: When a user accepts an insight: - Mark the insight as accepted but keep it in the insights array - Acknowledge their accept...
[69] PROACTIVE SKETCH-POLICY ALIGNMENT (CRITICAL): - When the user addresses an insight, edits a policy, or provides new information that changes the access control model, CHECK THE CURRENT SKETCH FIRST to see if the change is already reflected visually. - ONLY populate the "generate" field if the sketch does NOT already reflect the change. If the user's sketc...
[70] IMPORTANT: You must respond with a JSON object in the following format: { "chat": "WHEN generate is populated, you MUST start with what you are updating on the sketch (e.g. 'I\'ve updated the sketch to reflect the new condition.') THEN continue your response. Plain text, no markdown.", ...
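The "Policy Format" rule above — plain element names in text fields, mark numbers only in the `elements` array — is easy to check mechanically. A hypothetical sketch (the field values and the `text_fields_clean` helper are invented for illustration, not the authors' code):

```python
import re

# Illustrative policy object following the format sketched in excerpt [65].
policy = {
    "policyNumber": "policy#1",
    "description": "Alice can view the Camera on weekdays",
    "subject": "Alice",
    "action": "view",
    "resource": "Camera",
    "context": "weekdays",
    "elements": ["[1]", "[3]"],  # sketch marks this policy is grounded in
}

def text_fields_clean(p):
    """Enforce the excerpt's rule: no mark numbers like [3] in text fields."""
    fields = ("description", "explanation", "subject", "resource", "action", "context")
    return not any(re.search(r"\[\d+\]", p[f]) for f in fields if f in p)

print(text_fields_clean(policy))  # True: marks appear only in "elements"
```

A validator like this would catch the failure mode where the model leaks internal mark identifiers into user-facing policy text.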
[71] Classify the user's intent into exactly ONE category based on what the user wants to happen next
[72]
[73] For "fix" or "explore" intents, provide ONLY a brief acknowledgment -- a deeper analysis with policy updates will follow automatically. ## Intent Categories Classify intent by asking: what does the user want to HAPPEN as a result of their message? - **"understand"**: The user is purely asking a question. They want information back and provide NO new infor...
[74] **Always return a valid JSON object conforming to the schema.**
[75] **Do not generate extra fields or omit required fields.**
[76] **Provide clear and logical reasoning in `long_description_of_strategy`.**
[77] **Ensure each `shapeId` is unique and consistent across related events.**
[78] **Use meaningful `intent` descriptions for all actions.** ## Useful notes - Always begin with a clear strategy in `long_description_of_strategy`. - Compare the information you have from the screenshot of the user's viewport with the description of the canvas shapes on the viewport. - If you're not certain about what to do next, use a `think` event to work thro...
[79] **Fixed factors**: Aspects of the policy that remain constant across all test cases (e.g., the system being tested, the general domain)
[80] **Variable factors**: Dimensions that can be varied to test policy boundaries. For each variable factor, provide: - The policy's own value (baseline -- boundaryType: "baseline", isBaseline: true) - 2-4 alternative values, each with an explicit boundaryType: * "just_inside" -- a value that barely stays within the policy boundary (should still be Allowed) *...
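The baseline / just_inside / just_outside scheme in excerpts [79]–[80] amounts to probing each variable factor at its boundary. A minimal sketch for a time-conditioned policy — the policy, the 09:00–17:00 window, and the toy evaluator are all invented for illustration:

```python
# Boundary test cases for a hypothetical "Alice can view Camera, 09:00-17:00" rule.
from datetime import time

def allows(t: time) -> bool:
    """Toy evaluator: access allowed between 09:00 and 17:00 inclusive."""
    return time(9, 0) <= t <= time(17, 0)

cases = [
    {"boundaryType": "baseline",     "value": time(10, 0),  "expect": True},
    {"boundaryType": "just_inside",  "value": time(16, 59), "expect": True},
    {"boundaryType": "just_outside", "value": time(17, 1),  "expect": False},
]

for c in cases:
    got = allows(c["value"])
    verdict = "Allowed" if got else "Denied"
    status = "PASS" if got == c["expect"] else "FAIL"
    print(f'{c["boundaryType"]:12s} {c["value"]} -> {verdict} [{status}]')
```

Pairing each just_inside value with a just_outside counterpart is what makes a scenario test diagnostic: a policy that answers both correctly has its boundary where the user intended.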