pith. machine review for the scientific record.

arxiv: 2604.05571 · v1 · submitted 2026-04-07 · 💻 cs.CR · cs.HC

Recognition: no theorem link

Understanding User Privacy Perceptions of GenAI Smartphones


Pith reviewed 2026-05-10 19:13 UTC · model grok-4.3

classification 💻 cs.CR cs.HC
keywords GenAI smartphones · privacy perceptions · user interviews · data lifecycle · privacy design · generative AI · mobile privacy · user expectations

The pith

Users engage with GenAI smartphones without understanding their data operations but develop strong privacy concerns once technical details are explained.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper explores user perceptions of privacy in GenAI smartphones, which embed generative AI at the system level and require ongoing access to sensitive data. Interviews with 22 everyday mobile users reveal limited initial grasp of how these devices function, followed by heightened concerns when participants learn about data flows. These concerns extend across the full data lifecycle, covering nontransparent collection, insecure storage, and weak user control. A follow-up focus group surfaces concrete suggestions for system-level controls, better data practices, and clearer transparency features that could guide balanced design.

Core claim

Users of GenAI smartphones operate with limited understanding of how the systems deliver functions through continuous sensitive data access, but their privacy concerns increase markedly once they are shown the technical details. These concerns span the entire data lifecycle, from nontransparent collection to insecure storage and insufficient control, and participants propose coordinated changes in system controls, data management, and user-facing transparency to address them.

What carries the argument

Semi-structured interviews with 22 participants that first elicit general usage views and then expose users to technical explanations of GenAI smartphone data operations, followed by a focus group to collect privacy-enhancing suggestions.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Transparency features may need careful calibration so that informing users does not simply raise demands without corresponding control options.
  • The pattern of concerns across the data lifecycle could inform privacy requirements for other AI-embedded consumer devices beyond smartphones.
  • Testing specific prototypes that implement the suggested system-level controls and transparency elements would show whether they reduce reported concerns in practice.

Load-bearing premise

That the perceptions reported by this sample of 22 users accurately reflect wider attitudes, and that exposure to technical details does not artificially inflate the concerns participants express.

What would settle it

A larger-scale survey or controlled experiment measuring privacy concern levels before versus after exposure to the same GenAI smartphone technical details across a demographically broader group of mobile users.
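A paired pre/post design of this kind could be analyzed with a simple exact sign test on per-participant concern ratings. The sketch below is illustrative only; the function name and the Likert-scale data are hypothetical, not drawn from the paper:

```python
from math import comb

def sign_test(pre, post):
    """Two-sided exact sign test on paired ratings.

    Counts participants whose concern rating rose vs. fell after the
    technical briefing (ties dropped) and returns the number of
    increases, the number of decreases, and the exact binomial p-value
    under H0: increases and decreases are equally likely.
    """
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    n = len(diffs)
    ups = sum(d > 0 for d in diffs)
    k = min(ups, n - ups)
    # two-sided: 2 * P(X <= k) for X ~ Binomial(n, 0.5)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return ups, n - ups, min(p, 1.0)

# Hypothetical concern ratings for 10 participants (1 = none, 5 = high)
pre  = [2, 3, 2, 1, 3, 2, 2, 4, 1, 2]
post = [4, 4, 3, 3, 3, 4, 2, 5, 3, 4]
ups, downs, p = sign_test(pre, post)
print(f"{ups} increased, {downs} decreased, p = {p:.4f}")
```

A larger sample would also permit a Wilcoxon signed-rank test, which uses the magnitude of each shift rather than only its direction.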

Figures

Figures reproduced from arXiv: 2604.05571 by Haoyu Wang, Liu Wang, Luona Xu, Ran Jin, Shidong Pan, Tianming Liu.

Figure 1. GenAI smartphone usage scenarios.
Figure 2. An overview of our study.
Figure 3. The findings summarized from the interview.
read the original abstract

GenAI smartphones, which natively embed generative AI at the system level, are transforming mobile interactions by automating a wide range of tasks and executing UI actions on behalf of users. Their superior capabilities rely on continuous access to sensitive and context-rich data, raising privacy concerns that surpass those of traditional mobile devices. Yet, little is known about how users perceive the privacy implications of such devices or what safeguards they expect, which is especially critical at this early stage of GenAI smartphone adoption. To address this gap, we conduct 22 semi-structured interviews with everyday mobile users to explore their usage of GenAI smartphones, privacy concerns, and privacy design expectations. Our findings show that users engage with GenAI smartphones with limited understanding of how these systems operate to deliver functions, but show heightened privacy concerns once exposed to the technical details. Participants' concerns span the entire data lifecycle, including nontransparent collection, insecure storage, and weak data control. In a follow-up focus group, participants discuss a range of privacy-enhancing suggestions that call for coordinated changes across system-level controls, data management practices, and user-facing transparency. Their concerns and suggestions offer user-centered guidance for designing GenAI smartphones that balance functionality with privacy protection, offering valuable takeaways for system designers and regulators.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper reports results from 22 semi-structured interviews and a follow-up focus group with everyday mobile users on their perceptions of GenAI smartphones. It claims users operate these devices with limited understanding of their internal data flows but exhibit heightened privacy concerns once technical details are explained, with worries spanning nontransparent collection, insecure storage, and weak control across the data lifecycle. Participants propose coordinated privacy enhancements at system, data-management, and transparency levels to guide designers and regulators.

Significance. If the findings are robust, the work supplies timely, user-grounded insights into privacy expectations for an emerging class of devices at an early adoption stage. The qualitative design is well-suited to surfacing nuanced expectations and suggestions that quantitative surveys might miss, and the paper appropriately grounds its claims in direct participant responses rather than fitted models or derivations.

major comments (2)
  1. [Abstract and §4] Abstract and §4 (Findings): the central claim that participants 'show heightened privacy concerns once exposed to the technical details' is load-bearing yet rests on an untested assumption. Because technical explanations occur inside the same semi-structured interviews that elicit the concerns, the design lacks a pre-exposure baseline, making it impossible to separate pre-existing attitudes from priming effects introduced by the researcher.
  2. [§3] §3 (Methods): the manuscript supplies no information on recruitment strategy, interview protocol details, qualitative coding process, inter-rater reliability, or steps taken to reduce social-desirability bias in privacy discussions. These omissions directly affect the credibility of the reported concerns and suggestions.
minor comments (2)
  1. [Abstract] The abstract and introduction could more explicitly acknowledge the small sample size and exploratory nature when generalizing to 'users' and 'GenAI smartphones'.
  2. A brief comparison table or bullet list contrasting GenAI smartphone privacy concerns with those of conventional smartphones would improve readability of the contribution.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback on our manuscript. The comments identify key areas where greater precision and transparency are needed. We respond to each major comment below and will revise the manuscript to address them.

read point-by-point responses
  1. Referee: [Abstract and §4] Abstract and §4 (Findings): the central claim that participants 'show heightened privacy concerns once exposed to the technical details' is load-bearing yet rests on an untested assumption. Because technical explanations occur inside the same semi-structured interviews that elicit the concerns, the design lacks a pre-exposure baseline, making it impossible to separate pre-existing attitudes from priming effects introduced by the researcher.

    Authors: We acknowledge the validity of this point. The interview protocol began with open questions on device usage and understanding before any technical explanations were offered, after which privacy concerns were probed. However, because all elements occurred within the same session, we lack an independent baseline and cannot isolate priming effects from the researcher's explanations. In the revision we will rephrase the abstract and §4 to describe the observed sequence without implying causation (e.g., “participants voiced privacy concerns after technical details were explained during the interview”). We will also add an explicit limitations paragraph discussing the exploratory nature of the design and the possibility of researcher-induced priming. revision: yes

  2. Referee: [§3] §3 (Methods): the manuscript supplies no information on recruitment strategy, interview protocol details, qualitative coding process, inter-rater reliability, or steps taken to reduce social-desirability bias in privacy discussions. These omissions directly affect the credibility of the reported concerns and suggestions.

    Authors: We agree that the current §3 lacks these essential details. In the revised manuscript we will expand the methods section to describe the recruitment strategy, provide the full interview protocol, outline the qualitative coding and thematic analysis process, report any inter-rater reliability steps taken, and explain the measures used to reduce social-desirability bias. These additions will improve transparency and allow readers to assess the robustness of the findings. revision: yes
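One concrete inter-rater reliability step the revised methods section could report is agreement between two coders labeling a shared sample of transcript segments. A minimal illustration using Cohen's kappa; the coder labels and category names below are hypothetical:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders over the same items:
    observed agreement corrected for agreement expected by chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two coders assigning data-lifecycle stages to 10 interview excerpts
coder1 = ["collect", "store", "store", "control", "collect",
          "collect", "store", "control", "collect", "store"]
coder2 = ["collect", "store", "control", "control", "collect",
          "collect", "store", "control", "store", "store"]
print(f"kappa = {cohens_kappa(coder1, coder2):.3f}")
```

For more than two coders or for ordinal codes, Krippendorff's alpha is the usual generalization; the principle of reporting chance-corrected agreement is the same.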

Circularity Check

0 steps flagged

No circularity: direct reporting of interview data

full rationale

This is a qualitative empirical study based on 22 semi-structured interviews and one follow-up focus group. The central claims (limited user understanding of GenAI smartphone operation, heightened privacy concerns after exposure to technical details, and lifecycle-spanning concerns) are presented as direct summaries of participant responses. No equations, fitted parameters, predictions, derivations, or self-citation chains exist that could reduce any result to its own inputs by construction. The enumerated circularity patterns (self-definitional, fitted-input-called-prediction, self-citation load-bearing, etc.) do not apply. As standard exploratory HCI research, the study stands on its own without depending on external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard assumptions of qualitative user research rather than new formal constructs.

axioms (1)
  • domain assumption: Semi-structured interviews with a small sample of everyday users can surface representative privacy perceptions and expectations.
    Invoked implicitly when generalizing from 22 participants to guidance for designers and regulators.

pith-pipeline@v0.9.0 · 5528 in / 1074 out tokens · 49184 ms · 2026-05-10T19:13:06.478622+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

85 extracted references · 85 canonical work pages · 1 internal anchor
