Understanding User Privacy Perceptions of GenAI Smartphones
Pith reviewed 2026-05-10 19:13 UTC · model grok-4.3
The pith
Users engage with GenAI smartphones without understanding their data operations but develop strong privacy concerns once technical details are explained.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Users of GenAI smartphones operate with limited understanding of how the systems deliver functions through continuous sensitive data access, but their privacy concerns increase markedly once they are shown the technical details; these concerns cover the entire data lifecycle from nontransparent collection to insecure storage and insufficient control, and participants propose coordinated changes in system controls, data management, and user-facing transparency to address them.
What carries the argument
Semi-structured interviews with 22 participants that first elicit general usage views and then expose users to technical explanations of GenAI smartphone data operations, followed by a focus group to collect privacy-enhancing suggestions.
Where Pith is reading between the lines
- Transparency features may need careful calibration so that informing users does not simply raise demands without corresponding control options.
- The pattern of concerns across the data lifecycle could inform privacy requirements for other AI-embedded consumer devices beyond smartphones.
- Testing specific prototypes that implement the suggested system-level controls and transparency elements would show whether they reduce reported concerns in practice.
Load-bearing premise
That the perceptions reported by this sample of 22 users reflect wider attitudes, and that exposure to technical details does not artificially inflate the concerns participants express.
What would settle it
A larger-scale survey or controlled experiment measuring privacy concern levels before versus after exposure to the same GenAI smartphone technical details across a demographically broader group of mobile users.
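The proposed pre/post comparison amounts to a within-subject design. A minimal analysis sketch is below; everything in it (the 1–7 concern-rating scale, the sample values, the `paired_t` helper) is illustrative and assumed, not taken from the paper:

```python
# Hypothetical sketch: privacy-concern ratings (assumed 1-7 Likert scale)
# measured before and after participants see the technical explanation.
# The data values are invented for illustration only.
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for within-subject pre/post ratings."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    # mean difference divided by its standard error
    return mean(diffs) / (stdev(diffs) / sqrt(n))

pre  = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]   # baseline concern ratings (invented)
post = [5, 5, 4, 6, 4, 5, 5, 6, 4, 5]   # ratings after technical details (invented)
print(round(paired_t(pre, post), 2))     # ≈ 11.7 for these invented values
```

With real data one would likely prefer a non-parametric test (e.g., a Wilcoxon signed-rank test) for ordinal Likert responses, plus a between-subjects control group that never receives the technical explanation, to separate priming from genuine attitude change.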
Original abstract
GenAI smartphones, which natively embed generative AI at the system level, are transforming mobile interactions by automating a wide range of tasks and executing UI actions on behalf of users. Their superior capabilities rely on continuous access to sensitive and context-rich data, raising privacy concerns that surpass those of traditional mobile devices. Yet, little is known about how users perceive the privacy implications of such devices or what safeguards they expect, which is especially critical at this early stage of GenAI smartphone adoption. To address this gap, we conduct 22 semi-structured interviews with everyday mobile users to explore their usage of GenAI smartphones, privacy concerns, and privacy design expectations. Our findings show that users engage with GenAI smartphones with limited understanding of how these systems operate to deliver functions, but show heightened privacy concerns once exposed to the technical details. Participants' concerns span the entire data lifecycle, including nontransparent collection, insecure storage, and weak data control. In a follow-up focus group, participants discuss a range of privacy-enhancing suggestions that call for coordinated changes across system-level controls, data management practices, and user-facing transparency. Their concerns and suggestions offer user-centered guidances for designing GenAI smartphones that balance functionality with privacy protection, offering valuable takeaways for system designers and regulators.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports results from 22 semi-structured interviews and a follow-up focus group with everyday mobile users on their perceptions of GenAI smartphones. It claims users operate these devices with limited understanding of their internal data flows but exhibit heightened privacy concerns once technical details are explained, with worries spanning nontransparent collection, insecure storage, and weak control across the data lifecycle. Participants propose coordinated privacy enhancements at system, data-management, and transparency levels to guide designers and regulators.
Significance. If the findings are robust, the work supplies timely, user-grounded insights into privacy expectations for an emerging class of devices at an early adoption stage. The qualitative design is well-suited to surfacing nuanced expectations and suggestions that quantitative surveys might miss, and the paper appropriately grounds its claims in direct participant responses rather than fitted models or derivations.
major comments (2)
- [Abstract and §4] Abstract and §4 (Findings): the central claim that participants 'show heightened privacy concerns once exposed to the technical details' is load-bearing yet rests on an untested assumption. Because technical explanations occur inside the same semi-structured interviews that elicit the concerns, the design lacks a pre-exposure baseline, making it impossible to separate pre-existing attitudes from priming effects introduced by the researcher.
- [§3] §3 (Methods): the manuscript supplies no information on recruitment strategy, interview protocol details, qualitative coding process, inter-rater reliability, or steps taken to reduce social-desirability bias in privacy discussions. These omissions directly affect the credibility of the reported concerns and suggestions.
minor comments (2)
- [Abstract] The abstract and introduction could more explicitly acknowledge the small sample size and exploratory nature when generalizing to 'users' and 'GenAI smartphones'.
- A brief comparison table or bullet list contrasting GenAI smartphone privacy concerns with those of conventional smartphones would improve readability of the contribution.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback on our manuscript. The comments identify key areas where greater precision and transparency are needed. We respond to each major comment below and will revise the manuscript to address them.
Point-by-point responses
Referee: [Abstract and §4] Abstract and §4 (Findings): the central claim that participants 'show heightened privacy concerns once exposed to the technical details' is load-bearing yet rests on an untested assumption. Because technical explanations occur inside the same semi-structured interviews that elicit the concerns, the design lacks a pre-exposure baseline, making it impossible to separate pre-existing attitudes from priming effects introduced by the researcher.
Authors: We acknowledge the validity of this point. The interview protocol began with open questions on device usage and understanding before any technical explanations were offered, after which privacy concerns were probed. However, because all elements occurred within the same session, we lack an independent baseline and cannot isolate priming effects from the researcher's explanations. In the revision we will rephrase the abstract and §4 to describe the observed sequence without implying causation (e.g., “participants voiced privacy concerns after technical details were explained during the interview”). We will also add an explicit limitations paragraph discussing the exploratory nature of the design and the possibility of researcher-induced priming. revision: yes
Referee: [§3] §3 (Methods): the manuscript supplies no information on recruitment strategy, interview protocol details, qualitative coding process, inter-rater reliability, or steps taken to reduce social-desirability bias in privacy discussions. These omissions directly affect the credibility of the reported concerns and suggestions.
Authors: We agree that the current §3 lacks these essential details. In the revised manuscript we will expand the methods section to describe the recruitment strategy, provide the full interview protocol, outline the qualitative coding and thematic analysis process, report any inter-rater reliability steps taken, and explain the measures used to reduce social-desirability bias. These additions will improve transparency and allow readers to assess the robustness of the findings. revision: yes
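One way the revised §3 could report inter-rater reliability is a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is a generic illustration; the thematic code names and coder label sequences are invented assumptions, not data from the manuscript:

```python
# Illustrative sketch (not from the manuscript): Cohen's kappa for two
# coders who each assign one thematic code to every interview excerpt.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' label sequences."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # observed proportion of excerpts where the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # agreement expected by chance, from each coder's marginal label frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[lab] * freq_b[lab]
                   for lab in set(coder_a) | set(coder_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented codes spanning the data lifecycle themes discussed in the paper
a = ["collection", "storage", "control", "collection", "storage", "control"]
b = ["collection", "storage", "control", "storage",    "storage", "control"]
print(round(cohens_kappa(a, b), 3))  # 0.75 for these invented labels
```

For more than two coders, or for codebooks where excerpts can carry multiple labels, Krippendorff's alpha is the more common choice in CSCW/HCI reporting.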
Circularity Check
No circularity: direct reporting of interview data
full rationale
This is a qualitative empirical study based on 22 semi-structured interviews and one follow-up focus group. The central claims (limited user understanding of GenAI smartphone operation, heightened privacy concerns after exposure to technical details, and lifecycle-spanning concerns) are presented as direct summaries of participant responses. No equations, fitted parameters, predictions, derivations, or self-citation chains exist that could reduce any result to its own inputs by construction. The enumerated circularity patterns (self-definitional, fitted-input-called-prediction, self-citation load-bearing, etc.) do not apply. The study is self-contained against external benchmarks as standard exploratory HCI research.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: semi-structured interviews with a small sample of everyday users can surface representative privacy perceptions and expectations.