A Sociotechnical, Practitioner-Centered Approach to Technology Adoption in Cybersecurity Operations: An LLM Case
Pith reviewed 2026-05-09 21:38 UTC · model grok-4.3
The pith
Embedding researchers in a SOC to co-create LLM tools with practitioners produces sustained adoption by aligning with operational needs.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Embedding researchers within the SOC for six months allowed identification of recurring operational challenges such as repetitive tasks, fragmented data, and tooling bottlenecks. Direct collaboration with practitioners then produced LLM companion tools that were iteratively refined to reduce workflow disruption and increase interpretability. The resulting transition from initial skepticism to sustained adoption is explained by a sociotechnical co-creation process consistent with Nonaka's SECI model.
What carries the argument
The sociotechnical co-creation process aligned with Nonaka's SECI model (socialization, externalization, combination, and internalization of knowledge between researchers and practitioners), which converts tacit operational expertise into usable LLM tools.
Load-bearing premise
The shift to sustained adoption occurred primarily because of the sociotechnical co-creation process rather than because of the particular company culture, the researchers' ongoing presence, or the specific capabilities of the LLM.
What would settle it
Observe whether a comparable SOC that receives LLM tools without the six-month embedded co-creation process still reaches sustained adoption, or whether a SOC that follows the co-creation process fails to adopt the tools.
Original abstract
Technology for security operations centers (SOCs) has a storied history of slow adoption due to concerns about trust and reliability. These concerns are amplified with artificial intelligence, particularly large language models (LLMs), which exhibit issues such as hallucinations and inconsistent outputs. To assess whether LLM-based tools can improve SOC efficiency, we embedded two PhD researchers within a multinational company SOC for six months of ethnographic fieldwork. We identified recurring challenges, such as repetitive tasks, fragmented/unclear data, and tooling bottlenecks, and collaborated directly with practitioners to develop LLM companion tools aligned with their operational needs. Iterative refinement reduced workflow disruption and improved interpretability, leading from skepticism to sustained adoption. Ethnographic analysis indicates that this shift was enabled by our sociotechnical co-creation process consistent with Nonaka's SECI model. This framework explains the common challenges in traditional SOC technology adoption, including workflow misalignment, rigidity against evolving threats and internal requirements, and stagnation over time. Our findings show that the co-creation approach can overcome these old barriers and create a new paradigm for creating usable technology for cybersecurity operations.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports on a six-month embedded ethnographic study in which two PhD researchers collaborated with practitioners in a multinational SOC to co-create LLM-based companion tools addressing repetitive tasks, fragmented data, and tooling bottlenecks. Drawing on Nonaka's SECI model as an interpretive framework, the authors claim that their sociotechnical co-creation process enabled a transition from initial skepticism to sustained adoption, overcoming longstanding barriers such as workflow misalignment, rigidity to evolving threats, and stagnation that have historically impeded technology uptake in cybersecurity operations.
Significance. If the attribution of sustained adoption to the co-creation/SECI-aligned process is robustly supported, the work would provide a concrete, practitioner-centered model for developing usable AI tools in high-stakes operational environments. The embedded fieldwork yields rich, context-specific insights into SOC workflows and LLM integration challenges that are rarely captured in lab-based or survey studies. The paper also explicitly credits the iterative, collaborative refinement process for reducing disruption and improving interpretability.
major comments (2)
- [Abstract and Findings/Discussion sections] The central causal claim—that the observed shift to sustained LLM tool adoption resulted primarily from the sociotechnical co-creation process aligned with the SECI model—rests on interpretive ethnographic analysis whose supporting evidence is not detailed. No description is given of data collection protocols, coding procedures, inter-rater reliability checks, or systematic comparison against plausible alternative explanations (researcher presence/Hawthorne effect, pre-existing company culture, specific LLM capabilities, or external iterative pressures). This directly undermines the load-bearing attribution in the abstract and findings.
- [Methodology and Discussion] The manuscript presents a single-case ethnography without a baseline period, control condition, or explicit falsification steps for the SECI interpretation. While qualitative work does not require statistical controls, the strong claim that the co-creation approach 'can overcome these old barriers and create a new paradigm' requires at least a transparent account of how alternative accounts were considered and why they were set aside.
minor comments (2)
- [Abstract] The abstract states that 'ethnographic analysis indicates' the enabling role of the process but supplies no methodological detail; moving a concise methods summary into the abstract would strengthen the reader's ability to evaluate the claim.
- [Introduction and Discussion] Clarify whether the SECI model was used prospectively to guide the co-creation activities or applied retrospectively as an interpretive lens; the current wording leaves this ambiguous.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback, which highlights important areas for strengthening the evidentiary basis and transparency of our ethnographic claims. We have prepared point-by-point responses below and will incorporate revisions to address the concerns while preserving the interpretive nature of the study.
Point-by-point responses
-
Referee: [Abstract and Findings/Discussion sections] The central causal claim—that the observed shift to sustained LLM tool adoption resulted primarily from the sociotechnical co-creation process aligned with the SECI model—rests on interpretive ethnographic analysis whose supporting evidence is not detailed. No description is given of data collection protocols, coding procedures, inter-rater reliability checks, or systematic comparison against plausible alternative explanations (researcher presence/Hawthorne effect, pre-existing company culture, specific LLM capabilities, or external iterative pressures). This directly undermines the load-bearing attribution in the abstract and findings.
Authors: We agree that greater methodological transparency is required to support the interpretive claims. In the revised manuscript we will expand the Methodology section with explicit descriptions of data collection protocols (daily field notes, semi-structured interviews, artifact analysis, and participant observation logs), the thematic coding process (iterative open and axial coding guided by SECI constructs, with illustrative data excerpts), and our reflexive practices including member checking with SOC practitioners to establish credibility. Although conventional inter-rater reliability statistics are not standard in interpretive ethnography, we will document how multiple researchers cross-verified interpretations. We will also add a new subsection in the Discussion that systematically evaluates alternative explanations, citing specific observational sequences (e.g., periods of persistent skepticism despite LLM availability that resolved only after co-creation workshops) to show why the sociotechnical process was the primary driver rather than researcher presence, pre-existing culture, or external factors. revision: yes
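The coding-and-tally step the authors describe could be made auditable mechanically. The sketch below is a minimal, purely hypothetical illustration (the excerpt IDs, SECI codes, and adoption stages are invented, not data from the study): it cross-tabulates coded field-note excerpts by SECI phase and adoption stage, the kind of within-case variation the authors propose to draw on.

```python
from collections import Counter

# Hypothetical coded field-note excerpts: (excerpt_id, seci_phase, adoption_stage).
# SECI phases: socialization, externalization, combination, internalization.
# All records below are invented for illustration; none come from the paper.
coded_excerpts = [
    ("fn-012", "socialization", "skepticism"),
    ("fn-037", "externalization", "skepticism"),
    ("fn-051", "externalization", "trial_use"),
    ("fn-064", "combination", "trial_use"),
    ("fn-088", "internalization", "sustained_adoption"),
    ("fn-090", "internalization", "sustained_adoption"),
]

def phase_stage_matrix(excerpts):
    """Cross-tabulate SECI phase against adoption stage."""
    return Counter((phase, stage) for _, phase, stage in excerpts)

matrix = phase_stage_matrix(coded_excerpts)
print(matrix[("internalization", "sustained_adoption")])  # 2 in this toy data
```

Such a cross-tabulation does not establish causation, but it makes a claimed chronological pattern (e.g., internalization codes clustering with sustained adoption) inspectable by a reader.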
-
Referee: [Methodology and Discussion] The manuscript presents a single-case ethnography without a baseline period, control condition, or explicit falsification steps for the SECI interpretation. While qualitative work does not require statistical controls, the strong claim that the co-creation approach 'can overcome these old barriers and create a new paradigm' requires at least a transparent account of how alternative accounts were considered and why they were set aside.
Authors: We accept that the single-case embedded design precludes a formal baseline period or control condition, as the collaboration began immediately upon entry and the study was not structured as an intervention trial. In the revision we will add an explicit account in the Discussion of how alternative explanations were considered and set aside during analysis, drawing on chronological field evidence that links specific co-creation events to adoption milestones while ruling out alternatives (e.g., adoption did not occur until workflow alignment was achieved through practitioner input, despite earlier LLM exposure). We will also moderate the abstract and conclusion language to emphasize the interpretive contribution and list the single-case limitation more prominently, while noting that the depth of contextual data provides a falsification opportunity through within-case variation. revision: partial
Circularity Check
No circularity: interpretive ethnographic analysis applies external SECI lens to observations
full rationale
The paper presents findings from six months of embedded ethnographic fieldwork in a single SOC, identifying challenges and describing iterative co-creation of LLM tools leading to adoption. Nonaka's SECI model is invoked only as a post-hoc interpretive framework to explain the observed shift from skepticism to sustained use, not as a source of predictions or derivations that loop back to the data. No equations, fitted parameters, self-definitional constructs, or load-bearing self-citations appear in the provided text. The central attribution rests on qualitative analysis rather than reducing by construction to its inputs; any weaknesses lie in causal inference and lack of controls, not circular logic.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Nonaka's SECI model accurately captures the knowledge conversion processes that enable technology adoption in security operations centers.
Reference graph
Works this paper leans on
-
[1]
A human capital model for mitigating security analyst burnout,
S. C. Sundaramurthy, A. G. Bardas, J. Case, X. Ou, M. Wesch, J. McHugh, and S. R. Rajagopalan, “A human capital model for mitigating security analyst burnout,” in Proceedings of the Eleventh USENIX Conference on Usable Privacy and Security, ser. SOUPS ’15. USA: USENIX Association, 2015, pp. 347–359. [Online]. Available: https://www.usenix.org/conference/so...
2015
-
[2]
Humans are dynamic- our tools should be too,
S. C. Sundaramurthy, M. Wesch, X. Ou, J. McHugh, S. R. Rajagopalan, and A. G. Bardas, “Humans are dynamic- our tools should be too,” IEEE Internet Computing, vol. 21, no. 3, pp. 40–46, 2017. [Online]. Available: https://ieeexplore.ieee.org/document/7927884/
2017
-
[3]
11 Strategies of a World-Class Cybersecurity Operations Center,
C. Zimmerman, 11 Strategies of a World-Class Cybersecurity Operations Center. MITRE Corporation, 2022. [Online]. Available: https://www.mitre.org/sites/default/files/2022-04/11-strategies-of-a-world-class-cybersecurity-operations-center.pdf
2022
-
[5]
99% false positives: A qualitative study of {SOC} analysts’ perspectives on security alarms,
B. A. Alahmadi, L. Axon, and I. Martinovic, “99% false positives: A qualitative study of {SOC} analysts’ perspectives on security alarms,” in 31st USENIX Security Symposium (USENIX Security 22), 2022, pp. 2783–2800. [Online]. Available: https://www.usenix.org/conference/usenixsecurity22/presentation/alahmadi
2022
-
[6]
Why don’t software developers use static analysis tools to find bugs?
B. Johnson, Y. Song, E. Murphy-Hill, and R. Bowdidge, “Why don’t software developers use static analysis tools to find bugs?” in 2013 35th International Conference on Software Engineering (ICSE). IEEE, 2013, pp. 672–681. [Online]. Available: https://petertsehsun.github.io/soen7481/papers/icse13b.pdf
2013
-
[7]
Questions developers ask while diagnosing potential security vulnerabilities with static analysis,
J. Smith, B. Johnson, E. Murphy-Hill, B. Chu, and H. R. Lipford, “Questions developers ask while diagnosing potential security vulnerabilities with static analysis,” in Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, 2015, pp. 248–259. [Online]. Available: https://dl.acm.org/doi/pdf/10.1145/2786805.2786812
2015
-
[8]
Alert fatigue in security operations centres: Research challenges and opportunities,
S. Tariq, M. Baruwal Chhetri, S. Nepal, and C. Paris, “Alert fatigue in security operations centres: Research challenges and opportunities,” ACM Computing Surveys, vol. 57, no. 9, pp. 1–38, 2025. [Online]. Available: https://dl.acm.org/doi/10.1145/3723158
2025
-
[9]
A field study to uncover and a tool to support the alert investigation process of tier-1 analysts,
L. Kersten, K. Beelen, E. Zambon, C. Snijders, and L. Allodi, “A field study to uncover and a tool to support the alert investigation process of tier-1 analysts,” Proc. of USEC, pp. 1–15, 2025. [Online]. Available: https://www.ndss-symposium.org/wp-content/uploads/usec25-34.pdf
2025
-
[10]
A survey of large language models in cybersecurity,
G. de Jesus Coelho da Silva and C. B. Westphall, “A survey of large language models in cybersecurity,” arXiv preprint arXiv:2402.16968, 2024. [Online]. Available: https://arxiv.org/abs/2402.16968
2024
-
[11]
Large language models for cyber security: A systematic literature review,
H. Xu, S. Wang, N. Li, K. Wang, Y. Zhao, K. Chen, T. Yu, Y. Liu, and H. Wang, “Large language models for cyber security: A systematic literature review,” 2024. [Online]. Available: https://arxiv.org/abs/2405.04760
2024
-
[12]
Ai-driven guided response for security operation centers with microsoft copilot for security,
S. Freitas, J. Kalajdjieski, A. Gharib, and R. McCann, “Ai-driven guided response for security operation centers with microsoft copilot for security,” in Companion Proceedings of the ACM on Web Conference 2025, ser. WWW ’25. New York, NY, USA: Association for Computing Machinery, 2025, pp. 191–200. [Online]. Available: https://doi.org/10.1145/3701716.3715209
2025
-
[13]
Randomized controlled trial for copilot for security (whitepaper),
B. Edelman, J. Bono, S. Peng, R. Rodriguez, and S. Ho, “Randomized controlled trial for copilot for security (whitepaper),” Jan. 2024. [Online]. Available: https://www.microsoft.com/content/dam/microsoft/final/en-us/microsoft-product-and-services/microsoft-dynamics-365/pdf/Microsoft-Copilot-for-Security-productivity-findings-Whitepaper-Jan2024.pdf
2024
-
[14]
Rapid7’s AI engine supercharges security operations with generative AI,
Rapid7, “Rapid7’s AI engine supercharges security operations with generative AI,” Jun. 2024. [Online]. Available: https://www.rapid7.com/about/press-releases/rapid7s-ai-engine-supercharges-security-operations-with-generative-ai/
2024
-
[15]
Reliaquest launches first autonomous, self-learning AI agent for security operations,
ReliaQuest, “Reliaquest launches first autonomous, self-learning AI agent for security operations,” Sep. [Online]. Available: https://reliaquest.com/news-and-press/reliaquest-launches-first-autonomous-self-learning-ai-agent-for-security-operations/
-
[17]
Navigating autonomy: Unveiling security experts’ perspectives on augmented intelligence in cybersecurity,
N. Roch, H. Sievers, L. Schöni, and V. Zimmermann, “Navigating autonomy: Unveiling security experts’ perspectives on augmented intelligence in cybersecurity,” in Twentieth Symposium on Usable Privacy and Security (SOUPS 2024). Philadelphia, PA: USENIX Association, Aug. 2024, pp. 41–60. [Online]. Available: https://www.usenix.org/conference/soups2024/prese...
2024
-
[18]
Human performance in security operations: a survey on burnout, well-being and flow state among practitioners,
K. Thimmaraju, S. I. Rispens, and G.-J. Ahn, “Human performance in security operations: a survey on burnout, well-being and flow state among practitioners,” in Proc. 2025 Workshop on Security Operations Center Operations and Construction (WOSOC 2025), 2025, pp. 2–4. [Online]. Available: https://www.ndss-symposium.org/wp-content/uploads/wosoc25-final2.pdf
2025
-
[19]
Matched and mismatched SOCs: A qualitative study on security operations center issues,
F. B. Kokulu, A. Soneji, T. Bao, Y. Shoshitaishvili, Z. Zhao, A. Doupé, and G.-J. Ahn, “Matched and mismatched SOCs: A qualitative study on security operations center issues,” in Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS ’19. New York, NY, USA: Association for Computing Machinery, 2019, pp. 1955–1970. ...
2019
-
[20]
Analysis of cyber security knowledge gaps based on cyber security body of knowledge,
C. Catal, A. Ozcan, E. Donmez, and A. Kasif, “Analysis of cyber security knowledge gaps based on cyber security body of knowledge,” Education and Information Technologies, vol. 28, no. 2, pp. 1809–1831, Aug. 2022. [Online]. Available: https://doi.org/10.1007/s10639-022-11261-8
2022
-
[21]
Security practitioners in context: Their activities and interactions with other stakeholders within organizations,
R. Werlinger, K. Hawkey, D. Botta, and K. Beznosov, “Security practitioners in context: Their activities and interactions with other stakeholders within organizations,” International Journal of Human-Computer Studies, vol. 67, no. 7, pp. 584–606, 2009. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1071581909000354
2009
-
[22]
A dynamic theory of organizational knowledge creation,
I. Nonaka, “A dynamic theory of organizational knowledge creation,” Organization Science, vol. 5, no. 1, pp. 14–37, 1994. [Online]. Available: https://josephmahoney.web.illinois.edu/BA504_Fall%202008/Uploaded%20in%20Nov%202007/Nonaka%20(1994).pdf
1994
-
[23]
An Anthropological Approach to Studying CSIRTs,
S. C. Sundaramurthy, J. McHugh, X. S. Ou, S. R. Rajagopalan, and M. Wesch, “An Anthropological Approach to Studying CSIRTs,” IEEE Security & Privacy, vol. 12, no. 05, pp. 52–60, Sep. 2014. [Online]. Available: https://doi.ieeecomputersociety.org/10.1109/MSP.2014.84
2014
-
[24]
Situated Learning: Legitimate Peripheral Participation,
J. Lave and E. Wenger, Situated Learning: Legitimate Peripheral Participation, ser. Learning in Doing: Social, Cognitive and Computational Perspectives. Cambridge University Press, 1991. [Online]. Available: https://www.cambridge.org/highereducation/books/situated-learning/6915ABD21C8E4619F750A4D4ACA616CD#overview
1991
-
[25]
Towards bridging the Research-Practice gap: Understanding Researcher-Practitioner interactions and challenges in Human-Centered cybersecurity,
J. M. Haney, C. C. IV, and S. M. Furman, “Towards bridging the Research-Practice gap: Understanding Researcher-Practitioner interactions and challenges in Human-Centered cybersecurity,” in Twentieth Symposium on Usable Privacy and Security (SOUPS 2024). Philadelphia, PA: USENIX Association, Aug. 2024, pp. 567–586. [Online]. Available: https://www.usenix....
2024
-
[26]
Turning contradictions into innovations or: How we learned to stop whining and improve security operations,
S. C. Sundaramurthy, J. McHugh, X. Ou, M. Wesch, A. G. Bardas, and S. R. Rajagopalan, “Turning contradictions into innovations or: How we learned to stop whining and improve security operations,” in Twelfth Symposium on Usable Privacy and Security (SOUPS 2016). Denver, CO: USENIX Association, Jun. 2016, pp. 237–251. [Online]. Available: https://www.useni...
2016
-
[27]
Adopting AI to protect industrial control systems: Assessing challenges and opportunities from the operators’ perspective,
C. Fung, E. Zeng, and L. Bauer, “Adopting AI to protect industrial control systems: Assessing challenges and opportunities from the operators’ perspective,” in Twenty-First Symposium on Usable Privacy and Security (SOUPS 2025), 2025, pp. 555–573. [Online]. Available: https://www.usenix.org/system/files/soups2025-fung.pdf
2025
-
[28]
An analysis of the role of situated learning in starting a security culture in a software company,
A. Tuladhar, D. Lende, J. Ligatti, and X. Ou, “An analysis of the role of situated learning in starting a security culture in a software company,” in Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021). USENIX Association, Aug. 2021, pp. 617–632. [Online]. Available: https://www.usenix.org/conference/soups2021/presentation/tuladhar
2021
-
[29]
Security operations center: A systematic study and open challenges,
M. Vielberth, F. Böhm, I. Fichtinger, and G. Pernul, “Security operations center: A systematic study and open challenges,” IEEE Access, vol. 8, pp. 227756–227779, 2020. [Online]. Available: https://api.semanticscholar.org/CorpusID:230513062
2020
-
[30]
An ethnographic study to assess the enactment of information security culture in a retail store,
A. Greig, K. Renaud, and S. Flowerday, “An ethnographic study to assess the enactment of information security culture in a retail store,” in 2015 World Congress on Internet Security (WorldCIS), 2015, pp. 61–66. [Online]. Available: https://ieeexplore.ieee.org/document/7359415
2015
-
[31]
AI-augmented SOC: A survey of LLMs and agents for security automation,
S. Srinivas, B. Kirk, J. Zendejas, M. Espino, M. Boskovich, A. Bari, K. Dajani, and N. Alzahrani, “AI-augmented SOC: A survey of LLMs and agents for security automation,” Journal of Cybersecurity and Privacy, vol. 5, no. 4, 2025. [Online]. Available: https://www.mdpi.com/2624-800X/5/4/95
2025
-
[32]
The rise of cognitive SOCs: A systematic literature review on AI approaches,
F. Binbeshr, M. Imam, M. Ghaleb, M. Hamdan, M. A. Rahim, and M. Hammoudeh, “The rise of cognitive SOCs: A systematic literature review on AI approaches,” IEEE Open Journal of the Computer Society, vol. 6, pp. 360–379, 2025. [Online]. Available: https://www.computer.org/csdl/journal/oj/2025/01/10858372/23VPu8d631m
2025
-
[33]
Everybody’s got ML, tell me what else you have: Practitioners’ perception of ML-based security tools and explanations,
J. Mink, H. Benkraouda, L. Yang, A. Ciptadi, A. Ahmadzadeh, D. Votipka, and G. Wang, “Everybody’s got ML, tell me what else you have: Practitioners’ perception of ML-based security tools and explanations,” in 2023 IEEE Symposium on Security and Privacy (SP), 2023, pp. 2068–2085. [Online]. Available: https://ieeexplore.ieee.org/document/10179321
2023
-
[34]
LLMs in the SOC: An empirical study of human-AI collaboration in security operations centres,
R. Singh, S. Tariq, F. Jalalvand, M. B. Chhetri, S. Nepal, C. Paris, and M. Lochner, “LLMs in the SOC: An empirical study of human-AI collaboration in security operations centres,” 2025. [Online]. Available: https://arxiv.org/abs/2508.18947
2025
-
[35]
Integrating large language models into security incident response,
D. Kramer, L. Rosique, A. Narotam, E. Bursztein, P. G. Kelley, K. Thomas, and A. Woodruff, “Integrating large language models into security incident response,” in Twenty-First Symposium on Usable Privacy and Security (SOUPS 2025), 2025, pp. 133–148. [Online]. Available: https://www.usenix.org/system/files/soups2025-kramer.pdf
2025
-
[36]
PentestGPT: Evaluating and harnessing large language models for automated penetration testing,
G. Deng, Y. Liu, V. Mayoral-Vilches, P. Liu, Y. Li, Y. Xu, T. Zhang, Y. Liu, M. Pinzger, and S. Rass, “PentestGPT: Evaluating and harnessing large language models for automated penetration testing,” in 33rd USENIX Security Symposium (USENIX Security 24). Philadelphia, PA: USENIX Association, Aug. 2024, pp. 847–864. [Online]. Available: https://www.use...
2024
-
[37]
A preliminary study on using large language models in software pentesting,
K. Shashwat, F. Hahn, X. Ou, D. Goldgof, L. Hall, J. Ligatti, S. R. Rajgopalan, and A. Z. Tabari, “A preliminary study on using large language models in software pentesting,” in Workshop on SOC Operations and Construction (WOSOC), March 2024. [Online]. Available: https://arxiv.org/abs/2401.17459
2024
-
[38]
Too much to trust? measuring the security and cognitive impacts of explainability in AI-driven SOCs,
N. Rastogi, S. Pant, D. Dhanuka, A. Saxena, and P. Mairal, “Too much to trust? measuring the security and cognitive impacts of explainability in AI-driven SOCs,” 2025. [Online]. Available: https://arxiv.org/abs/2503.02065
2025
-
[39]
Explainable AI in cybersecurity operations: Lessons learned from xAI tool deployment,
M. Nyre-Yu, E. Morris, M. R. Smith, B. Moss, and C. Smutz, “Explainable AI in cybersecurity operations: Lessons learned from xAI tool deployment,” Proceedings 2022 Symposium on Usable Security, 2022. [Online]. Available: https://api.semanticscholar.org/CorpusID:253156531
2022
-
[40]
The encultured brain: an introduction to neuroanthropology,
D. Lende and G. Downey, Eds., The encultured brain: an introduction to neuroanthropology. MIT Press, Jan. [Online]. Available: https://direct.mit.edu/books/edited-volume/3397/The-Encultured-BrainAn-Introduction-to
-
[42]
Perspective—tacit knowledge and knowledge conversion: Controversy and advancement in organizational knowledge creation theory,
I. Nonaka and G. von Krogh, “Perspective—tacit knowledge and knowledge conversion: Controversy and advancement in organizational knowledge creation theory,” Organization Science, vol. 20, no. 3, pp. 635–652, 2009. [Online]. Available: https://pubsonline.informs.org/doi/10.1287/orsc.1080.0412
2009
-
[43]
Plans and situated actions: An inquiry into the idea of human-machine communication,
L. A. Suchman, Plans and situated actions: An inquiry into the idea of human-machine communication. University of California, Berkeley, 1984. [Online]. Available: https://www.cs.colby.edu/courses/J16/cs267/papers/Suchman-PlansAndSituatedActions.pdf
1984
-
[44]
Studies of expansive learning: Foundations, findings and future challenges,
Y. Engeström and A. Sannino, “Studies of expansive learning: Foundations, findings and future challenges,” Introduction to Vygotsky, pp. 100–146, 2017. [Online]. Available: https://www.sciencedirect.com/science/article/abs/pii/S1747938X10000035
2017
-
[45]
Tales of the field: On writing ethnography,
J. Van Maanen, Tales of the field: On writing ethnography. University of Chicago Press, 2011. [Online]. Available: https://press.uchicago.edu/ucp/books/book/chicago/T/bo11574153.html
2011
-
[46]
Work-oriented design of computer artifacts,
P. Ehn, “Work-oriented design of computer artifacts,” Ph.D. dissertation, Arbetslivscentrum, 1988. [Online]. Available: https://www.diva-portal.org/smash/get/diva2:580037/fulltext02.pdf
1988
-
[47]
A multifaceted intervention to reduce inappropriate polypharmacy in primary care: research co-creation opportunities in a pilot study,
K. Anderson, M. Foster, C. Freeman, and I. Scott, “A multifaceted intervention to reduce inappropriate polypharmacy in primary care: research co-creation opportunities in a pilot study,” The Medical Journal of Australia, vol. 204, pp. S41–S, 04 2016. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.5694/mja16.00125
2016
-
[48]
Achieving research impact through co-creation in community-based health services: Literature review and case study,
T. Greenhalgh, C. Jackson, S. Shaw, and T. Janamian, “Achieving research impact through co-creation in community-based health services: Literature review and case study: Achieving research impact through co-creation,” The Milbank Quarterly, vol. 94, pp. 392–429, 06 2016. [Online]. Available: https://pubmed.ncbi.nlm.nih.gov/27265562/
2016
-
[49]
"but they have overlooked a few things in Afghanistan:" an analysis of the integration of biometric voter verification in the 2019 afghan presidential elections,
K. Panahi, S. Robertson, Y. Acar, A. G. Bardas, T. Kohno, and L. Simko, “"but they have overlooked a few things in Afghanistan:" an analysis of the integration of biometric voter verification in the 2019 afghan presidential elections,” in 33rd USENIX Security Symposium (USENIX Security 24). Philadelphia, PA: USENIX Association, Aug. 2024, pp. 2047–2064. [...
2024
-
[50]
A comprehensive survey of retrieval-augmented generation (RAG): Evolution, current landscape and future directions,
S. Gupta, R. Ranjan, and S. N. Singh, “A comprehensive survey of retrieval-augmented generation (RAG): Evolution, current landscape and future directions,” arXiv preprint arXiv:2410.12837, 2024. [Online]. Available: https://arxiv.org/abs/2410.12837
2024
-
[51]
Retrieval augmented generation for robust cyber defense,
M. Rahman, K. O. Piryani, A. M. Sanchez, S. Munikoti, L. De La Torre, M. S. Levin, M. Akbar, M. Hossain, M. Hasan, and M. Halappanavar, “Retrieval augmented generation for robust cyber defense,” Pacific Northwest National Laboratory (PNNL), Richland, WA (United States), Tech. Rep., 2024. [Online]. Available: https://www.pnnl.gov/main/publications/externa...
2024
-
[52]
React: Synergizing reasoning and acting in language models,
S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. R. Narasimhan, and Y. Cao, “React: Synergizing reasoning and acting in language models,” in The Eleventh International Conference on Learning Representations, 2023. [Online]. Available: https://arxiv.org/abs/2210.03629
2023
-
[54]
Mistral 7b,
A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, P. Savary, C. Bamford, D. S. Chaplot, D. de las Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier, T. Lavril, T. Lacroix, and et al., “Mistral 7b,” arXiv preprint arXiv:2310.06825, 2023. [Online]. Available: https://arxiv.org/abs/2310.06825
2023
-
[55]
The llama 3 herd of models,
A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, and et al., “The llama 3 herd of models,” 2024. [Online]. Available: https://arxiv.org/abs/2407.21783
2024
-
[56]
Findings of the Association for Computational Linguistics: ACL 2024,
L.-W. Ku, A. F. Martins, and V. Srikumar, Eds., Findings of the Association for Computational Linguistics: ACL 2024. Association for Computational Linguistics, 2024.
2024
-
[57]
Ollama. Ollama. Accessed: 2026/02. [Online]. Available: https://ollama.com/
2026
-
[58]
Chroma. Chroma. Accessed: 2026/02. [Online]. Available: https://www.trychroma.com/
2026
-
[59]
MITRE ATT&CK,
MITRE. MITRE ATT&CK. Accessed: 2026/02. [Online]. Available: https://attack.mitre.org/
2026
[Table 2: Thematic Structure: High-Level Themes, Definitions, and Associated Codes, from the paper’s Appendix A codebook]
discussion (0)