Resume-ing Control: (Mis)Perceptions of Agency Around GenAI Use in Recruiting Workflows (FAccT ’26, June 25–28, 2026, Montreal, QC, Canada)
Pith reviewed 2026-05-07 11:00 UTC · model grok-4.3
The pith
GenAI shapes the core information recruiters use to define jobs and judge candidates, even as they believe they retain final authority over hiring decisions.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Through interviews with 22 recruiting professionals, the authors find that recruiters believe they maintain final authority across the recruiting pipeline, yet genAI has become an invisible architect that shapes the foundational building blocks of information used for evaluation, from defining a job to determining good interview performances. The decision to adopt genAI is frequently outside recruiters' control, driven by calls from higher-ups, the need to counter applicant AI use, and productivity demands. Reported efficiency gains remain marginal and come at the cost of recruiter deskilling that jeopardizes meaningful oversight of decision-making.
What carries the argument
Semi-structured interviews that expose misperceptions of agency, showing that genAI influences decisions indirectly, by redefining the information base recruiters evaluate from, rather than through direct overrides.
If this is right
- Design of genAI tools for hiring must make their effects on foundational information visible to preserve perceived and actual control.
- Deskilling among recruiters risks weakening the quality of human oversight in employment decisions over time.
- Forced or pressured adoption of genAI can produce limited net benefits while eroding professional skills.
- Perceptions of final authority do not align with the actual ways genAI structures evaluation inputs.
Where Pith is reading between the lines
- The same pattern of indirect shaping could affect other decision domains where AI preprocesses data before human review, such as loan approvals or medical triage.
- Recruiter training could incorporate exercises that help users detect and adjust for AI-defined criteria in their workflows.
- Larger-scale surveys or log-based studies of actual tool usage could test whether self-reported control matches observed changes in outputs.
- Regulations on AI in hiring might need requirements for transparency about how AI contributes to job definitions and performance standards.
Load-bearing premise
Self-reported perceptions from interviews with 22 recruiting professionals accurately capture the real extent of genAI influence and extend beyond this specific group without major bias.
What would settle it
A controlled observation where recruiters' job postings, screening criteria, or final candidate selections change substantially once genAI-generated content is removed or its specific contributions are fully disclosed to them.
read the original abstract
When generative AI (genAI) systems are used in high-stakes decision-making, its recommended role is to aid, rather than replace, human decision-making. However, there is little empirical exploration of how professionals making high-stakes decisions, such as those related to employment, perceive their agency and level of control when working with genAI systems. Through interviews with 22 recruiting professionals, we investigate how genAI subtly influences control over everyday workflows and even individual hiring decisions. Our findings highlight a pressing conflict: while recruiters believe they have final authority across the recruiting pipeline, genAI has become an invisible architect that shapes the foundational building blocks of information used for evaluation, from defining a job to determining good interview performances. The decision of whether or not to adopt was also often outside recruiters' control, with many feeling compelled to adopt genAI due to calls to integrate AI from higher-ups in their business, to combat applicant use of AI, and the individual need to boost productivity. Despite a seemingly seismic shift in how recruiting happens, participants only reported marginal efficiency gains. Such gains came at the high cost of recruiter deskilling, a trend that jeopardizes the meaningful oversight of decision-making. We conclude by discussing the implications of such findings for responsible and perceptible genAI use in hiring contexts.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports on semi-structured interviews with 22 recruiting professionals and claims that, despite recruiters' belief in retaining final authority over hiring decisions, generative AI functions as an 'invisible architect' that shapes foundational elements of the recruiting pipeline (job definitions, candidate evaluation criteria, and interview performance standards). It further argues that adoption decisions are often externally driven, efficiency gains are marginal, and the process risks recruiter deskilling, with implications for responsible GenAI deployment in employment contexts.
Significance. If the reported perceptions are robustly documented, the work would add timely empirical insight into human-AI agency dynamics in high-stakes professional settings, extending HCI and CSCW literature on subtle influence and deskilling. The qualitative focus on recruiting workflows offers concrete examples that could inform design guidelines for perceptible AI tools, though the absence of triangulation limits claims about actual (versus perceived) influence.
major comments (3)
- [Methods] Methods section: The manuscript supplies no information on sampling/recruitment procedures, interview protocol or guide, transcription process, thematic analysis steps (e.g., inductive vs. deductive coding), number of coders, or any trustworthiness measures such as inter-rater reliability or member checking. Because the central claims about misperceptions of agency and GenAI's 'invisible architect' role rest exclusively on this thematic analysis, the lack of methodological transparency is load-bearing for evaluating the evidence quality.
- [Abstract and Findings] Abstract and Findings: The assertion that 'genAI has become an invisible architect that shapes the foundational building blocks of information used for evaluation' is presented as an empirical finding rather than a participant-reported perception. The supporting data consist only of self-reported interview excerpts; no workflow observations, artifact comparisons (e.g., AI-generated vs. final job descriptions), adoption metrics, or organizational records are described, leaving the factual framing unsupported by the evidence provided.
- [Results and Discussion] Results/Discussion: Claims of 'marginal efficiency gains' and 'deskilling' are derived from participant self-reports without quantification, pre/post comparisons, or external validation. This weakens the ability to assess the magnitude or generalizability of the reported trade-offs, which are central to the paper's argument about jeopardized oversight.
minor comments (2)
- [Abstract] The abstract and introduction could more explicitly flag that all claims derive from perceptions rather than observed practices, to avoid conflating reported feelings with documented influence.
- [Methods or Results] Participant demographics (e.g., experience level, organization size, geographic distribution) are not summarized in a table or early section, making it harder to evaluate the sample's representativeness.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback, which has identified key areas where the manuscript can be strengthened in terms of transparency and precise framing. We address each major comment below and outline the revisions we will make.
read point-by-point responses
- Referee: [Methods] Methods section: The manuscript supplies no information on sampling/recruitment procedures, interview protocol or guide, transcription process, thematic analysis steps (e.g., inductive vs. deductive coding), number of coders, or any trustworthiness measures such as inter-rater reliability or member checking. Because the central claims about misperceptions of agency and GenAI's 'invisible architect' role rest exclusively on this thematic analysis, the lack of methodological transparency is load-bearing for evaluating the evidence quality.
Authors: We agree that the Methods section requires substantial expansion for full transparency and replicability. The study used purposive sampling to recruit 22 recruiting professionals via LinkedIn outreach and professional associations in the recruiting field. Interviews followed a semi-structured guide addressing daily workflows, genAI tool adoption and use cases, perceptions of control and agency, efficiency impacts, and deskilling concerns. All sessions were audio-recorded with consent and transcribed verbatim. Analysis employed inductive thematic analysis per Braun and Clarke's six-phase framework, with two authors independently coding transcripts, comparing codes, and resolving discrepancies through discussion. Trustworthiness was supported via reflexive memos, team debriefing, and an audit trail of coding decisions. We will revise the Methods section to detail all of these elements and include the interview guide as supplementary material. revision: yes
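The two-coder process described in this response is often paired with a reported agreement statistic. The sketch below computes Cohen's kappa between two coders, a lighter-weight relative of the Krippendorff's alpha cited in the paper's reference list [50]; the labels and excerpts are invented for illustration, since the authors report discussion-based resolution rather than a specific reliability figure.

```python
# Hedged sketch: Cohen's kappa for two coders' labels on the same excerpts.
# The label values below are hypothetical, not drawn from the study's codebook.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders (nominal labels)."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of excerpts where the coders assign the same label.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

coder_a = ["control", "deskill", "control", "adopt", "control", "deskill"]
coder_b = ["control", "deskill", "adopt", "adopt", "control", "control"]
print(round(cohens_kappa(coder_a, coder_b), 2))  # → 0.48
```

Values near 1 indicate near-perfect agreement; values near 0 mean agreement no better than chance, which is exactly when discussion-based reconciliation carries the most weight.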
- Referee: [Abstract and Findings] Abstract and Findings: The assertion that 'genAI has become an invisible architect that shapes the foundational building blocks of information used for evaluation' is presented as an empirical finding rather than a participant-reported perception. The supporting data consist only of self-reported interview excerpts; no workflow observations, artifact comparisons (e.g., AI-generated vs. final job descriptions), adoption metrics, or organizational records are described, leaving the factual framing unsupported by the evidence provided.
Authors: We appreciate the referee's emphasis on accurate epistemological framing. Our work is a qualitative interview study centered on professionals' perceptions and experiences; we do not claim direct observation of genAI's architectural influence or possess artifact comparisons or metrics. We will revise the abstract and findings sections to explicitly qualify the claims as derived from participant accounts and thematic analysis (e.g., 'participants described genAI as functioning as an invisible architect...' and 'our analysis indicates that recruiters perceive...'). We will also strengthen the limitations discussion to note the absence of observational or artifact-based validation and the interpretive nature of the evidence. revision: yes
- Referee: [Results and Discussion] Results/Discussion: Claims of 'marginal efficiency gains' and 'deskilling' are derived from participant self-reports without quantification, pre/post comparisons, or external validation. This weakens the ability to assess the magnitude or generalizability of the reported trade-offs, which are central to the paper's argument about jeopardized oversight.
Authors: We concur that reliance on self-reports without quantification or external benchmarks limits assessment of magnitude and generalizability. In revision we will add more precise reporting of theme prevalence (e.g., noting that the majority of participants described efficiency gains as marginal and providing additional representative excerpts). We will expand the limitations section to explicitly discuss the lack of pre/post data, behavioral measures, or organizational records and the consequent interpretive scope. While we cannot introduce new quantitative data at this stage, we believe the consistent patterns across the sample still offer valuable insight into perceived trade-offs and will frame the argument accordingly as perception-based. revision: partial
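The theme-prevalence reporting promised in this response reduces to counting distinct participants per theme out of the 22 interviewed. A minimal sketch, in which the participant IDs and theme names are hypothetical placeholders rather than the study's data:

```python
# Hedged sketch: counting how many distinct participants voiced each theme.
# The coded excerpts below are invented placeholders.
from collections import defaultdict

coded_excerpts = [
    ("P1", "marginal efficiency gains"),
    ("P1", "deskilling concern"),
    ("P2", "marginal efficiency gains"),
    ("P3", "top-down adoption pressure"),
    ("P3", "marginal efficiency gains"),
    ("P4", "deskilling concern"),
]

N_PARTICIPANTS = 22  # sample size reported in the paper

# Use a set per theme so repeat mentions by one participant count once.
participants_per_theme = defaultdict(set)
for pid, theme in coded_excerpts:
    participants_per_theme[theme].add(pid)

for theme, pids in sorted(participants_per_theme.items()):
    print(f"{theme}: {len(pids)}/{N_PARTICIPANTS} participants")
```

Counting participants rather than excerpts avoids inflating prevalence when a single vocal interviewee returns to the same theme repeatedly.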
Circularity Check
No circularity in qualitative empirical study
full rationale
The paper is an empirical qualitative study relying on semi-structured interviews with 22 recruiting professionals followed by thematic analysis. No equations, derivations, fitted parameters, predictions, or mathematical models are present that could reduce to their own inputs. Central claims are framed as participant perceptions and themes extracted from interview data rather than self-derived results. Any self-citations (if present) are not load-bearing for uniqueness theorems or ansatzes; the evidence chain rests on external interview data and is self-contained against standard qualitative benchmarks.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Self-reported perceptions from interviews accurately reflect participants' actual experiences and the influence of genAI on their workflows.
Reference graph
Works this paper leans on
- [1]
- [2] Jiafu An, Difang Huang, Chen Lin, and Mingzhu Tai. 2025. Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation. PNAS Nexus 4, 3 (2025), pgaf089.
- [3] Jacy Reese Anthis, Kristian Lum, Michael Ekstrand, Avi Feller, and Chenhao Tan. 2025. The impossibility of fair LLMs. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 105–120.
- [4] Lena Armstrong, Abbey Liu, Stephen MacNeil, and Danaë Metaxa. 2024. The Silicon Ceiling: Auditing GPT’s Race and Gender Biases in Hiring. In Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (San Luis Potosi, Mexico) (EAAMO ’24). Association for Computing Machinery, New York, NY, USA, Article 2, 18 pages. ...
- [5] Ashby. 2025. Ashby. https://www.ashbyhq.com/
- [6] Colorado General Assembly. 2024. Consumer Protections for Artificial Intelligence (SB24-205).
- [7] Illinois General Assembly. 2024. Artificial Intelligence in Employment (Public Act 103-0804).
- [8] Avature. 2025. Applicant Tracking System. https://www.avature.net/applicant-tracking-system/
- [9] Alessio Azzutti. 2024. Artificial Intelligence and Machine Learning in Finance: Key Concepts, Applications, and Regulatory Considerations. In The Emerald Handbook of Fintech: Reshaping Finance. Emerald Publishing Limited, 315–339.
- [10]
- [11] Rosanna Bellini, Emily Tseng, Noel Warford, Alaa Daffalla, Tara Matthews, Sunny Consolvo, Jill Palzkill Woelfer, Patrick Gage Kelley, Michelle L. Mazurek, Dana Cuomo, Nicola Dell, and Thomas Ristenpart. 2024. SoK: Safer Digital-Safety Research Involving At-Risk Users. In 2024 IEEE Symposium on Security and Privacy (SP). 635–654. doi:10.1109/SP54263.2024.00071
- [12] Marianne Bertrand and Sendhil Mullainathan. 2004. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review 94, 4 (2004), 991–1013.
- [13] Miranda Bogen and Aaron Rieke. 2018. Help wanted: An examination of hiring algorithms, equity, and bias. Upturn, December 7 (2018).
- [14] Virginia Braun and Victoria Clarke. 2021. Thematic Analysis: A Practical Guide. (2021).
- [15] Breezy. 2025. Breezy. https://breezy.hr/
- [16] BrightHire. 2025. Equitable Hiring. https://brighthire.com/solutions/equitable-hiring/
- [17] BrightHire. 2025. BrightHire. https://brighthire.com/
- [18] Zana Buçinca, Maja Barbara Malaya, and Krzysztof Z. Gajos. 2021. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 188 (April 2021), 21 pages. doi:10.1145/3449287
- [19] Bullhorn. 2025. Bullhorn. https://www.bullhorn.com/
- [20] Shiye Cao and Chien-Ming Huang. 2022. Understanding User Reliance on AI in Assisted Decision-Making. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 471 (Nov. 2022), 23 pages. doi:10.1145/3555572
- [21] S Catherine, NV Suresh, T Mangaiyarkarasi, and Leena Jenefa. 2025. Unveiling the Enigma of Shadow: Ethical Difficulties in the Field of AI. In Navigating Data Science: Unleashing the Creative Potential of Artificial Intelligence. Emerald Publishing Limited, 57–67.
- [22] Aditya Challapally, Chris Pease, Ramesh Raskar, and Pradyumna Chari. 2025. The GenAI Divide. Technical Report. MIT.
- [23] Myra Cheng, Sunny Yu, Cinoo Lee, Pranav Khadpe, Lujain Ibrahim, and Dan Jurafsky. 2025. Social Sycophancy: A Broader Understanding of LLM Sycophancy. doi:10.48550/arXiv.2505.13995 arXiv:2505.13995 [cs]
- [24] Avishek Choudhury and Zaira Chaudhry. 2024. Large language models and user trust: consequence of self-referential learning loop and the deskilling of health care professionals. Journal of Medical Internet Research 26 (2024), e56764.
- [25] Phoebe K Chua and Melissa Mazmanian. 2020. Are you one of us? Current hiring practices suggest the potential for class biases in large tech companies. Proceedings of the ACM on Human-Computer Interaction 4, CSCW2 (2020), 1–20.
- [26] Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. 2022. Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency. 1571–1583.
- [27] Covey. 2024. Covey. https://covey.framer.website/
- [28] Maria De-Arteaga, Riccardo Fogliato, and Alexandra Chouldechova. 2020. A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. doi:10.1145/3313831.3376638
- [29] Online Etymology Dictionary. 2026. Origin and history of agency. https://www.etymonline.com/word/agency
- [30] Colin Duncan and Wendy Loretto. 2004. Never the right age? Gender and age-based discrimination in employment. Gender, Work & Organization 11, 1 (2004), 95–115.
- [31] Eightfold. 2025. AI talent acquisition & recruiting platform. https://eightfold.ai/
- [32] Equal Employment Opportunity Commission; U.S. Department of Labor; U.S. Department of Justice; U.S. Office of Personnel Management; U.S. Department of the Treasury. 1978. Uniform Guidelines on Employee Selection Procedures. 43 Fed. Reg. 38295 (Aug. 25, 1978); codified at 29 C.F.R. Part 1607. Guidance establishing uniform federal standards for employment t...
- [33] European Parliament and Council of the European Union. 2016. General Data Protection Regulation. Official Journal of the European Union L 119 (2016), 1–88. Article 22.
- [34] John C Flanagan. 1954. The Critical Incident Technique. (1954), 1–33.
- [35] Gem. 2025. Gem. https://www.gem.com/
- [36] Kate Glazko, Yusuf Mohammed, Ben Kosa, Venkatesh Potluri, and Jennifer Mankoff. 2024. Identifying and Improving Disability Bias in GPT-Based Resume Screening. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (Rio de Janeiro, Brazil) (FAccT ’24). Association for Computing Machinery, New York, NY, USA, 687–700. doi:10.114...
- [37] ATLAS.ti Scientific Software Development GmbH. 2025. ATLAS.ti Mac. https://atlasti.com
- [38] Ben Green and Yiling Chen. 2019. Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. In Proceedings of the Conference on Fairness, Accountability, and Transparency. 90–99.
- [39] Ben Green and Yiling Chen. 2019. The Principles and Limits of Algorithm-in-the-Loop Decision Making. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 50 (Nov. 2019), 24 pages. doi:10.1145/3359152
- [40] Ben Green and Yiling Chen. 2021. Algorithmic Risk Assessments Can Alter Human Decision-Making Processes in High-Stakes Government Contexts. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 418 (Oct. 2021), 33 pages. doi:10.1145/3479562
- [41] Greenhouse. 2025. Greenhouse. https://www.greenhouse.com
- [42] Hirevue. 2026. Our Science. https://www.hirevue.com/our-science
- [43] Hirevue. 2025. HireVue. https://www.hirevue.com/
- [44] IBM. 2026. Generative AI for Human Resources (HR) Professionals. https://www.coursera.org/specializations/generative-ai-for-human-resources/
- [45] Lujain Ibrahim, Katherine M Collins, Sunnie SY Kim, Anka Reuel, Max Lamparth, Kevin Feng, Lama Ahmad, Prajna Soni, Alia El Kattan, Merlin Stein, et al. 2025. Measuring and mitigating overreliance is necessary for building human-compatible AI. arXiv preprint arXiv:2509.08010 (2025).
- [46] iCIMS. 2025. iCIMS. https://www.icims.com/
- [47] Indeed. 2025. Indeed. https://www.indeed.com/hire
- [48] Juicebox. 2025. Juicebox (PeopleGPT) - The Leading AI Recruiting Platform. https://juicebox.ai/
- [49] Patrick Kline, Evan K Rose, and Christopher R Walters. 2022. Systemic discrimination among large US employers. The Quarterly Journal of Economics 137, 4 (2022), 1963–2036.
- [50] Klaus Krippendorff. 2011. Computing Krippendorff’s alpha-reliability. (2011).
- [51] Olya Kudina and Bas de Boer. 2025. Large language models, politics, and the functionalization of language. AI and Ethics 5, 3 (2025), 2367–2379.
- [52] Alisa Küper and Nicole Krämer. 2025. Psychological traits and appropriate reliance: Factors shaping trust in AI. International Journal of Human–Computer Interaction 41, 7 (2025), 4115–4131.
- [53]
- [54] Lan Li, Tina Lassiter, Joohee Oh, and Min Kyung Lee. 2021. Algorithmic Hiring in Practice: Recruiter and HR Professional’s Perspectives on AI Use in Hiring. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Virtual Event, USA) (AIES ’21). Association for Computing Machinery, New York, NY, USA, 166–176. doi:10.1145/3461702.3462531
- [55]
- [56] LinkedIn. 2025. The Future of Recruiting 2025. Technical Report. https://business.linkedin.com/talent-solutions/resources/future-of-recruiting
- [57] Takuya Maeda and Anabel Quan-Haase. 2024. When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Association for Computing Machinery, New York, NY, USA, 1068–1077. doi:10.1145/3630106.3658956
- [58] Lars Malmqvist. 2025. Sycophancy in Large Language Models: Causes and Mitigations. In Intelligent Computing, Kohei Arai (Ed.). Springer Nature Switzerland, Cham, 61–74. doi:10.1007/978-3-031-92611-2_5
- [59]
- [60] Stephanie Mertens, Mario Herberz, Ulf J. J. Hahnel, and Tobias Brosch. 2022. The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains. Proceedings of the National Academy of Sciences 119, 1 (Jan. 2022), e2107346118. doi:10.1073/pnas.2107346118
- [61] Metaview. 2025. Metaview. https://www.metaview.ai/
- [62] Huy Nghiem, John Prindle, Jieyu Zhao, and Hal Daumé III. 2024. "You Gotta be a Doctor, Lin": An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations. arXiv preprint arXiv:2406.12232 (2024).
- [63] Bethany J Nichols, David S Pedulla, and Jeff T Sheng. 2025. More than a match: “Fit” as a tool in hiring decisions. Work and Occupations 52, 2 (2025), 175–203.
- [64] Phil Strazzulla. 2025. 22 Best Applicant Tracking Systems (ATS): Full Comparison 2025. https://www.selectsoftwarereviews.com/buyer-guide/applicant-tracking-systems
- [65] Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy. 2020. Mitigating bias in algorithmic hiring: evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain) (FAT* ’20). Association for Computing Machinery, New York, NY, USA, 469–481. doi:10.1145/3351095.3372828
- [66] LinkedIn Recruiter. 2025. LinkedIn Recruiter. https://business.linkedin.com/talent-solutions/recruiter
- [67] Lauren A Rivera. 2012. Hiring as cultural matching: The case of elite professional service firms. American Sociological Review 77, 6 (2012), 999–1022.
- [68] Javier Sánchez-Monedero, Lina Dencik, and Lilian Edwards. 2020. What does it mean to ‘solve’ the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 458–468.
- [69] Devansh Saxena and Shion Guha. 2024. Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-making. ACM Journal on Responsible Computing (March 2024). doi:10.1145/3616473
- [70] Trevor Schachner. Febr. Prompt Engineering for HR. https://www.shrm.org/labs/resources/prompt-engineering-for-hr
- [71] Ben Shneiderman, Catherine Plaisant, Maxine Cohen, Steven Jacobs, Niklas Elmqvist, and Nicholas Diakopoulos. 2016. Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed.). Pearson.
- [72] Mona Sloane, Emanuel Moss, and Rumman Chowdhury. 2022. A Silicon Valley love triangle: Hiring algorithms, pseudo-science, and the quest for auditability. Patterns 3, 2 (2022).
- [73] S Shyam Sundar. 2020. Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). Journal of Computer-Mediated Communication 25, 1 (Jan. 2020), 74–88. doi:10.1093/jcmc/zmz026
- [74] Helena Vasconcelos, Matthew Jörke, Madeleine Grunde-McLaughlin, Tobias Gerstenberg, Michael S Bernstein, and Ranjay Krishna. 2023. Explanations can reduce overreliance on AI systems during decision-making. Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (2023), 1–38.
- [75]
- [76] Willo. 2025. Willo. https://www.willo.video
- [77] Christo Wilson, Avijit Ghosh, Shan Jiang, Alan Mislove, Lewis Baker, Janelle Szary, Kelly Trindel, and Frida Polli. 2021. Building and auditing fair algorithms: A case study in candidate screening. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 666–677.
- [78] Kyra Wilson and Aylin Caliskan. 2024. Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7, 1 (Oct. 2024), 1578–1590. doi:10.1609/aies.v7i1.31748
- [79] Kyra Wilson, Mattea Sim, Anna-Maria Gueorguieva, and Aylin Caliskan. 2025. No Thoughts Just AI: Biased LLM Hiring Recommendations Alter Human Decision Making and Limit Human Autonomy. arXiv:2509.04404 [cs.CY] https://arxiv.org/abs/2509.04404
- [80] Workable. 2025. Workable. https://www.workable.com/
discussion (0)