pith. machine review for the scientific record.

arxiv: 2604.26851 · v1 · submitted 2026-04-29 · 💻 cs.CY · cs.AI


Resume-ing Control: (Mis)Perceptions of Agency Around GenAI Use in Recruiting Workflows


Pith reviewed 2026-05-07 11:00 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI

keywords generative AI · recruiting workflows · perceptions of agency · hiring decisions · deskilling · invisible influence · human oversight · high-stakes decisions

The pith

GenAI shapes the core information recruiters use to define jobs and judge candidates, even as they believe they retain final authority over hiring decisions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines how recruiting professionals experience agency and control when generative AI enters their workflows for high-stakes employment decisions. Interviews with 22 professionals show that recruiters view themselves as holding ultimate oversight across the pipeline, yet genAI quietly structures the underlying information, from job descriptions through interview evaluations. Adoption often stems from external pressures, such as directives from leadership or the need to match applicant AI use, and yields only modest efficiency improvements. These changes coincide with noticeable deskilling among recruiters, which undercuts effective human supervision. The findings matter because they question whether current practices preserve meaningful human judgment in hiring.

Core claim

Through interviews with 22 recruiting professionals, the authors find that recruiters believe they maintain final authority across the recruiting pipeline, yet genAI has become an invisible architect that shapes the foundational building blocks of information used for evaluation, from defining a job to determining good interview performances. The decision to adopt genAI is frequently outside recruiters' control, driven by calls from higher-ups, the need to counter applicant AI use, and productivity demands. Reported efficiency gains remain marginal and come at the cost of recruiter deskilling that jeopardizes meaningful oversight of decision-making.

What carries the argument

Interviews that expose misperceptions of agency, showing genAI influences decisions indirectly by redefining the information base rather than through direct overrides.

If this is right

  • Design of genAI tools for hiring must make their effects on foundational information visible to preserve perceived and actual control.
  • Deskilling among recruiters risks weakening the quality of human oversight in employment decisions over time.
  • Forced or pressured adoption of genAI can produce limited net benefits while eroding professional skills.
  • Perceptions of final authority do not align with the actual ways genAI structures evaluation inputs.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same pattern of indirect shaping could affect other decision domains where AI preprocesses data before human review, such as loan approvals or medical triage.
  • Recruiter training could incorporate exercises that help users detect and adjust for AI-defined criteria in their workflows.
  • Larger-scale surveys or log-based studies of actual tool usage could test whether self-reported control matches observed changes in outputs.
  • Regulations on AI in hiring might need requirements for transparency about how AI contributes to job definitions and performance standards.
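The log-based study suggested above implies a concrete check: correlate each recruiter's self-reported sense of control with an observed, artifact-level measure of genAI influence. A minimal sketch of that analysis, with an entirely invented `ai_text_share` measure and invented values (nothing here comes from the paper):

```python
def rank(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation between two equal-length sequences."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: self-reported control (1-5 Likert) vs. the fraction of
# each recruiter's final job-posting text traceable to genAI drafts in logs.
reported_control = [5, 4, 5, 3, 4, 2, 5, 3]
ai_text_share = [0.7, 0.6, 0.8, 0.4, 0.7, 0.2, 0.9, 0.3]
rho = spearman(reported_control, ai_text_share)
```

A strongly positive rank correlation on real data would mean the recruiters who feel most in control also ship the most AI-shaped text, which is the mismatch the paper describes; the sketch illustrates only the shape of the analysis, not any result.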

Load-bearing premise

Self-reported perceptions from interviews with 22 recruiting professionals accurately capture the real extent of genAI influence and extend beyond this specific group without major bias.

What would settle it

A controlled observation where recruiters' job postings, screening criteria, or final candidate selections change substantially once genAI-generated content is removed or its specific contributions are fully disclosed to them.
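The removal/disclosure experiment described here is essentially a within-subjects ablation. One way to score it, sketched with entirely invented overlap scores and an invented "no meaningful change" benchmark (none of this is from the paper), is a sign-flip permutation test on how far with-AI and without-AI criteria diverge:

```python
import random

# Hypothetical within-subjects ablation: each recruiter drafts screening
# criteria once with genAI assistance and once without. `overlap` is an
# invented 0-1 similarity between the two criteria lists per recruiter.
overlap = [0.42, 0.55, 0.38, 0.61, 0.47, 0.50, 0.35, 0.58, 0.44, 0.52]

def sign_flip_test(diffs, n_iter=10_000, seed=0):
    """Sign-flip permutation test: is the mean difference from a
    no-change baseline larger than chance sign assignments produce?"""
    rng = random.Random(seed)
    observed = sum(diffs) / len(diffs)
    hits = 0
    for _ in range(n_iter):
        perm = sum(d if rng.random() < 0.5 else -d for d in diffs) / len(diffs)
        if abs(perm) >= abs(observed):
            hits += 1
    return observed, hits / n_iter

# Differences from a benchmark of 0.9 overlap, i.e. "criteria barely change
# when genAI content is removed" (the benchmark itself is an assumption).
diffs = [o - 0.9 for o in overlap]
mean_diff, p = sign_flip_test(diffs)
```

A consistently negative `mean_diff` with small `p` would indicate the criteria change substantially once genAI content is removed, the outcome the settling test asks for.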

Original abstract

When generative AI (genAI) systems are used in high-stakes decision-making, its recommended role is to aid, rather than replace, human decision-making. However, there is little empirical exploration of how professionals making high-stakes decisions, such as those related to employment, perceive their agency and level of control when working with genAI systems. Through interviews with 22 recruiting professionals, we investigate how genAI subtly influences control over everyday workflows and even individual hiring decisions. Our findings highlight a pressing conflict: while recruiters believe they have final authority across the recruiting pipeline, genAI has become an invisible architect that shapes the foundational building blocks of information used for evaluation, from defining a job to determining good interview performances. The decision of whether or not to adopt was also often outside recruiters' control, with many feeling compelled to adopt genAI due to calls to integrate AI from higher-ups in their business, to combat applicant use of AI, and the individual need to boost productivity. Despite a seemingly seismic shift in how recruiting happens, participants only reported marginal efficiency gains. Such gains came at the high cost of recruiter deskilling, a trend that jeopardizes the meaningful oversight of decision-making. We conclude by discussing the implications of such findings for responsible and perceptible genAI use in hiring contexts.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, circularity check, and an axiom ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper reports on semi-structured interviews with 22 recruiting professionals and claims that, despite recruiters' belief in retaining final authority over hiring decisions, generative AI functions as an 'invisible architect' that shapes foundational elements of the recruiting pipeline (job definitions, candidate evaluation criteria, and interview performance standards). It further argues that adoption decisions are often externally driven, efficiency gains are marginal, and the process risks recruiter deskilling, with implications for responsible GenAI deployment in employment contexts.

Significance. If the reported perceptions are robustly documented, the work would add timely empirical insight into human-AI agency dynamics in high-stakes professional settings, extending HCI and CSCW literature on subtle influence and deskilling. The qualitative focus on recruiting workflows offers concrete examples that could inform design guidelines for perceptible AI tools, though the absence of triangulation limits claims about actual (versus perceived) influence.

major comments (3)
  1. [Methods] Methods section: The manuscript supplies no information on sampling/recruitment procedures, interview protocol or guide, transcription process, thematic analysis steps (e.g., inductive vs. deductive coding), number of coders, or any trustworthiness measures such as inter-rater reliability or member checking. Because the central claims about misperceptions of agency and GenAI's 'invisible architect' role rest exclusively on this thematic analysis, the lack of methodological transparency is load-bearing for evaluating the evidence quality.
  2. [Abstract and Findings] Abstract and Findings: The assertion that 'genAI has become an invisible architect that shapes the foundational building blocks of information used for evaluation' is presented as an empirical finding rather than a participant-reported perception. The supporting data consist only of self-reported interview excerpts; no workflow observations, artifact comparisons (e.g., AI-generated vs. final job descriptions), adoption metrics, or organizational records are described, leaving the factual framing unsupported by the evidence provided.
  3. [Results and Discussion] Results/Discussion: Claims of 'marginal efficiency gains' and 'deskilling' are derived from participant self-reports without quantification, pre/post comparisons, or external validation. This weakens the ability to assess the magnitude or generalizability of the reported trade-offs, which are central to the paper's argument about jeopardized oversight.
minor comments (2)
  1. [Abstract] The abstract and introduction could more explicitly flag that all claims derive from perceptions rather than observed practices, to avoid conflating reported feelings with documented influence.
  2. [Methods or Results] Participant demographics (e.g., experience level, organization size, geographic distribution) are not summarized in a table or early section, making it harder to evaluate the sample's representativeness.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback, which has identified key areas where the manuscript can be strengthened in terms of transparency and precise framing. We address each major comment below and outline the revisions we will make.

Point-by-point responses
  1. Referee: [Methods] Methods section: The manuscript supplies no information on sampling/recruitment procedures, interview protocol or guide, transcription process, thematic analysis steps (e.g., inductive vs. deductive coding), number of coders, or any trustworthiness measures such as inter-rater reliability or member checking. Because the central claims about misperceptions of agency and GenAI's 'invisible architect' role rest exclusively on this thematic analysis, the lack of methodological transparency is load-bearing for evaluating the evidence quality.

    Authors: We agree that the Methods section requires substantial expansion for full transparency and replicability. The study used purposive sampling to recruit 22 recruiting professionals via LinkedIn outreach and professional associations in the recruiting field. Interviews followed a semi-structured guide addressing daily workflows, genAI tool adoption and use cases, perceptions of control and agency, efficiency impacts, and deskilling concerns. All sessions were audio-recorded with consent and transcribed verbatim. Analysis employed inductive thematic analysis per Braun and Clarke's six-phase framework, with two authors independently coding transcripts, comparing codes, and resolving discrepancies through discussion. Trustworthiness was supported via reflexive memos, team debriefing, and an audit trail of coding decisions. We will revise the Methods section to detail all of these elements and include the interview guide as supplementary material. revision: yes

  2. Referee: [Abstract and Findings] Abstract and Findings: The assertion that 'genAI has become an invisible architect that shapes the foundational building blocks of information used for evaluation' is presented as an empirical finding rather than a participant-reported perception. The supporting data consist only of self-reported interview excerpts; no workflow observations, artifact comparisons (e.g., AI-generated vs. final job descriptions), adoption metrics, or organizational records are described, leaving the factual framing unsupported by the evidence provided.

    Authors: We appreciate the referee's emphasis on accurate epistemological framing. Our work is a qualitative interview study centered on professionals' perceptions and experiences; we do not claim direct observation of genAI's architectural influence or possess artifact comparisons or metrics. We will revise the abstract and findings sections to explicitly qualify the claims as derived from participant accounts and thematic analysis (e.g., 'participants described genAI as functioning as an invisible architect...' and 'our analysis indicates that recruiters perceive...'). We will also strengthen the limitations discussion to note the absence of observational or artifact-based validation and the interpretive nature of the evidence. revision: yes

  3. Referee: [Results and Discussion] Results/Discussion: Claims of 'marginal efficiency gains' and 'deskilling' are derived from participant self-reports without quantification, pre/post comparisons, or external validation. This weakens the ability to assess the magnitude or generalizability of the reported trade-offs, which are central to the paper's argument about jeopardized oversight.

    Authors: We concur that reliance on self-reports without quantification or external benchmarks limits assessment of magnitude and generalizability. In revision we will add more precise reporting of theme prevalence (e.g., noting that the majority of participants described efficiency gains as marginal and providing additional representative excerpts). We will expand the limitations section to explicitly discuss the lack of pre/post data, behavioral measures, or organizational records and the consequent interpretive scope. While we cannot introduce new quantitative data at this stage, we believe the consistent patterns across the sample still offer valuable insight into perceived trade-offs and will frame the argument accordingly as perception-based. revision: partial
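The two-coder procedure described in response 1 is usually accompanied by an agreement statistic; the paper's reference list cites Krippendorff's alpha, but the simpler Cohen's kappa conveys the same idea. A sketch with invented theme labels and codes (none of these labels or values come from the paper):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning one label per excerpt."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of excerpts with identical labels.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: product of each coder's marginal label frequencies.
    ca, cb = Counter(coder_a), Counter(coder_b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Invented codes for 12 interview excerpts, two independent coders.
a = ["control", "adoption", "deskilling", "control", "control", "adoption",
     "deskilling", "control", "adoption", "deskilling", "control", "adoption"]
b = ["control", "adoption", "deskilling", "control", "adoption", "adoption",
     "deskilling", "control", "adoption", "deskilling", "control", "control"]
kappa = cohens_kappa(a, b)
```

Disagreements that survive such a check are exactly the ones the rebuttal says were resolved through discussion; the statistic just makes the pre-discussion agreement level reportable.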

Circularity Check

0 steps flagged

No circularity in qualitative empirical study

Full rationale

The paper is an empirical qualitative study relying on semi-structured interviews with 22 recruiting professionals followed by thematic analysis. No equations, derivations, fitted parameters, predictions, or mathematical models are present that could reduce to their own inputs. Central claims are framed as participant perceptions and themes extracted from interview data rather than self-derived results. Any self-citations, if present, are not load-bearing; the evidence chain rests on external interview data and is evaluated against standard qualitative-research criteria.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim depends on the assumption that interview self-reports validly capture both perceptions and the actual shaping role of genAI, and that the small convenience sample supports broader conclusions about recruiting workflows.

axioms (1)
  • domain assumption Self-reported perceptions from interviews accurately reflect participants' actual experiences and the influence of GenAI on their workflows.
    Qualitative interview studies require this assumption to treat reported beliefs as evidence of real control dynamics.

pith-pipeline@v0.9.0 · 5535 in / 1145 out tokens · 68715 ms · 2026-05-07T11:00:22.206344+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

86 extracted references · 27 canonical work pages · 2 internal anchors

  1. [1]

    Haozhe An, Christabel Acquaye, Colin Wang, Zongxia Li, and Rachel Rudinger. 2024. Do Large Language Models Discriminate in Hiring Decisions on the Basis of Race, Ethnicity, and Gender? arXiv preprint arXiv:2406.10486 (2024)

  2. [2]

    Jiafu An, Difang Huang, Chen Lin, and Mingzhu Tai. 2025. Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation. PNAS Nexus 4, 3 (2025), pgaf089

  3. [3]

    Jacy Reese Anthis, Kristian Lum, Michael Ekstrand, Avi Feller, and Chenhao Tan. 2025. The impossibility of fair LLMs. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 105–120

  4. [4]

    Lena Armstrong, Abbey Liu, Stephen MacNeil, and Danaë Metaxa. 2024. The Silicon Ceiling: Auditing GPT’s Race and Gender Biases in Hiring. In Proceedings of the 4th ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (San Luis Potosi, Mexico) (EAAMO ’24). Association for Computing Machinery, New York, NY, USA, Article 2, 18 pages. ...

  5. [5]

    Ashby. 2025. Ashby. https://www.ashbyhq.com/

  6. [6]

    Colorado General Assembly. 2024. Consumer Protections for Artificial Intelligence (SB24-205)

  7. [7]

    Illinois General Assembly. 2024. Artificial Intelligence in Employment (Public Act 103-0804)

  8. [8]

    Avature. 2025. Applicant Tracking System. https://www.avature.net/applicant-tracking-system/

  9. [9]

    Alessio Azzutti. 2024. Artificial Intelligence and Machine Learning in Finance: Key Concepts, Applications, and Regulatory Considerations. In The Emerald Handbook of Fintech: Reshaping Finance. Emerald Publishing Limited, 315–339

  10. [10]

    Tina Behzad, Siddartha Devic, Vatsal Sharan, Aleksandra Korolova, and David Kempe. 2025. An External Fairness Evaluation of LinkedIn Talent Search. arXiv:2511.10752 [cs.CY] https://arxiv.org/abs/2511.10752

  11. [11]

    Rosanna Bellini, Emily Tseng, Noel Warford, Alaa Daffalla, Tara Matthews, Sunny Consolvo, Jill Palzkill Woelfer, Patrick Gage Kelley, Michelle L. Mazurek, Dana Cuomo, Nicola Dell, and Thomas Ristenpart. 2024. SoK: Safer Digital-Safety Research Involving At-Risk Users. In 2024 IEEE Symposium on Security and Privacy (SP), 635–654. doi:10.1109/SP54263.2024.00071

  12. [12]

    Marianne Bertrand and Sendhil Mullainathan. 2004. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review 94, 4 (2004), 991–1013

  13. [13]

    Miranda Bogen and Aaron Rieke. 2018. Help wanted: An examination of hiring algorithms, equity, and bias. Upturn, December 7 (2018)

  14. [14]

    Virginia Braun and Victoria Clarke. 2021. Thematic analysis: A practical guide (2021)

  15. [15]

    Breezy. 2025. Breezy. https://breezy.hr/

  16. [16]

    BrightHire. 2025. Equitable Hiring. https://brighthire.com/solutions/equitable-hiring/

  17. [17]

    BrightHire. 2025. BrightHire. https://brighthire.com/

  18. [18]

    Zana Buçinca, Maja Barbara Malaya, and Krzysztof Z. Gajos. 2021. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. Proc. ACM Hum.-Comput. Interact. 5, CSCW1, Article 188 (April 2021), 21 pages. doi:10.1145/3449287

  19. [19]

    Bullhorn. 2025. Bullhorn. https://www.bullhorn.com/

  20. [20]

    Shiye Cao and Chien-Ming Huang. 2022. Understanding User Reliance on AI in Assisted Decision-Making. Proc. ACM Hum.-Comput. Interact. 6, CSCW2, Article 471 (Nov. 2022), 23 pages. doi:10.1145/3555572

  21. [21]

    S Catherine, NV Suresh, T Mangaiyarkarasi, and Leena Jenefa. 2025. Unveiling the Enigma of Shadow: Ethical Difficulties in the Field of AI. In Navigating Data Science: Unleashing the Creative Potential of Artificial Intelligence. Emerald Publishing Limited, 57–67

  22. [22]

    Aditya Challapally, Chris Pease, Ramesh Raskar, and Pradyumna Chari. 2025. The GenAI Divide. Technical Report. MIT

  23. [23]

    Myra Cheng, Sunny Yu, Cinoo Lee, Pranav Khadpe, Lujain Ibrahim, and Dan Jurafsky. 2025. Social Sycophancy: A Broader Understanding of LLM Sycophancy. doi:10.48550/arXiv.2505.13995 arXiv:2505.13995 [cs]

  24. [24]

    Avishek Choudhury and Zaira Chaudhry. 2024. Large language models and user trust: consequence of self-referential learning loop and the deskilling of health care professionals. Journal of Medical Internet Research 26 (2024), e56764

  25. [25]

    Phoebe K Chua and Melissa Mazmanian. 2020. Are you one of us? Current hiring practices suggest the potential for class biases in large tech companies. Proceedings of the ACM on Human-Computer Interaction 4, CSCW2 (2020), 1–20

  26. [26]

    Sasha Costanza-Chock, Inioluwa Deborah Raji, and Joy Buolamwini. 2022. Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 1571–1583

  27. [27]

    Covey. 2024. Covey. https://covey.framer.website/

  28. [28]

    Maria De-Arteaga, Riccardo Fogliato, and Alexandra Chouldechova. 2020. A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. doi:10.1145/3313831.3376638

  29. [29]

    Online Etymology Dictionary. 2026. Origin and history of agency. https://www.etymonline.com/word/agency

  30. [30]

    Colin Duncan and Wendy Loretto. 2004. Never the right age? Gender and age-based discrimination in employment. Gender, Work & Organization 11, 1 (2004), 95–115

  31. [31]

    Eightfold. 2025. AI talent acquisition & recruiting platform. https://eightfold.ai/

  32. [32]

    Equal Employment Opportunity Commission; U.S. Department of Labor; U.S. Department of Justice; U.S. Office of Personnel Management; U.S. Department of the Treasury. 1978. Uniform Guidelines on Employee Selection Procedures. 43 Fed. Reg. 38295 (Aug. 25, 1978); codified at 29 C.F.R. Part 1607. Guidance establishing uniform federal standards for employment t...

  33. [33]

    European Parliament and Council of the European Union. 2016. General Data Protection Regulation. Official Journal of the European Union L 119 (2016), 1–88. Article 22

  34. [34]

    John C Flanagan. 1954. The Critical Incident Technique. (1954), 1–33

  35. [35]

    Gem. 2025. Gem. https://www.gem.com/

  36. [36]

    Kate Glazko, Yusuf Mohammed, Ben Kosa, Venkatesh Potluri, and Jennifer Mankoff. 2024. Identifying and Improving Disability Bias in GPT-Based Resume Screening. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (Rio de Janeiro, Brazil) (FAccT ’24). Association for Computing Machinery, New York, NY, USA, 687–700. doi:10.114...

  37. [37]

    ATLAS.ti Scientific Software Development GmbH. 2025. ATLAS.ti Mac. https://atlasti.com

  38. [38]

    Ben Green and Yiling Chen. 2019. Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 90–99

  39. [39]

    Ben Green and Yiling Chen. 2019. The Principles and Limits of Algorithm-in-the-Loop Decision Making. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 50 (Nov. 2019), 24 pages. doi:10.1145/3359152

  40. [40]

    Ben Green and Yiling Chen. 2021. Algorithmic Risk Assessments Can Alter Human Decision-Making Processes in High-Stakes Government Contexts. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 418 (Oct. 2021), 33 pages. doi:10.1145/3479562

  41. [41]

    Greenhouse. 2025. Greenhouse. https://www.greenhouse.com

  42. [42]

    Hirevue. 2026. Our Science. https://www.hirevue.com/our-science

  43. [43]

    Hirevue. 2025. HireVue. https://www.hirevue.com/

  44. [44]

    IBM. 2026. Generative AI for Human Resources (HR) Professionals. https://www.coursera.org/specializations/generative-ai-for-human-resources/

  45. [45]

    Lujain Ibrahim, Katherine M Collins, Sunnie SY Kim, Anka Reuel, Max Lamparth, Kevin Feng, Lama Ahmad, Prajna Soni, Alia El Kattan, Merlin Stein, et al. 2025. Measuring and mitigating overreliance is necessary for building human-compatible AI. arXiv preprint arXiv:2509.08010 (2025)

  46. [46]

    iCIMS. 2025. iCIMS. https://www.icims.com/

  47. [47]

    Indeed. 2025. Indeed. https://www.indeed.com/hire

  48. [48]

    Juicebox. 2025. Juicebox (PeopleGPT) - The Leading AI Recruiting Platform. https://juicebox.ai/

  49. [49]

    Patrick Kline, Evan K Rose, and Christopher R Walters. 2022. Systemic discrimination among large US employers. The Quarterly Journal of Economics 137, 4 (2022), 1963–2036

  50. [50]

    Klaus Krippendorff. 2011. Computing Krippendorff’s alpha-reliability. (2011)

  51. [51]

    Olya Kudina and Bas de Boer. 2025. Large language models, politics, and the functionalization of language. AI and Ethics 5, 3 (2025), 2367–2379

  52. [52]

    Alisa Küper and Nicole Krämer. 2025. Psychological traits and appropriate reliance: Factors shaping trust in AI. International Journal of Human–Computer Interaction 41, 7 (2025), 4115–4131

  53. [53]

    Vivian Lai, Chacha Chen, Q Vera Liao, Alison Smith-Renner, and Chenhao Tan. 2021. Towards a science of human-AI decision making: a survey of empirical studies. arXiv preprint arXiv:2112.11471 (2021)

  54. [54]

    Lan Li, Tina Lassiter, Joohee Oh, and Min Kyung Lee. 2021. Algorithmic Hiring in Practice: Recruiter and HR Professional’s Perspectives on AI Use in Hiring. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Virtual Event, USA) (AIES ’21). Association for Computing Machinery, New York, NY, USA, 166–176. doi:10.1145/3461702.3462531

  55. [55]

    Yingji Li, Mengnan Du, Rui Song, Xin Wang, and Ying Wang. 2023. A survey on fairness in large language models. arXiv preprint arXiv:2308.10149 (2023)

  56. [56]

    LinkedIn. 2025. The Future of Recruiting 2025. Technical Report. https://business.linkedin.com/talent-solutions/resources/future-of-recruiting

  57. [57]

    Takuya Maeda and Anabel Quan-Haase. 2024. When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Association for Computing Machinery, New York, NY, USA, 1068–1077. doi:10.1145/3630106.3658956

  58. [58]

    Lars Malmqvist. 2025. Sycophancy in Large Language Models: Causes and Mitigations. In Intelligent Computing, Kohei Arai (Ed.). Springer Nature Switzerland, Cham, 61–74. doi:10.1007/978-3-031-92611-2_5

  59. [59]

    Nestor Maslej, Loredana Fattorini, Raymond Perrault, Yolanda Gil, Vanessa Parli, Njenga Kariuki, Emily Capstick, Anka Reuel, Erik Brynjolfsson, John Etchemendy, et al. 2025. Artificial Intelligence Index Report 2025. arXiv preprint arXiv:2504.07139 (2025)

  60. [60]

    Stephanie Mertens, Mario Herberz, Ulf J. J. Hahnel, and Tobias Brosch. 2022. The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains. Proceedings of the National Academy of Sciences 119, 1 (Jan. 2022), e2107346118. doi:10.1073/pnas.2107346118

  61. [61]

    Metaview. 2025. Metaview. https://www.metaview.ai/

  62. [62]

    Huy Nghiem, John Prindle, Jieyu Zhao, and Hal Daumé III. 2024. “You Gotta be a Doctor, Lin”: An Investigation of Name-Based Bias of Large Language Models in Employment Recommendations. arXiv preprint arXiv:2406.12232 (2024)

  63. [63]

    Bethany J Nichols, David S Pedulla, and Jeff T Sheng. 2025. More than a match: “Fit” as a tool in hiring decisions. Work and Occupations 52, 2 (2025), 175–203

  64. [64]

    Phil Strazzulla. 2025. 22 Best Applicant Tracking Systems (ATS): Full Comparison 2025. https://www.selectsoftwarereviews.com/buyer-guide/applicant-tracking-systems

  65. [65]

    Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy. 2020. Mitigating bias in algorithmic hiring: evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Barcelona, Spain) (FAT* ’20). Association for Computing Machinery, New York, NY, USA, 469–481. doi:10.1145/3351095.3372828

  66. [66]

    LinkedIn Recruiter. 2025. LinkedIn Recruiter. https://business.linkedin.com/talent-solutions/recruiter

  67. [67]

    Lauren A Rivera. 2012. Hiring as cultural matching: The case of elite professional service firms. American Sociological Review 77, 6 (2012), 999–1022

  68. [68]

    Javier Sánchez-Monedero, Lina Dencik, and Lilian Edwards. 2020. What does it mean to ‘solve’ the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 458–468

  69. [69]

    Devansh Saxena and Shion Guha. 2024. Algorithmic Harms in Child Welfare: Uncertainties in Practice, Organization, and Street-level Decision-making. ACM Journal on Responsible Computing (March 2024). doi:10.1145/3616473

  70. [70]

    Trevor Schachner. Febr. Prompt Engineering for HR. https://www.shrm.org/labs/resources/prompt-engineering-for-hr

  71. [71]

    Ben Shneiderman, Catherine Plaisant, Maxine Cohen, Steven Jacobs, Niklas Elmqvist, and Nicholas Diakopoulos. 2016. Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed.). Pearson

  72. [72]

    Mona Sloane, Emanuel Moss, and Rumman Chowdhury. 2022. A Silicon Valley love triangle: Hiring algorithms, pseudo-science, and the quest for auditability. Patterns 3, 2 (2022)

  73. [73]

    S Shyam Sundar. 2020. Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII). Journal of Computer-Mediated Communication 25, 1 (Jan. 2020), 74–88. doi:10.1093/jcmc/zmz026

  74. [74]

    Helena Vasconcelos, Matthew Jörke, Madeleine Grunde-McLaughlin, Tobias Gerstenberg, Michael S Bernstein, and Ranjay Krishna. 2023. Explanations can reduce overreliance on AI systems during decision-making. Proceedings of the ACM on Human-Computer Interaction 7, CSCW1 (2023), 1–38

  75. [75]

    Bryan Wilder, Eric Horvitz, and Ece Kamar. 2020. Learning to complement humans. arXiv preprint arXiv:2005.00582 (2020)

  76. [76]

    Willo. 2025. Willo. https://www.willo.video

  77. [77]

    Christo Wilson, Avijit Ghosh, Shan Jiang, Alan Mislove, Lewis Baker, Janelle Szary, Kelly Trindel, and Frida Polli. 2021. Building and auditing fair algorithms: A case study in candidate screening. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 666–677

  78. [78]

    Kyra Wilson and Aylin Caliskan. 2024. Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society 7, 1 (Oct. 2024), 1578–1590. doi:10.1609/aies.v7i1.31748

  79. [79]

    Kyra Wilson, Mattea Sim, Anna-Maria Gueorguieva, and Aylin Caliskan. 2025. No Thoughts Just AI: Biased LLM Hiring Recommendations Alter Human Decision Making and Limit Human Autonomy. arXiv:2509.04404 [cs.CY] https://arxiv.org/abs/2509.04404

  80. [80]

    Workable. 2025. Workable. https://www.workable.com/

Showing first 80 references.