To Impress an Algorithm: Minoritized Applicants’ Perceptions of Fairness in AI Hiring Systems

Technology firms increasingly leverage artificial intelligence (AI) to enhance human decision-making in the rapidly evolving talent acquisition landscape. However, the ramifications of these advancements for workforce diversity remain a topic of intense debate. Drawing upon Gilliland’s procedural justice framework, we explore how IT job candidates interpret the fairness of AI-driven recruitment systems. Gilliland’s model posits that an organization’s adherence to specific fairness principles, such as honesty and the opportunity to perform, profoundly shapes candidates’ self-perceptions, their judgments of the recruitment system’s equity, and the overall attractiveness of the organization. Using focus groups and interviews, we engaged 47 women, Black, and Latinx or Hispanic undergraduates specializing in computer and information science to discern how gender, race, and ethnicity influence attitudes toward AI in hiring. Three procedural justice rules emerged as critical in shaping participants’ fairness perceptions: consistency of administration, job-relatedness, and selection information. Although discussed less frequently, the propriety of questions held significant resonance for Black and Latinx or Hispanic participants. Our study underscores the critical role of fairness evaluations for organizations, especially those striving to diversify the tech workforce.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024. In: I. Sserwanga et al. (Eds.): iConference 2024, LNCS 14597, pp. 43–61, 2024. https://doi.org/10.1007/978-3-031-57860-1_4

Metadata

Work Title To Impress an Algorithm: Minoritized Applicants’ Perceptions of Fairness in AI Hiring Systems
Access
Open Access
Creators
  1. Antonio E. Girona
  2. Lynette Yarger
Keywords
  1. Algorithms
  2. Hiring
  3. Bias
License In Copyright (Rights Reserved)
Work Type Article
Publisher
  1. iConference 2024
Publication Date April 10, 2024
Publisher Identifier (DOI)
  1. https://doi.org/10.1007/978-3-031-57860-1_4
Deposited June 11, 2024


Work History

Version 1
published

  • Created
  • Added Iconference_2024_Girona_Yarger.pdf
  • Added Creator Antonio E. Girona
  • Added Creator Lynette Yarger
  • Published
  • Updated
  • Updated Keywords (Algorithms, Hiring, Bias) and Publication Date (changed from 2024-02-16 to 2024-04-10)