
Hiring, Algorithms, and Choice: Why Interviews Still Matter

Published online by Cambridge University Press:  16 February 2023

Vikram R. Bhargava (George Washington University, USA)
Pooria Assadi (California State University, Sacramento, USA)

Abstract

Why do organizations conduct job interviews? The traditional view of interviewing holds that interviews are conducted, despite their steep costs, to predict a candidate’s future performance and fit. This view faces a twofold threat: the behavioral and algorithmic threats. Specifically, an overwhelming body of behavioral research suggests that we are bad at predicting performance and fit; furthermore, algorithms are already better than us at making these predictions in various domains. If the traditional view captures the whole story, then interviews seem to be a costly, archaic human resources procedure sustained by managerial overconfidence. However, building on T. M. Scanlon’s work, we offer the value of choice theory of interviewing and argue that interviews can be vindicated once we recognize that they generate commonly overlooked kinds of noninstrumental value. On our view, interviews should thus not be entirely replaced by algorithms, however sophisticated algorithms ultimately become at predicting performance and fit.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the Society for Business Ethics

Why do organizations conduct job interviews, despite the enormous costs associated with the interview process? At first blush, this does not seem like an especially challenging question. This is because a natural and seemingly obvious answer immediately comes to mind: interviews are for predicting a candidate’s future performance and fit with respect to the hiring organization’s requirements, values, and culture—that’s why organizations conduct interviews, despite their costs (Cappelli, 2019b; Elfenbein & Sterling, 2018; Muehlemann & Strupler Leiser, 2018; Society for Human Resource Management [SHRM], 2017). This is also the traditional view of interviewing espoused by managers and is how the nature and function of interviews are characterized in human resource management (HRM) textbooks (Dessler, 2020; Mathis, Jackson, Valentine, & Meglich, 2016; Mondy & Martocchio, 2016). [Footnote 1] Thus, although the costs may be undesirable, they are the price to pay, as it were, to be able to judge whether a candidate will match the needs of the role and the organization. [Footnote 2]

In this article, we suggest that the question of why to conduct interviews is a more difficult one than it first seems. The force of this question can be appreciated when juxtaposed against a twofold threat we argue the traditional view of interviewing faces. The first threat, the behavioral threat, holds that a large body of behavioral evidence suggests that we are poor predictors of future performance and bad judges of fit. This is for multiple reasons: the judgments of interviewers are riddled with biases, interviewers overestimate their assessment capacities, and organizations rarely assess the performance of candidates they might have passed on (in relation to the candidates they ultimately selected). As one HRM textbook notes, “traditionally, interviews have not been valid predictors of success on the job” (Mondy & Martocchio, 2016: 165). In short, those involved in making hiring decisions are demonstrably bad at predicting future performance and assessing fit.

The behavioral threat has led some management theorists to suggest abandoning interviews as traditionally conceived (i.e., unstructured interviews) and moving toward structured interviews. Yet structured interviews, too, face problems: they can collapse into unstructured interviews, or they can be bookended by unstructured exchanges before or after the official start of the interview, and in doing so they increase exposure to the behavioral threat. More fundamentally, the behavioral threat is simply pushed back one step, to the point at which one decides the structure of the interview. Thus, although structured interviews may be an improvement upon unstructured interviews, they, too, do not fare especially well with respect to the behavioral threat.

A defender of the traditional view might acknowledge the force of the behavioral threat yet still respond, “We have no better alternative!” But this argumentative maneuver is cut off by the second threat the traditional view faces: the algorithmic threat. Algorithms already have a superior track record to humans, even expert humans, in predicting the performance and fit of candidates in a number of domains. Indeed, 67 percent of the eighty-eight hundred recruiters and hiring managers surveyed globally by LinkedIn in 2018 noted that they use artificial intelligence (AI) tools to save time in sourcing and screening candidates (Ignatova & Reilly, 2018). So, where does this leave the practice of interviewing?

The behavioral and algorithmic threats, taken together, pose what we call the “interview puzzle” for the traditional view of interviewing. If the traditional view is correct about the nature and function of interviews—that interviews are for predicting the future performance and fit of a candidate with respect to the role’s and organization’s needs—then it seems as though the justification for the practice is undermined. Not only is interviewing costly (Cappelli, 2020; Muehlemann & Strupler Leiser, 2018; SHRM, 2017) but we are also bad at it, and we may have better alternatives for predicting performance and fit (i.e., algorithms). Continuing to interview, then, if it is only about predicting performance and fit, seems to be at best an anachronistic human resources (HR) practice or at worst blatant wastefulness sustained by irrational managerial overconfidence. For these reasons, we argue that the traditional view of interviewing must be reexamined.

If interviews were singularly a means of predicting performance and fit, as the traditional view posits, we maintain that the justification for interviews would be undermined. However, we argue that the antecedent of this conditional is false: interviews are not singularly a means of predicting performance and fit; rather, they are a much richer normative practice. In particular, we argue that interviews offer different kinds of value that have thus far been overlooked, and thus the practice can be worth preserving, despite the behavioral and algorithmic threats. Something of normative significance would be lost were we to abandon the practice of interviewing, and this must be accounted for in our understanding of the nature of interviews.

In other words, we dissolve the interview puzzle by arguing that although the behavioral and algorithmic threats are indeed concerning, they only threaten to undermine our interview practices if the traditional view of interviewing is the whole story. But we argue that the traditional view of interviewing accounts for only part of its function—the parts it overlooks are the other kinds of value that interviews create, and these other kinds of value do not succumb to the behavioral and algorithmic threats. By reframing how we understand the nature of interviews, we advance a broader, normative conception of interviewing that suggests that our ability to choose whom we relate to in the workplace is an important source of value and that our work lives may be worse off without the practice.

We proceed as follows. In section 1, we characterize the traditional view of interviewing and discuss the costs of interviewing that are exhaustively documented in the HRM literature. In section 2, we discuss the behavioral and algorithmic threats and argue that they together undermine the traditional view of interviewing and thus generate the interview puzzle. In section 3, we introduce our value of choice theory of interviewing, grounded in the work of the philosopher T. M. Scanlon (1988, 1998, 2013, 2019). We show how the interview puzzle can be dissolved once we grasp the inadequacy of the traditional view of interviewing: it fails to account for a broader range of contenders for the kinds of value that can be realized through interviewing. If the view we advance is correct, then the current understanding in HRM and management scholarship about the nature and function of interviews must be significantly expanded. In section 4, we offer several clarifications of our account and discuss some potential objections. In section 5, we discuss some new avenues of research that follow from our work. Finally, in section 6, we conclude.

1. THE TRADITIONAL VIEW OF INTERVIEWING

The traditional view of interviewing holds that interviews are one class of selection tools (among other tools, such as tests and background checks) that are useful for predicting a candidate’s performance and fit. [Footnote 3] In particular, a selection interview is defined as “a selection procedure designed to predict future job performance based on applicants’ oral responses to oral inquiries” (Dessler, 2020: 207) and is considered a tool for assessing a candidate’s knowledge, skills, abilities, and competencies in relation to what is required for the job (Dessler, 2020; Graves & Karren, 1996; McDaniel, Whetzel, Schmidt, & Maurer, 1994).

Interviews are widespread, in part, because of the belief that they are effective in simultaneously assessing candidates’ ability, motivation, personality, aptitude, person–job fit, and person–organization fit (Highhouse, 2008). Several common assumptions sustain this belief: that making accurate predictions about candidates’ future job performance is possible (Highhouse, 2008); that experience and intuition are necessary for effective hiring (Gigerenzer, 2007); that human beings (i.e., candidates) can be effectively evaluated only by equally sensitive complex beings (e.g., hiring managers), rather than by tests or algorithms (Highhouse, 2008); and that oral discussions with candidates can be revealing, as they allow for “reading between the lines” (Highhouse, 2008: 337).

Despite the widespread use of interviews, they are recognized to be a costly and time-consuming practice. The United States “fills a staggering 66 million jobs a year. Most of the $20 billion that companies spend on human resources vendors goes to hiring” (Cappelli, 2019b: 50). On average, employers in the United States spend approximately $4,000 per hire to fill non-executive-level positions and about $15,000 per hire to fill executive-level positions (SHRM, 2016, 2017), and a substantial portion of these costs is attributed to interviews. Outside the United States, employers report similar experiences. For example, in Switzerland, on average, employers spend as much as 16 weeks of wage payments to fill a skilled worker vacancy, of which 21 percent involves search costs, and roughly 50 percent of the search costs are direct interview costs (Muehlemann & Strupler Leiser, 2018). In addition, significant opportunity costs are associated with interviews for all parties involved (Muehlemann & Strupler Leiser, 2018).

With respect to the time spent on interviews, a recent talent acquisition benchmarking report indicates that, on average, US employers spend approximately eight days per job conducting interviews (SHRM, 2017). Employers outside the United States report similar experiences. For example, in Switzerland, employers spend, on average, approximately 8.5 hours on job interviews per candidate (Muehlemann & Strupler Leiser, 2018).

Of course, the costs of hiring and interviewing are not uniform. The costs vary depending on the skill requirements of the job (Muehlemann & Strupler Leiser, 2018) and the degree of labor market tightness (Davis, Faberman, & Haltiwanger, 2012; Pissarides, 2009; Rogerson & Shimer, 2011), among other factors. That said, these costs on average remain substantial and are increasing—employers today spend twice as much time on interviews as they did in 2009 (Cappelli, 2019b). [Footnote 4]

As costly and time consuming as interviews are, there are also difficulties associated with verifying whether they are worth these costs. Indeed, “only about a third of US companies report that they monitor whether their hiring practices lead to good employees; few of them do so carefully, and only a minority even track cost per hire and time to hire” (Cappelli, 2019b: 50). Even if it were not so difficult to assess whether interviews are worth the costs with respect to the end posited by the traditional view (i.e., predicting performance and fit), two additional threats remain.

2. THE INTERVIEW PUZZLE: THE BEHAVIORAL AND ALGORITHMIC THREATS

2.1 The Behavioral Threat

The traditional conception of interviews—as a means to predict a candidate’s performance and fit in relation to a vacancy—hinges on an important assumption, namely, that performance and fit can be effectively predicted through interviewing. However, a considerable body of knowledge from the social sciences challenges this basic assumption and chronicles the poor track record of predicting performance and fit through interviews (Bishop & Trout, 2005; Bohnet, 2016; Chamorro-Premuzic & Akhtar, 2019; McCarthy, Van Iddekinge, & Campion, 2010; Rivera, 2012). Specifically, although there is empirical evidence highlighting the outsized role interviews play in the hiring process (Billsberry, 2007), interview-based hiring decisions have been found to account for only up to 10 percent of the variation in job performance (Conway, Jako, & Goodman, 1995). Additionally, biases pervade the process of predicting performance and fit through interviews, in both their unstructured and structured formats (Huffcutt, Roth, & McDaniel, 1996; McDaniel et al., 1994).

2.1.1 Unstructured Interviews

Unstructured interviews do not have a fixed format or a fixed set of questions, nor do they involve a fixed process for assessing the given responses (Schmidt & Hunter, 1998). During unstructured interviews, both the interviewer and the candidate investigate what seems most relevant at the time (Bohnet, 2016). This process often produces an overall rating for each applicant “based on summary impressions and judgments” (Schmidt & Hunter, 1998: 267). Unstructured interviews are often assumed to be effective in concurrently assessing a range of dimensions associated with predicting performance and person–organization fit (Highhouse, 2008).

However, recent research shows that unstructured interviews may not in fact aid hiring decisions. This research maintains that unstructured interviews are riddled with biases and are often swayed by the whims of the interviewers (Chamorro-Premuzic & Akhtar, 2019). Specifically, this research suggests that unstructured interviews are ineffective because interviewers tend to overlook the limits of their knowledge (Kausel, Culbertson, & Madrid, 2016), “decide on the fly” what questions to ask of which candidates and how to interpret responses (Cappelli, 2019b: 50), place disproportionate emphasis on a few pieces of information (Dawes, 2001), and confirm their own existing preferences (Chamorro-Premuzic & Akhtar, 2019). Subsequently, they become increasingly confident in the accuracy of their decisions, even when irrelevant information is introduced (Bohnet, 2016; Dawes, 2001). [Footnote 5] One reason for interviewers’ overconfidence regarding their predictive abilities is that they often cannot ascertain whether, absent interviews, their predictions would have turned out better or worse, and they generally lack a large enough sample to draw any statistically valid inferences (Bishop & Trout, 2005).

While managers weight a given trait or ability more heavily when it is evaluated by unstructured interviews rather than by alternative methods (e.g., paper-and-pencil tests) (Lievens, Highhouse, & DeCorte, 2005), a long-standing body of empirical evidence shows that unstructured interviews are unhelpful for selection decisions. For example, in the context of medical school applications, DeVaul, Jervey, Chappell, Caver, Short, and O’Keefe (1987) compare students who were initially accepted to medical school with those who were rejected and find that only 28 percent of the difference between these groups is related to academic and demographic factors, while 72 percent is related to the admissions committee’s preferences developed through interviews. They report that when it comes to attrition and clinical performance during medical school and a subsequent year of postgraduate training, there are no significant differences between the accepted and the rejected groups, suggesting that interviews in this context are unhelpful to the decision-making process. In a similar fashion, Milstein, Wilkinson, Burrow, and Kessen (1981: 77) compare the performance of “a group of 24 applicants who were interviewed and accepted at the Yale University School of Medicine but went to other medical schools … with a group of 27 applicants who attended the same schools but had been rejected at Yale following an interview and committee deliberation.” In this context, too, the researchers find no statistically significant relationship between admission decisions and performance, again pointing to the inefficacy of interviews in aiding the achievement of the decision-making ends. [Footnote 6]

Medical school admissions decisions are, of course, not hiring decisions, but similar results are seen in hiring contexts. In a study of the hiring practices at elite professional services firms, Rivera (2012) finds that employers often seek candidates who enjoy similar leisure pursuits and have shared experiences and self-presentation styles. In doing so, Rivera shows that unstructured interviews may be less about assessing knowledge, skills, and abilities and more about exercising biases through replicating ourselves, including, but not limited to, our culture, gender, and ethnicity, in hiring decisions. Finally, through a meta-analysis, Schmidt and Hunter (1998) conclude that unstructured interviews are ineffective at predicting the performance of future employees.

Not only do we know that unstructured interviews are unhelpful in hiring decisions but there is also some empirical evidence that unstructured interviews reliably undermine those decisions (Bishop & Trout, 2005; DeVaul et al., 1987; Eysenck, 1954; Kausel et al., 2016; Milstein et al., 1981; Oskamp, 1965; Wiesner & Cronshaw, 1988). For example, as far back as the middle of the past century, in a large-scale empirical study, Bloom and Brundage (1947) found that the predictive gain from adding an interviewer’s assessment of a candidate’s experience, interest, and personality may well be negative. They specifically report that predictions based on test scores and interviewing were 30 percent worse than predictions based on test scores alone. More recently, Behroozi, Shirolkar, Barik, and Parnin (2020) have shown that even when tests are conducted in interview formats, such as the “whiteboard technical interviews” common in software engineering, the mechanics and pressure of the interview context reduce the efficacy of the technical tests. This effect is heightened especially among minorities and other underrepresented groups (Munk, 2021). Other recent research reports similar findings: for example, research on human judgment documents that when decision makers (e.g., hiring managers, admissions officers, parole boards) judge candidates based on a dossier and an unstructured interview, their decisions tend to be worse than decisions based on the dossier alone (Bishop & Trout, 2005). In a similar fashion, Dana, Dawes, and Peterson (2013) show that adding an unstructured interview to diagnostic information when making screening decisions yields less accurate outcomes than not using an unstructured interview at all. In this case, even though the decision makers may sense that they are extracting useful information from unstructured interviews, in reality, that information is not useful (Dana et al., 2013).

2.1.2 Structured Interviews

Unlike the unstructured version, a structured interview involves a formal process that more systematically considers “rapport building, question sophistication, question consistency, probing, note taking, use of a panel of interviewers, and standardized evaluation” (Roulin, Bourdage, & Wingate, 2019: 37) in hiring decisions. In this interview format, to predict good hires, an expert interviewer systematically and consistently poses the same set of validated questions about past performance to all candidates and immediately scores each answer against a set of predetermined criteria relevant to the tasks of the job (Cappelli, 2019b).
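
To make the contrast with unstructured formats concrete, the following is a minimal sketch of how such a predetermined rubric and scoring procedure might be represented. The questions, behavioral anchors, and equal weighting are hypothetical illustrations, not a validated instrument from the literature.

```python
# A sketch of a structured-interview scoring rubric. All questions and
# anchors below are hypothetical illustrations, not validated items.
from statistics import mean

RUBRIC = {
    "Tell me about a time you missed a deadline. What did you do?": {
        1: "Blames others; no corrective action",
        3: "Acknowledges the problem; partial corrective action",
        5: "Owns the problem; describes a concrete process change",
    },
    "Describe a conflict with a colleague and how it was resolved.": {
        1: "Avoided or escalated; left unresolved",
        3: "Resolved, but only with a manager's intervention",
        5: "Resolved directly; working relationship improved",
    },
}

def score_candidate(ratings: dict[str, int]) -> float:
    """Average the 1-5 ratings. Every candidate answers the same questions
    and is scored against the same anchors -- the point of the structure."""
    if set(ratings) != set(RUBRIC):
        raise ValueError("all candidates must be scored on identical questions")
    return mean(ratings.values())

print(score_candidate({q: 4 for q in RUBRIC}))  # -> 4
```

Notably, nothing in such a rubric prevents an interviewer from deviating from it in practice, which is precisely the collapse into less structured formats discussed below.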

Although structured interviews are designed to standardize the hiring process and minimize subjectivity and bias (Bohnet, 2016; Reskin & McBrier, 2000), in effect they are not much more successful than unstructured interviews in aiding hiring decisions, for at least three reasons. First, even though structured interviews, in theory, may be less biased [Footnote 7] and better predictors of future job performance [Footnote 8] than their unstructured counterparts, they are not widely adopted in practice (König, Klehe, Berchtold, & Kleinmann, 2010; Roulin et al., 2019). The resistance to structuring interviews (Lievens et al., 2005; van der Zee, Bakker, & Bakker, 2002) is driven by interviewers’ belief that a candidate’s character is “far too complex to be assessed by scores, ratings, and formulas” (Highhouse, 2008: 339) that are predetermined in a structured format.

Second, even in cases when structured interviews are accepted, they are not well implemented, for various reasons. For example, structured interviews tend to be more costly to construct (Schmidt & Hunter, 1998), in part because of the difficulties in designing and validating standardized questions and evaluation criteria (Bohnet, 2016; Roulin et al., 2019). Also, in reality, we rarely see structured interviews conducted by trained and experienced interviewers who manage to avoid having their idiosyncratic personalities distort the process (Roulin et al., 2019). Even when structured interviews are conducted by trained and experienced interviewers, the process sometimes deviates to a semistructured or unstructured format. For instance, in conforming to a predetermined set of questions, the flow of conversation in a structured interview might feel stilted, awkward, or uncomfortable for both the interviewer and the candidate, thereby inadvertently shifting the interview process to a less structured format (Bohnet, 2016).

Third, even when structured interviews are conducted by trained and experienced interviewers and the process does not deviate to an unstructured format, empirical evidence shows that structured interviews may not be systematic and free of bias, because interviewers may use them to confirm their preexisting judgments rather than to evaluate the candidates—that is, a potential self-fulfilling prophecy (Dougherty, Turban, & Callender, 1994). On the candidates’ side, there is also much room for introducing bias. For example, Stevens and Kristof (1995) show that applicants engage in significant impression management, even in structured interviews, thereby undermining the decision-making process. Furthermore, even when structured interviews are implemented properly, these issues and biases may not be eliminated: they may simply be shifted to the previous step of designing the interview and deciding its structure. Therefore, not only are structured interviews rare but, even when they are used and properly implemented, they are afflicted with issues that complicate the evaluation of performance and fit. It is not surprising, then, that Cappelli (2019b: 56) argues that a structured interview is the “most difficult technique to get right.”

Although research shows that interviews can undermine the aims of the hiring process, interviews have remained a popular norm for employee selection for more than a hundred years (Buckley, Norris, & Wiese, 2000; van der Zee et al., 2002). They have remained popular not necessarily because managers are unaware of their inefficacy; in fact, Rynes, Colbert, and Brown (2002) report that HR professionals appreciate the limitations of interviews. Still, hiring managers remain reluctant to outsource their judgment (Bohnet, 2016).

2.2 The Algorithmic Threat

Interviews, in both their unstructured and structured formats, are in practice (if not by design) ineffective at assessing fit or predicting future performance, and they create a significant opportunity for bias in hiring decisions (Chamorro-Premuzic & Akhtar, 2019; Rivera, 2012). However, proponents of the traditional view of interviewing might respond that there are no alternatives. This assertion falls short in the face of the second threat the traditional view faces: the algorithmic threat. Algorithms, even simple ones, are already no worse than humans, even expert humans (and are at times superior), at predicting the performance and fit of candidates in a number of domains (Bishop & Trout, 2005; Cappelli, 2020).

Algorithms can be an effective method for predicting future performance and fit primarily because the hiring challenge is, at its core, a prediction problem, and statistical algorithms are designed to take on and address prediction problems (Danieli, Hillis, & Luca, 2016). For example, a simple statistical prediction rule (SPR) in a linear model is designed to predict a desired property P (e.g., future performance) based on a series of cues (e.g., education, experience, and past performance) such that P = w₁(c₁) + w₂(c₂) + w₃(c₃) + … + wₙ(cₙ), where cₙ and wₙ reflect the value and weight [Footnote 9] of the nth cue (Bishop & Trout, 2005). Research shows that even this simple statistical algorithm is, at least in overall effect, better than humans in hiring predictions, in part because such a hiring algorithm is more consistent than humans (and cheaper, to boot). And, in practice, this algorithm can be better scaled and automated in a consistent way (Chamorro-Premuzic & Akhtar, 2019). Also, the increasing availability of good data, advances in statistical algorithms, and new capacities to analyze large-scale data have made this algorithmic route even more promising (Cappelli, 2020).
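
As an illustration, here is a minimal sketch of such a linear SPR in Python. The cue names and weights are hypothetical placeholders (in practice, the weights would be estimated, e.g., by regressing past hires’ performance on their cue values), not values drawn from Bishop and Trout.

```python
# A minimal sketch of a linear statistical prediction rule (SPR):
# P = w1*c1 + w2*c2 + ... + wn*cn. Cue names and weights are hypothetical.

CUES = ["years_education", "years_experience", "past_performance_rating"]
WEIGHTS = [0.2, 0.3, 0.5]  # hypothetical weights, e.g., fit on data about past hires

def predict_performance(candidate: dict[str, float]) -> float:
    """Weighted sum of cue values: the same rule applied the same way to
    every candidate, which is the source of the SPR's consistency."""
    return sum(w * candidate[c] for w, c in zip(WEIGHTS, CUES))

applicants = {
    "alice": {"years_education": 16, "years_experience": 5, "past_performance_rating": 4.2},
    "bob": {"years_education": 18, "years_experience": 2, "past_performance_rating": 3.8},
}

# Rank applicants by predicted performance P, highest first.
for name, cues in sorted(applicants.items(),
                         key=lambda kv: predict_performance(kv[1]), reverse=True):
    print(name, round(predict_performance(cues), 2))
```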

Indeed, more advanced statistical hiring algorithms based on machine learning can be better than humans at predicting performance and fit because they are specifically designed to “adaptively use the data to decide how to trade off bias and variance to maximize out-of-sample prediction accuracy” (Chalfin et al., 2016: 124). In this respect, for example, Cowgill (2019) finds that more advanced statistical hiring algorithms based on machine learning better predict job performance than humans because they lack some of the biases from which humans suffer. Also, Chalfin et al. (2016) find that, compared to the existing rank-ordering police hiring systems, machine learning algorithms that use sociodemographic attributes; prior behavior, including prior arrest records; and polygraph results would yield a 4.8 percent reduction in police shootings and physical and verbal abuse complaints.
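
What “trading off bias and variance to maximize out-of-sample prediction accuracy” amounts to can be made concrete with a toy pipeline on synthetic data. The features, model class, and hyperparameter grid below are our own assumptions for illustration, not a description of the systems in the cited studies: a regularized model’s regularization strength is chosen by cross-validation precisely to maximize held-out accuracy.

```python
# A sketch of a cross-validated "hiring" predictor on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # hypothetical cues: education, experience, test score
y = X @ np.array([0.2, 0.3, 0.5]) + rng.normal(scale=0.5, size=500)  # "performance"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The regularization strength alpha is the bias-variance dial: cross-validation
# picks the value that maximizes out-of-sample prediction accuracy.
model = GridSearchCV(Ridge(), {"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
model.fit(X_train, y_train)

print("chosen alpha:", model.best_params_["alpha"])
print("out-of-sample R^2:", round(model.score(X_test, y_test), 3))
```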

In addition to the hiring domain, advanced statistical algorithms based on machine learning have been shown to be more effective than humans in a broader set of screening decisions in which “a decision-maker must select one or more people from a larger pool on the basis of a prediction of an unknown outcome of interest” (Rambachan, Kleinberg, Ludwig, & Mullainathan, 2020: 91). For example, Kleinberg, Lakkaraju, Leskovec, Ludwig, and Mullainathan (2018) show that machine learning algorithms exhibit better performance than judges in bail decisions because they incorporate fewer irrelevant perceptions of the defendant (e.g., demeanor) into their decisions. Also, Dobbie, Liberman, Paravisini, and Pathania (2018) illustrate that machine learning algorithms minimize bias against certain types of applicants (e.g., immigrants). Other related studies in lending find that machine learning algorithms are better at predicting default (Fuster, Plosser, Schnabl, & Vickery, 2019) and are less discriminatory than face-to-face lenders (Bartlett, Morse, Stanton, & Wallace, 2019).

Critics of algorithmic decision-making in hiring (and elsewhere) raise at least two objections. The first objection pertains to the seeming ability of humans to pick up on soft, qualitative, or noncodifiable cues during interviews that are difficult to capture in algorithms (Gigerenzer, 2007; Highhouse, 2008). However, this is precisely where the research shows a high likelihood and magnitude of bias clouding human decision-making. Indeed, the “speculation that humans armed with ‘extra’ qualitative evidence can outperform SPRs has been tested and has failed repeatedly” (Bishop & Trout, 2005: 33). Even if we grant that humans are skilled at inferring relevant information from subtle personality and intellect cues, as some research suggests (Gigerenzer, 2007), statistical algorithms often simply draw on the same cues. While many algorithms tend to draw on codifiable cues (rather than bias-prone, noncodifiable cues), in contrast to humans, algorithms are more efficient and consistent, and they need not be managed with respect to their sense of self-esteem or self-importance (Chamorro-Premuzic & Akhtar, 2019).

The second objection to the algorithmic method of predicting future performance and assessing fit concerns fairness (Cappelli, Tambe, & Yakubovich, 2020; Newman, Fast, & Harmon, 2020; Raisch & Krakowski, 2021; Tambe, Cappelli, & Yakubovich, 2019). In this respect, although legitimate fairness concerns are associated with algorithmic predictions of human performance, research has shown that algorithms are often no worse than the alternative means of hiring, including using human judgment through interviews. For example, using data on teacher and police characteristics, Chalfin et al. (2016) show that statistical algorithms predict future performance better than humans. Though there are indeed fairness concerns with algorithms, these concerns are prevalent in human decision-making too (Danieli et al., 2016). Specifically, Danieli et al. grant the prevalence of fairness issues in algorithms but also highlight several comparably concerning psychological biases in human judgment. For example, in hiring contexts, humans engage in bracketing (i.e., overemphasizing subsets of choices over the universe of all options), that is, choosing the top candidate interviewed on a given day instead of the top candidate interviewed throughout the search process (Danieli et al., 2016). [Footnote 10] In addition, Li (2020) summarizes research showing how human judgment in hiring may discriminate based on race, religion, national origin, sex, sexual orientation, and age. Given this research, Cappelli (2020) warns us not to romanticize human judgment and to recognize “how disorganized most of our people management practices are now.” He notes, “At least algorithms treat everyone with the same attributes equally, albeit not necessarily fairly.”

Indeed, a significant portion of the algorithmic fairness issues arguably stems from human actions, including the lack of diversity among the humans who designed the algorithms (Li, 2020) and the types of data with which humans trained them (Cappelli, 2020; De Cremer & De Schutter, 2021). For example, Dastin (2018) reports that Amazon’s recruiting algorithm was biased against women because it was trained to assess candidates by discovering patterns in résumés submitted over a ten-year time frame—most of those résumés were submitted by men (see also Cappelli, 2019a). [Footnote 11]

As it turns out, recent research challenges the common assumption that biased data in the training stage of machine learning will lead to undesirable social outcomes. Specifically, Rambachan and Roth (2020) empirically examine the “bias in, bias out” assumption and highlight the conditions under which machine learning may reverse bias and ultimately prioritize groups that humans may have marginalized. More specifically, through mathematical modeling and simulation, they show that, unlike the bias generated by measurement errors caused by mislabeled data, the bias generated by sample selection may be flipped by machine learning such that the machine learning outcomes would favor groups that encountered discrimination in the training data. [Footnote 12] Rambachan and Roth argue that the bias reversal occurs because the members of groups underrepresented in the original training data (for example, women) who make the cut are typically statistically outstanding performers. As such, in subsequent rounds of learning, the algorithm is fed data in which women are overly positively correlated with being outstanding performers. Rambachan and Roth show that this can ultimately reverse the underrepresentation in the data that is due to human decision makers.
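
The sample-selection mechanism behind this bias reversal can be illustrated with a small simulation. The sketch below is our own toy reconstruction of the intuition, not Rambachan and Roth’s model; the group labels, cutoffs, and distributions are hypothetical.

```python
# A toy simulation of sample-selection bias reversal: a disfavored group faces
# a stricter human screening cutoff, so only its statistically outstanding
# members enter the training data, and a model fit to that data then scores
# the disfavored group *higher*. All parameters are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)    # 1 = group the human screeners discriminate against
true_perf = rng.normal(size=n)   # identical performance distributions by construction

# Biased human screening: the disfavored group faces a much stricter cutoff.
cutoff = np.where(group == 1, 1.5, 0.0)
hired = true_perf > cutoff

# The algorithm sees only hired workers and their observed performance.
model = LinearRegression().fit(group[hired].reshape(-1, 1), true_perf[hired])

# The learned group coefficient is positive: the model favors the group
# that human screeners discriminated against in the training data.
print("coefficient on disfavored group:", round(model.coef_[0], 2))
```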

We have thus far considered two objections to using algorithms instead of interviews, and we’ve suggested that these objections fall short. Yet one might correctly point out that many more objections to algorithms have recently appeared in the algorithmic ethics literature (Birhane, 2021; Hunkenschroer & Luetge, 2022; Martin, 2019; Müller, 2021; Tasioulas, 2019; Tsamados et al., 2022). For example, there are concerns related to algorithms systemically excluding certain individuals (Creel & Hellman, 2022), eliciting organizational monocultures (Kleinberg & Raghavan, 2021), or disproportionately harming marginalized groups (Birhane, 2021); worries related to the legitimacy and trustworthiness of algorithms (Benn & Lazar, 2022; Martin & Waldman, 2022; Tong, Jia, Luo, & Fang, 2021) and the lack of explainability in the case of opaque algorithms (Anthony, 2021; Kim & Routledge, 2022; Lu, Lee, Kim, & Danks, 2020; Rahman, 2021; Rudin, 2019; Selbst & Powles, 2017; Véliz, Prunkl, Phillips-Brown, & Lechterman, 2021; Wachter, Mittelstadt, & Floridi, 2017); [Footnote 13] issues related to whether algorithms preclude us from taking people seriously as individuals (Lippert-Rasmussen, 2011; Susser, 2021); and concerns related to whether automated systems create responsibility or accountability gaps (Bhargava & Velasquez, 2019; Danaher, 2016; Himmelreich, 2019; Nyholm, 2018; Roff, 2013; Simpson & Müller, 2016; Sparrow, 2007; Tigard, 2021), among other concerns (Bedi, 2021; Tasioulas, 2019; Tsamados et al., 2022; Yam & Skorburg, 2021). In short, there’s now a rich literature involving a wide range of concerns related to adopting algorithms in lieu of human decision makers (Hunkenschroer & Luetge, 2022; Martin, 2022; Müller, 2021; Tsamados et al., 2022). And the thought might be put more forcefully: insofar as these two aforementioned concerns could be objections to using algorithms (and in turn objections to the force of the interview puzzle), many more objections—like the ones articulated in the algorithmic ethics literature—may succeed. [Footnote 14]

We grant the force of this concern. Taken together, the arguments developed in the algorithmic ethics literature constitute a powerful case against using algorithms in lieu of human decision makers. Furthermore, to the extent that these objections to algorithms succeed, they would weaken the strength of the algorithmic threat (and, correspondingly, the force of the interview puzzle). However, for our ultimate aims, this does not concern us, because our broader project is not to defend algorithms—we do so in the context of the interview puzzle strictly for the sake of argument. Our ultimate aim is instead to argue that even if these wide-ranging objections to the use of algorithms fall short, there nevertheless remain independent moral considerations that tell against abdicating hiring choices to an algorithm. Crucially, the kinds of moral considerations on which we draw do not depend on certain bad outcomes that may arise due to algorithms. This is to say, even if algorithms were not systemically excluding individuals in arbitrary ways (Creel & Hellman, 2022), did not result in an organizational monoculture (Kleinberg & Raghavan, 2021), did not create responsibility gaps (Himmelreich, 2019; Johnson, 2015; Martin, 2019; Matthias, 2004; Roff, 2013; Sparrow, 2007), or did not elicit other morally untoward outcomes, there would nevertheless remain an independent moral concern about firms abdicating their choices in the hiring domain to an algorithm. So, the argument we will now provide might be understood as offering further, independent grounds to resist using algorithms (at least in the context of hiring). Moreover, the arguments we offer do not hinge on certain bad outcomes arising from the use of algorithms; as such, the force of our arguments remains even if the bad outcomes associated with algorithms are ultimately engineered away.

2.3 Taking Stock of the Interview Puzzle

The behavioral and algorithmic threats present a significant twofold challenge and raise the interview puzzle for proponents of the traditional view of interviewing. To be sure, this does not mean that the traditional view is not, in part, correct. Finding high-performing candidates who fit the job requirements, as the traditional view posits, is plausibly an important end for firms to pursue. However, the behavioral and algorithmic threats, taken in conjunction, challenge whether interviews are a suitable means toward that end. Crucially, if interviews are only about this end, then the interview puzzle remains and threatens to undermine our justification for conducting interviews. We will now argue, however, that there is more to be said on behalf of interviews than the traditional view accounts for.

Before proceeding, we offer a brief clarification about an assumption we make in the next section: we treat the interview process as equivalent to a hiring process with human decision makers. But, strictly speaking, this assumption is not always correct. Hiring processes with human decision makers can occur without interviews, because interviews are not the only available basis for selection; for example, tests or work samples might instead be used. However, tests and work samples are apt for a much narrower range of positions. Moreover, as HRM textbooks note, “interviews are one of the most common methods used for selection” (Mathis et al., 2016: 259), and “interviews continue to be the primary method companies use to evaluate applicants” (Mondy & Martocchio, 2016: 165). In fact, “while not all employers use tests, it would be very unusual for a manager not to interview a prospective employee” (Dessler, 2020: 192). For these reasons, we use “the interview process” interchangeably with “hiring process conducted by human decision makers.” At the end of section 4, we briefly discuss the implications of relaxing this assumption.

3. THE VALUE OF CHOICE THEORY OF INTERVIEWS

The interview puzzle can be dissolved once we recognize that interviews play additional roles beyond predicting performance and fit. For this reason, even if the behavioral and algorithmic threats undermine the plausibility of interviews serving as a means toward the end of securing an employee who fits the role’s and organization’s needs, we need not conclude that the practice of interviewing is unjustified or something that ought to be abandoned: this is because interviews are a source of other kinds of value and are not exclusively a means for predicting performance and fit.

To be clear, on the view we develop, we do not challenge the importance of the end posited by the traditional view (i.e., the end of hiring an employee who fits the role’s and organization’s needs); rather, we argue that additional kinds of value are implicated in the practice of interviewing. Thus we offer a pluralistic theory of interviewing and argue that once we recognize the wider range of contenders for the kinds of value generated through interviewing, we can see that abandoning interviews would risk the loss of certain important kinds of value.

To understand the additional kinds of value implicated in the practice of interviews, we draw on philosopher T. M. Scanlon’s (1988, 1998) account of the value of choice. Scanlon’s (2013: 12) account “begins from the fact that people often have good reason to want what happens in their lives to depend on the choices they make, that is, on how they respond when presented with the alternatives.” His work on the value of choice has been significant for debates and fields of inquiry as wide-ranging as paternalism (Cornell, 2015), bioethics (Walker, 2022), the freedom and moral responsibility debate (Duus-Otterström, 2011; Fischer, 2008), and contract theory (Dagan, 2019).

On the value of choice account, at least three different kinds of value can be generated when making a choice: instrumental, representative, and symbolic. The first is the instrumental value of a choice: if I am the one who makes the choice, I might make it more likely that I realize some end than were I not given the opportunity to choose. So, for example, if I’m a prospective car buyer and am given the choice over what color I want for my car, my making this choice realizes a certain instrumental value: of making it more likely that the car will satisfy my aesthetic preferences (in contrast to, for example, were the dealership to choose the color of the car on my behalf or were the color to be selected using a random color generator). So, the instrumental value in a choice is realized when it makes it more likely that a desired end of a prospective decision maker is achieved.

The second is the representative value of choice: this is the value that is generated when my making the choice alters the meaning of the outcome of the choice—crucially, this value is realized even if my making the choice is instrumentally worse at achieving certain ends than an alternative method of decision-making (e.g., an algorithm, a coin flip, deference to an expert). For example, it’s important that I am the one who chooses a gift for my partner, not because I’m more likely to satisfy their preferences than they are (were they to choose the gift themselves), but rather because there is value in the fact that I was the one who chose it; in choosing the gift, I expressed myself (e.g., my desires, beliefs, and attitudes toward my significant other) through that act. More simply, representative value relates to how the outcome of the choice takes on a different meaning in virtue of who makes the choice.

The third is the symbolic value of choice: this is the value associated with certain choices reflecting that one is a competent member of the moral community who has standing that is “normally accorded an adult member of the society” (Scanlon, 1998: 253). For example, if I, as an adult, were not permitted to choose my bedtime, this would be demeaning and infantilizing. This is so even if a sleep specialist choosing my bedtime would result in outcomes better for my circadian rhythm and other physiological markers. My being able to choose reflects the judgment that I am a “competent, independent adult” (Scanlon, 1998: 253). This is the value that is risked when one is denied the opportunity to make certain choices, ones that, in a given social context, are choices that “people are normally expected to make … for themselves” (Scanlon, 1998: 253).

These are the three candidates for the value generated through making a choice. The first is an instrumental source of value; the latter two are noninstrumental sources. This taxonomy may not exhaust the kinds of value generated in making a choice, but it does capture three important ones. Thus, if a choice is abdicated, (at least) these three kinds of value are at risk and are thus potential candidates for the value that would be lost.

Returning to the context of interviewing, when firms conduct interviews, they are making choices about whom to employ. So, let’s now turn to how the value of choice account bears on interviewing. We will discuss each sort of value generated through choice—instrumental, representative, and symbolic—in turn.

The first is the instrumental value of choice. Securing instrumental value is the chief value with which the traditional view of interviewing is concerned. The thought goes as follows: interviewing realizes instrumental value to the extent that it helps the firm predict a candidate’s performance and fit. Those who are inclined to preserve interviews on the basis of the traditional view of interviewing might expect that the instrumental value of choice realized in interviewing—helping a firm better predict a candidate’s performance and fit—both explains why we interview and justifies the practice’s costs.

Yet the instrumental value of interviewing is precisely what is called into question by the interview puzzle. Interviewing does not excel at generating the purported instrumental value that it is thought to elicit (namely, predicting future performance and fit). So, if the sole kind of value that could be generated through interviewing is instrumental value, then the grounds for the practice are undermined. But as the value of choice account tells us, there is a wider range of contenders for the kinds of value generated in making a choice. The critical oversight of the traditional view is its failure to recognize that the value generated through interviewing is not entirely conditional on the instrumental value of choice, given that there can be noninstrumental value generated through the choice.

This brings us to the second potential value—one overlooked by the traditional view—that is realized through interviews: the representative value of choice. As Scanlon (1998: 253) points out, we value and want certain choices to “result from and hence to reflect [our] own taste, imagination, and powers of discrimination and analysis.” In the interview context, we may value the fact that we are the ones choosing with whom we work, and there is value lost (i.e., representative value) when we abdicate that choice, even if our choosing does not realize the ends of predicting performance and fit as effectively as an algorithm would. An algorithm might be better at predicting which romantic partner we should date, whom we should befriend, or which university we should attend—yet even if this is all correct, abdicating these choices and deferring to an algorithm would mean losing something of value: representative value. Choosing to whom we relate in the workplace is a way “to see features of ourselves manifested in actions and their results” (Scanlon, 1998: 252). The representative value of a choice is the value that arises in virtue of the choice taking on a different meaning: because of both the fact of who makes the choice and the choice representing or expressing the person’s judgments, desires, and attitudes.

The third value generated through interviewing, and another oversight of the traditional view of interviewing, is the symbolic value of choice. Scanlon (2019: 4) points out, “If it is generally held in one’s society that it is appropriate for people in one’s position to make certain decisions for themselves, then failing to make such a decision for oneself or being denied the opportunity to make it, can be embarrassing, or even humiliating.” Thus the symbolic value of choice is what is lost when a person for whom it would be appropriate (in a given social context) to make a certain decision is precluded from making that decision. For example, to the extent that workplace norms in a given society involve members of an organization typically having a choice in their future colleagues—people with whom they would collaborate but also, in some cases, those whom they would befriend or with whom they would commiserate and form community (Casciaro, 2019; Estlund, 2003; Porter, Woo, Allen, & Keith, 2019)—through interviewing, depriving people of that choice may result in a loss of symbolic value. [Footnote 15] Relatedly, a certain prestige and status are implicated in making certain choices (including selecting future colleagues through interviewing) that figure into the symbolic value of choice; this is especially vivid, for example, when alumni of a university are involved in on-campus recruiting at their alma mater (Binder, Davis, & Bloom, 2015). This prestige and status are also part of what would be lost were firms to forsake interviews. Crucially, substituting interviews with algorithms can result in a loss of symbolic value even if, as a matter of fact, an algorithm may arrive at a better assessment of a candidate’s expected performance and fit. [Footnote 16]

Although the representative value of choice and the symbolic value of choice may seem similar, especially because, as Scanlon (1998: 253) puts it, “representative and symbolic value may be difficult to distinguish in some cases,” they are not the same. Symbolic value concerns how making certain choices reflects one’s standing, whereas representative value concerns how the meaning of a certain outcome depends on who makes the choice that elicited it. Despite these differences, both are kinds of noninstrumental value, and neither depends on the instrumental effectiveness of the choice with respect to some end (Aristotle, 1962; Donaldson, 2021; Donaldson & Walsh, 2015; Gehman, Treviño, & Garud, 2013; Kant, 2012; O’Neill, 1992; Zimmerman & Bradley, 2019).

Our interviewing practices can be vindicated once we recognize that the choice involved in the interview process can realize both representative and symbolic value. The key point is that “the reasons people have for wanting outcomes to be dependent on their choices often have to do with the significance that this dependence itself has for them, not merely with its efficacy in promoting outcomes that are desirable on other grounds” (Scanlon, 1998: 253). And the fact that representative and symbolic value are threatened when abdicating the choice involved in interviewing a candidate—the choice of whom to relate to in the workplace—generates a pro tanto moral reason to preserve interviews as an organizational practice. Crucially, the representative and symbolic value undergirding our interview practices is not imperiled by the behavioral or algorithmic threats.

In other words, once we recognize the broader range of contenders for the kinds of value generated through interviewing, we can see that the behavioral and algorithmic threats only undermine part of the potential value in interviewing—its instrumental value. But we still have pro tanto moral reason to continue the practice of interviewing, given the noninstrumental value—representative and symbolic value—that may be lost were we to abandon the practice.

4. CLARIFICATIONS AND OBJECTIONS

We now turn our attention to a few clarifications and some potential objections. First, it’s worth keeping in mind that even the noninstrumental values in a choice do not always tell in favor of preserving, rather than abdicating, a choice. For example, with respect to representative value, we might prefer, in some circumstances, for our choices not to reflect our judgments, desires, and attitudes. If one’s organization is considering hiring one’s close friend, one might prefer to have the “question of who will get a certain job (whether it will be my friend or some well-qualified stranger) not depend on how I respond when presented with the choice: I want it to be clear that the outcome does not reflect my judgment of their respective merits or my balancing of the competing claims of merit and loyalty” (Scanlon, 1998: 252). In other words, in circumstances that might present a conflict of interest, for example, there might be reasons related to representative value that tell against preserving the choice.

Second, the value of choice is not simply about having a greater number of options from which to select. This is to say, the value of choice generates reasons that “count in favor of ‘having a choice,’ but for reasons of all three kinds having more choice (over a wider range of alternatives) is not always better than less. Being faced with a wider range of alternatives may simply be distracting, and there are some alternatives it would be better not to have” (Scanlon, 2019: 4). So, in the context of interviewing, we remain agnostic about how the value of choice is affected by having more candidates from whom to select.

Third, one might doubt whether symbolic value would in fact be risked were we to forgo interviews. The point might be pressed as follows: because many (or even most) employees are not involved in hiring decisions, it is not clear that symbolic value would be lost (or that the failure to be involved in the interview process would be demeaning).Footnote 17 We grant that symbolic value may not be risked in many instances of abdicating a choice. But this clarification points the way to an advantage of our value of choice account: its contextual sensitivity. As Scanlon (1998: 253) notes, a key point with respect to whether symbolic value is risked in a given situation is whether the situation is one “in which people are normally expected to make choices of a certain sort for themselves.” Ascertaining whether there is such an expectation in place in a given hiring context and, in turn, whether symbolic value would be lost will depend on certain sociological facts pertaining to the expectations in the given workplace and the norms governing that workplace culture, field, or industry.Footnote 18 This means that there is an important role for empiricists to play in ascertaining the workplace contexts, fields, or industries in which symbolic value is risked to a greater or lesser extent. And in contexts in which the norms associated with choosing the members of one’s organization are weaker, the reasons provided by the symbolic value of choice would be correspondingly weaker.

Fourth, one might raise the following question: what about organizations that outsource hiring to an external head-hunting firm? On our view, such an approach would, in effect, be morally akin to abdicating the choice to an algorithm, with respect to the value of choice. That said, there might be other sorts of considerations—for example, the various objections discussed in the algorithmic ethics literature mentioned earlier—that make relying on algorithms morally worse than abdicating the choice to an external head-hunting firm. Still, with respect to the value of choice, the two approaches are indeed morally akin. But this need not mean that there is no role for external head-hunting firms at all. This is because the concerns with respect to the value of choice primarily arise insofar as the firm defers to the judgment of the external head-hunting firm. This, however, does not preclude soliciting advice about hiring decisions from HR consultants or head-hunting firms. Notably, in the context of algorithms, deference is much more likely given that many algorithms are opaque. Moreover, failing to defer to the judgments of the algorithm—that is, picking and choosing on a case-by-case basis when to follow its prescriptions—drastically undercuts its overall instrumental benefits (Bishop & Trout, 2005).

Fifth, perhaps, all things considered, in some instances the costs of interviewing may be too burdensome and a firm might be forced to forgo the practice. Perhaps, in other instances, the importance of finding the right person is far too weighty—for example, selecting an airline pilot—for a human to make the decision if an algorithm would do so more effectively. But even in these cases, were we to abandon interviewing for a different selection method (e.g., an algorithm), it’s worth keeping in mind that there may still be something of normative significance lost, that is, representative or symbolic value.Footnote 19

How might these trade-offs be managed? One potential approach might be as follows: suppose one regards instrumental value to be of much greater significance in the business realm than the sorts of noninstrumental value to which we’ve drawn attention. In such a case, a hybrid approach might be considered: conduct the initial screening with an algorithm and leave the ultimate decision to a member of the organization. This may reduce the potential trade-off between the instrumental and the noninstrumental sources of the value of choice.Footnote 20
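To make the hybrid approach concrete, the following is a minimal sketch of such a pipeline, offered in the spirit of our argument rather than as a prescription drawn from the literature; the function names, the scoring model, and the shortlist size are all hypothetical:

```python
# A hypothetical sketch of the hybrid approach: an algorithm performs the
# initial screening, and the ultimate decision is left to a member of the
# organization. All names and parameters here are illustrative.
from typing import Callable

def hybrid_hire(candidates: list[str],
                predict_fit: Callable[[str], float],
                human_decision: Callable[[list[str]], str],
                shortlist_size: int = 5) -> str:
    # Step 1 (instrumental value): rank all candidates by the algorithm's
    # predicted performance/fit and retain only the top few.
    shortlist = sorted(candidates, key=predict_fit, reverse=True)[:shortlist_size]
    # Step 2 (value of choice): interviews and the final selection remain
    # with a human, preserving representative and symbolic value.
    return human_decision(shortlist)
```

On this division of labor, the algorithm’s comparative advantage in prediction is retained at the screening stage, while the choice that carries representative and symbolic value is still made by a member of the organization.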

In other words, our view is not that, in instances when an algorithm is vastly superior at achieving a given end, firms should pursue the drastically less instrumentally effective approach. As Scanlon (2019: 4) notes, the various reasons for the value of choice “can conflict with reasons of other kinds, particularly with instrumental reasons.” So, we are not claiming that firms must always conduct interviews instead of using algorithms. Nor are we claiming that the instrumental considerations are not of moral significance—in some instances, they may very well be of overriding moral importance.Footnote 21 Rather, our point is that multiple kinds of value can be generated through the practice of interviewing—including sources of value that may generate conflicting reasons—and that an adequate theory of interviewing should not overlook this fact. If we are to abdicate interviews in a given context, we should do so in full view of the kinds of value that are risked.Footnote 22

Sixth, it’s now worth revisiting the assumption we articulated at the end of section 2: treating the interview process as equivalent to a hiring process with human decision makers. As we acknowledged, this assumption is not always, strictly speaking, correct. A hiring process—including one in which humans are making the decisions—might not involve interviews at all; perhaps the hiring process involves choosing on the basis of work samples or tests.

So, when we relax this assumption, what follows? Our view would still imply that abdicating the hiring process entirely to algorithms would risk the various values of choice. However, our value of choice account does not entail a particular mode of choosing for a human decision maker—whether interviews, work samples, or tests. With respect to the narrow range of professions where work samples or tests can aptly be implemented, our value of choice arguments are neutral between choosing such an approach and interviewing (but of course, the value of choice account is not neutral between either of these routes and abdicating the choice to an algorithm).Footnote 23 Interviews are a way—the most prominent and common way, and the way most broadly applicable across a range of positions—for us to choose the members of our organizations, but they are indeed not the only way to choose in the hiring process.

To summarize, we have offered an account of some heretofore underappreciated normative dimensions of a widespread business practice, namely, interviewing. Our view helps address some of the challenges to which the traditional conception of interviewing succumbs. The traditional view has difficulty explaining why interviews persist and justifying why we should not abandon them, given their costs, our poor ability to predict performance and fit, and the presence of algorithmic alternatives. Our value of choice theory of interviewing both explains why interviews persist and justifies why there are grounds not to abandon the practice: interviews play an important normative function by securing noninstrumental sources of value in hiring.

5. FUTURE AVENUES OF RESEARCH

Our value of choice account of interviewing suggests several new avenues of research. First, a significant body of research in employment ethics primarily emphasizes the ethics of how employers ought to treat their employees (Arnold, 2010; Barry, 2007; Bhargava, 2020; Brennan, 2019; McCall, 2003; Werhane, Radin, & Bowie, 2004), but there is much less work, apart from that on discrimination-related issues, on the ethics of what is owed to prospective employees. Our work highlights the significance of a range of understudied issues to explore in this domain. Although some have explored the question of what is owed to former employees of a firm (Kim, 2014), what, if anything, is owed to potential employees, such as candidates who participate in interviews? Other such issues include, for example, the ethics of exploding offers, accepting applications from candidates who will never be considered, and alerting candidates of rejection. On the side of the candidate, issues include the ethics of feigning enthusiasm for an interview, pursuing an interview merely to solicit an external offer for negotiation leverage, and holding on to offers that one is confident one will not accept.

Second, our account of interviewing points the way to questions related to what may make employment relationships meaningful (Robertson, O’Reilly, & Hannah, 2020). Some contributors to the future of work scholarly conversation have argued that employers owe it to their employees to provide meaningful work (Bowie, 1998; Kim & Scheller-Wolf, 2019; Michaelson, 2021; Veltman, 2016).Footnote 24 By attending to the broader range of values associated with interviewing, managers may have the opportunity to make work and employment relationships more meaningful (Bartel, Wrzesniewski, & Wiesenfeld, 2012; Freeman, Harrison, Wicks, Parmar, & De Colle, 2010; Rosso, Dekas, & Wrzesniewski, 2010). So, an important question to address will be how the process of being selected for a position (i.e., through an interview or through selection by way of an algorithm) can contribute to preserving or promoting the meaningfulness of work (Carton, 2018; Grant, 2012; Jiang, 2021; Kim, Sezer, Schroeder, Risen, Gino, & Norton, 2021; Rauch & Ansari, 2022).

Third, there is a sense in which using algorithms in hiring decisions deepens the informational asymmetry between candidates and employers (Curchod, Patriotta, Cohen, & Neysen, 2020; Yam & Skorburg, 2021: 614). Switching to algorithms in hiring may prevent candidates from developing a better understanding of their prospective colleagues and the prospective employer’s workplace culture and norms. By contrast, had an interview been conducted, the candidate might have acquired this sort of valuable information, even if fallibly. Future scholars should explore the public policy implications of forgoing interviews, especially in jurisdictions with employment at will. The symmetrical right to exit is sometimes discussed as a potential justification for employment at will (Bhargava & Young, 2022; Hirschman, 1970; Maitland, 1989; Taylor, 2017). But when candidates and employers enter the employment relationship on starkly asymmetric informational grounds (Caulfield, 2021), it’s worth exploring whether the fact that both parties have a right to exit the relationship loses some of its justificatory force with respect to employment at will and considering whether supplementary regulatory constraints would be in order.

6. CONCLUSION

The traditional view of interviewing espoused by practitioners and management scholars alike holds that interviews are conducted—despite the steep costs associated with the process—to predict a candidate’s performance and fit in relation to a vacancy. We argue that the traditional view faces a twofold threat: the behavioral and the algorithmic threats. The behavioral threat arises in virtue of a large body of behavioral evidence that points to our being poor predictors of future performance and bad judges of fit. The algorithmic threat arises in virtue of algorithms already being better predictors of performance and fit than we are in a number of domains, including the hiring domain.

If the traditional view of interviewing captures all there is to interviewing, then the justification for conducting interviews is undermined by the behavioral and algorithmic threats. However, we argue that the practice of interviewing can be vindicated once we recognize that there is a broader range of contenders for the kinds of value that can be realized through interviewing—crucially, some of the kinds of noninstrumental value realized through interviewing remain insulated from the behavioral and algorithmic threats. In short, we argue that even if algorithms are better predictors of performance and fit than we are, it does not follow that we ought to abandon our interview practices: this is because important kinds of noninstrumental value are generated through interviewing that could be lost were we to forgo the practice.

Acknowledgments

The authors contributed equally. For helpful comments, feedback, or conversation, we thank Alan Strudler, Ben Bronner, Carson Young, Esther Sackett, Gui Carvalho, JR Keller, Julian Dreiman, Matthew Bidwell, Matthew Caulfield, Peter Cappelli, Robert Prentice, Samuel Mortimer, Sonu Bedi, Suneal Bedi, Thomas Choate, Thomas Donaldson, and audiences at the 2019 Summer Stakeholder Seminar at the University of Virginia’s Darden School of Business, the 2021 Society for Business Ethics meeting, the Georgetown Institute for the Study of Markets and Ethics, and the Dartmouth Ethics Institute. We are also grateful to associate editor Jeffrey Moriarty and three anonymous reviewers for their helpful feedback.

Vikram R. Bhargava (corresponding author) is an assistant professor of strategic management and public policy at the George Washington University School of Business. He received a joint PhD from the University of Pennsylvania’s Wharton School and Department of Philosophy.

Pooria Assadi is an assistant professor of management and organizations in the College of Business at California State University, Sacramento. He received his PhD in strategic management from Simon Fraser University’s Beedie School of Business and was a visiting scholar at the University of Pennsylvania’s Wharton School.

Footnotes

1 This is not to say that all contemporary HRM scholars endorse the efficacy of interviews toward their stated ends. Indeed, a number of HRM scholars doubt the effectiveness of interviews at predicting future performance and fit. The key point is that, even where they are skeptical of this efficacy, they nevertheless agree that the nature and function of interviews are to predict future performance and to assess candidate fit.

2 We note that with respect to a range of candidates, especially ones with more experience, the evaluation process is often mutual (i.e., a candidate may be evaluating whether a position at a given firm would satisfy the candidate’s needs).

3 Two types of fit characterized in a number of HRM textbooks include “person–job fit,” the candidate’s fit in relation to the role (Dessler, 2020; Mathis, Jackson, Valentine, & Meglich, 2016; Mondy & Martocchio, 2016), and “person–organization fit,” the candidate’s fit in relation to the organization (Dessler, 2020; Mondy & Martocchio, 2016).

4 Although the focus of our article is on employers, candidates bear significant costs too. For example, candidates must expend resources to sort through job opportunities, schedule commitments, and purchase new professional attire, among other costs. Relatedly, expending effort and time on interviewing could involve intangible short- and long-term opportunity costs that take candidates away from other productive activities. Furthermore, the psychological effects of the interview process can be onerous for the candidates. Although, in our article, we primarily highlight the costs employers bear, we acknowledge that the costs candidates bear ought to be taken seriously in their own right.

5 Recent research suggests that part of why overconfidence persists, despite its considerable costs, is the status benefits it confers; moreover, these status benefits largely persist, even when the person’s overconfidence is exposed (Anderson, Brion, Moore, & Kennedy, 2012; Kennedy, Anderson, & Moore, 2013).

6 See also Oskamp’s (1965) study of the clinical decisions of psychologists, which shows that the accuracy of their decisions does not increase significantly with additional information from interviews (but confidence in their decision-making steadily increases).

7 The average validity of the structured interviews (at about 0.51) is greater than the average validity of the unstructured interviews (at about 0.38) and far greater than the average validity of poorly conducted unstructured interviews (Schmidt & Hunter, 1998: 267).

8 With respect to the predictive power of structured interviews, they “predict performance in job training programs with a validity of about .35” (Schmidt & Hunter, 1998: 267).

9 The weight for each cue reflects its importance and is assigned based on the comparison of any given cue to a large set of data on performance (Bishop & Trout, 2005).
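As a rough illustration of how such cue weights might be assigned (our own sketch, not drawn from Bishop and Trout; the cues, data, and figures are invented), the weights can be fit to historical performance data and a new candidate then scored as a weighted sum:

```python
# A hypothetical illustration of a weighted-cue statistical prediction
# rule (SPR): weights are fit against past performance data, and a new
# candidate is scored as a weighted sum of cues. All data are invented.
import numpy as np

# Rows: past hires; columns: cues (e.g., test score, structured-interview
# rating, years of relevant experience).
cues = np.array([
    [0.82, 0.70, 3.0],
    [0.65, 0.90, 5.0],
    [0.90, 0.60, 2.0],
    [0.55, 0.80, 7.0],
])
performance = np.array([0.74, 0.86, 0.69, 0.81])  # observed job performance

# Least-squares fit: each weight reflects how strongly its cue tracks
# performance in the historical data.
weights, _, _, _ = np.linalg.lstsq(cues, performance, rcond=None)

# The SPR's prediction for a new candidate is the weighted sum of cues.
new_candidate = np.array([0.78, 0.85, 4.0])
print(float(new_candidate @ weights))
```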

10 What about the possibility of complementing algorithmic predictions with human oversight? In other words, one might be tempted by the thought that a firm should use both algorithms and its own judgment; that is, one should consider the predictions of the algorithms, but vet these predictions against one’s own assessment of the candidate. After all, algorithms will, at least on occasion, offer what seem to be obviously mistaken prescriptions. And if one’s intuition contradicts what the algorithm is prescribing in such a case, one might defect from the algorithmic strategy.

Although tempting, this strategy faces serious problems. A crucial lesson from the literature on how to benefit from SPRs (including decision assistance algorithms) is that partial or selective compliance with the strategy results in significantly worse overall outcomes (Bishop & Trout, 2005; Dawes, Faust, & Meehl, 1989; Meehl, 1957). This has been confirmed on multiple occasions in the laboratory context and is a problem in contexts as wide-ranging as medical decision systems and criminal recidivism, as well as in interviews (Bishop & Trout, 2005: 46–47, 91; Goldberg, 1968; Leli & Filskov, 1984; Sawyer, 1966). Specifically, when one opts for a selection strategy based on an SPR (such as an algorithm) but then defects from this strategy on a case-by-case basis—because this particular case seems unique—this yields worse overall outcomes (Bishop & Trout, 2005). This is so even if there is a strong sense that the particular circumstance at hand is somehow exceptional (see the literature on the “broken leg problem” [Bishop & Trout, 2005: 45–46; Dawes et al., 1989; Meehl, 1957] when the decision maker “comes to believe she has strong evidence for defecting from the strategy” [Bishop & Trout, 2005: 46]). In other words, to secure the most overall instrumental benefits of an algorithm, its advice generally cannot be taken à la carte.

11 We recognize that, in some instances, algorithms risk amplifying our biases and can further entrench bad organizational cultures (because firms would use their own past HR decisions as data sets, which can in turn deepen morally untoward hiring practices). In such cases, this is indeed a significant added concern with using algorithms in lieu of humans. This, of course, would undermine the strength of our characterization of the algorithmic threat and, in turn, lessen the force of the puzzle we raise for the traditional view of interviewing, but it does not undermine our ultimate thesis that there are strong grounds for preserving the practice of interviewing—indeed, this would amount to a further independent consideration that supports our thesis.

12 The algorithm will continue to replicate and exacerbate any bias generated by measurement errors caused by mislabeled data.

13 See also the related debate concerning trade-offs between interpretability and accuracy (London, 2019).

14 We are grateful to an anonymous reviewer for raising this concern.

15 For a discussion of the downsides of workplace friendships for organizations, see Pillemer and Rothbard (2018).

16 It is worth noting that the term algorithm is often used to refer to multiple different kinds of processes, systems, and technologies (Leavitt, Schabram, Hariharan, & Barnes, 2021). For instance, some algorithms are rule-based (or symbolic) systems, whereas others are association-based systems. Within these broad and rough categories are many varieties of algorithms and ways in which they might be combined and used. For the purposes of our argument, we put to one side the details regarding the technical specifications of algorithms while merely noting that the extent to which a value of choice is undermined by abdicating the choice to an algorithm may also depend on the type and nature of the algorithm.

17 We are grateful to an anonymous reviewer for this point. We also acknowledge that many hiring decisions are made by internal HR divisions. But it is worth noting that even if the members of these HR divisions may not ultimately work with the people they are hiring (unless, of course, the interview is for an HR position), they are themselves usually employees of the organization too. Moreover, in a number of fields, it is not uncommon in the final rounds of interviews for candidates to be interviewed by individuals who would be their immediate team members and managers if selected for the position.

18 Suppose a firm is deciding on candidates as a collective by using some sort of majoritarian procedure that nevertheless results in an outcome that is no individual’s most preferred choice (List & Pettit, 2011; Pettit, 2007). First, does the individual’s choice still matter? Our aim in this article is not to enter the debate regarding the metaphysics and morality of group agents. That said, we note that the value of choice of the individual still matters, given that it is a key component of fixing the collective’s choice. It is quite unlike cases in which an individual’s choice (arguably) may not matter due to an outcome being causally overdetermined. That an individual’s most preferred choice was not instantiated is a different matter from the value realized through making the choice. Second, such a collective decision procedure seems morally unobjectionable—could automating it render it objectionable? It may very well, albeit perhaps not for reasons related to the value of choice. This is because automating a procedure can change its very nature, morally speaking, for reasons of the sort discussed in the algorithmic ethics literature. We are grateful to an anonymous reviewer for these two questions.

19 Quite apart from the representative or symbolic value that is risked when abdicating a choice to an algorithm are concerns about how doing so might undermine organizational learning (Balasubramanian, Ye, & Xu, 2022).

20 Of course, as earlier noted, picking and choosing when to comply with the predictions of the algorithm significantly undercuts the overall instrumental benefits of the algorithm (Bishop & Trout, 2005). Insofar as one pursues such a hybrid approach, it’s worth keeping in mind that the various other moral objections to the use of algorithms discussed in the algorithmic ethics literature would still be relevant.

21 Suppose a physician faces two options: interpret medical images herself or rely on a predictive algorithm. Further suppose that the algorithm yields better instrumental results with respect to patient welfare. Must the physician insist on making the choice herself? Our view does not rule out that the physician should rely on the algorithm here—in other words, there may very well be cases in which the good or bad at stake is so weighty that the instrumental value of relying on the algorithm swamps the various values of choice that may be realized in making the choice oneself. We are grateful to an anonymous reviewer for this example.

22 Our argument is neither about the badness of having fewer choices to make nor about the goodness of having more choices to make (nor is it about preserving the status quo number of choices one makes). With respect to the value of choice, that some other choice is made (e.g., to defer to an algorithm) has little bearing on whether, in what way, and to what extent one of the values of choice would be undermined in abdicating this choice. Adding a choice elsewhere doesn’t somehow replenish the value of choice that is undermined in no longer choosing one’s colleagues.

23 Of course, the various ways in which we are bad at interviewing, characterized in our discussion of the behavioral threat, might tell in favor of choosing by way of these alternative modes of selection (e.g., tests or work samples) when possible. But we hesitate to make this judgment with confidence, given that different kinds of normative concerns may be associated with relying strictly on work samples or tests; for example, doing so potentially reduces people to a contrived and narrow set of criteria, rather than treating them with respect as individuals and as fellow members of the moral community. For an additional approach to hiring, see Sterling and Merluzzi’s (2019) exploration of “tryouts” and their theoretical and practical potential.

24 For a comprehensive discussion of the future of the office, specifically the decisions of firms and employees to work remotely, in a hybrid form, or at an office, see Cappelli (2021).

REFERENCES

Anderson, C., Brion, S., Moore, D. A., & Kennedy, J. A. 2012. A status-enhancement account of overconfidence. Journal of Personality and Social Psychology, 103(4): 718–35.
Anthony, C. 2021. When knowledge work and analytical technologies collide: The practices and consequences of black boxing algorithmic technologies. Administrative Science Quarterly, 66(4): 1173–212.
Aristotle. 1962. Nicomachean ethics (Ostwald, M., Trans.). London: Macmillan.
Arnold, D. G. 2010. Working conditions: Safety and sweatshops. In Brenkert, G. G. & Beauchamp, T. L. (Eds.), The Oxford handbook of business ethics: 628–54. New York: Oxford University Press.
Balasubramanian, N., Ye, Y., & Xu, M. 2022. Substituting human decision-making with machine learning: Implications for organizational learning. Academy of Management Review, 47(3): 448–65.
Barry, B. 2007. The cringing and the craven: Freedom of expression in, around, and beyond the workplace. Business Ethics Quarterly, 17(2): 263–96.
Bartel, C. A., Wrzesniewski, A., & Wiesenfeld, B. M. 2012. Knowing where you stand: Physical isolation, perceived respect, and organizational identification among virtual employees. Organization Science, 23(3): 743–57.
Bartlett, R., Morse, A., Stanton, R., & Wallace, N. 2019. Consumer-lending discrimination in the FinTech era. Working paper no. w25943, National Bureau of Economic Research, Cambridge, MA.
Bedi, S. 2021. The myth of the chilling effect. Harvard Journal of Law and Technology, 35(1): 267–307.
Behroozi, M., Shirolkar, S., Barik, T., & Parnin, C. 2020. Does stress impact technical interview performance? In Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering: 481–92. New York: Association for Computing Machinery.
Benn, C., & Lazar, S. 2022. What’s wrong with automated influence? Canadian Journal of Philosophy, 52(1): 125–48.
Bhargava, V. R. 2020. Firm responses to mass outrage: Technology, blame, and employment. Journal of Business Ethics, 163(3): 379–400.
Bhargava, V. R., & Velasquez, M. 2019. Is corporate responsibility relevant to artificial intelligence responsibility? Georgetown Journal of Law and Public Policy, 17: 829–51.
Bhargava, V. R., & Young, C. 2022. The ethics of employment-at-will: An institutional complementarities approach. Business Ethics Quarterly, 32(4): 519–45.
Billsberry, J. 2007. Experiencing recruitment and selection. West Sussex, UK: John Wiley.
Binder, A. J., Davis, D. B., & Bloom, N. 2015. Career funneling: How elite students learn to define and desire “prestigious” jobs. Sociology of Education, 89(1): 20–39.
Birhane, A. 2021. Algorithmic injustice: A relational ethics approach. Patterns, 2(2): 100205.
Bishop, M. A., & Trout, J. D. 2005. Epistemology and the psychology of human judgment. New York: Oxford University Press.
Bloom, R. F., & Brundage, E. G. 1947. Predictions of success in elementary school for enlisted personnel. In Stuit, D. B. (Ed.), Personnel research and test development in the Bureau of Naval Personnel: 233–61. Princeton, NJ: Princeton University Press.
Bohnet, I. 2016. How to take the bias out of interviews. Harvard Business Review, April 18.
Bowie, N. E. 1998. A Kantian theory of meaningful work. Journal of Business Ethics, 17(9): 1083–92.
Brennan, J. 2019. Should employers pay a living wage? Journal of Business Ethics, 157(1): 15–26.
Buckley, M. R., Norris, A. C., & Wiese, D. S. 2000. A brief history of the selection interview: May the next 100 years be more fruitful. Journal of Management History, 6(3): 113–26.
Cappelli, P. 2019a. Data science can’t fix hiring (yet). Harvard Business Review, May–June.
Cappelli, P. 2019b. Your approach to hiring is all wrong. Harvard Business Review, May–June.
Cappelli, P. 2020. 4 things to consider before you start using AI in personnel decisions. Harvard Business Review, November 3.
Cappelli, P. 2021. The future of the office: Work from home, remote work, and the hard choices we all face. Philadelphia: Wharton School Press.
Cappelli, P., Tambe, P., & Yakubovich, V. 2020. Can data science change human resources? In Canals, J. & Heukamp, F. (Eds.), The future of management in an AI world: Redefining purpose and strategy in the fourth industrial revolution: 93–115. Cham, Switzerland: Springer.
Carton, A. M. 2018. “I’m not mopping the floors, I’m putting a man on the moon”: How NASA leaders enhanced the meaningfulness of work by changing the meaning of work. Administrative Science Quarterly, 63(2): 323–69.
Casciaro, T. 2019. Networks and affect in the workplace. In Brass, D. J. & Borgatti, S. P. (Eds.), Social networks at work: 219–38. New York: Routledge.
Caulfield, M. 2021. Pay secrecy, discrimination, and autonomy. Journal of Business Ethics, 171(2): 399–420.
Chalfin, A., Danieli, O., Hillis, A., Jelveh, Z., Luca, M., Ludwig, J., & Mullainathan, S. 2016. Productivity and selection of human capital with machine learning. American Economic Review, 106(5): 124–27.
Chamorro-Premuzic, T., & Akhtar, R. 2019. Should companies use AI to assess job candidates? Harvard Business Review, May 17.
Conway, J. M., Jako, R. A., & Goodman, D. F. 1995. A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80(5): 565–79.
Cornell, N. 2015. A third theory of paternalism. Michigan Law Review, 113(8): 1295–336.
Cowgill, B. 2019. Bias and productivity in humans and algorithms: Theory and evidence from résumé screening. Working paper, Columbia University, New York.
Creel, K., & Hellman, D. 2022. The algorithmic leviathan: Arbitrariness, fairness, and opportunity in algorithmic decision-making systems. Canadian Journal of Philosophy, 52(1): 26–43.
Curchod, C., Patriotta, G., Cohen, L., & Neysen, N. 2020. Working for an algorithm: Power asymmetries and agency in online work settings. Administrative Science Quarterly, 65(3): 644–76.
Dagan, H. 2019. The value of choice and the justice of contract. Jurisprudence, 10(3): 422–33.
Dana, J., Dawes, R., & Peterson, N. 2013. Belief in the unstructured interview: The persistence of an illusion. Judgment and Decision Making, 8(5): 512–20.
Danaher, J. 2016. Robots, law and the retribution gap. Ethics and Information Technology, 18(4): 299–309.
Danieli, O., Hillis, A., & Luca, M. 2016. How to hire with algorithms. Harvard Business Review, October 17.
Davis, S. J., Faberman, R. J., & Haltiwanger, J. C. 2012. Recruiting intensity during and after the Great Recession: National and industry evidence. American Economic Review, 102(3): 584–88.
Dawes, R. M. 2001. Everyday irrationality: How pseudoscientists, lunatics, and the rest of us systematically fail to think rationally. Boulder, CO: Westview Press.
Dawes, R. M., Faust, D., & Meehl, P. E. 1989. Clinical versus actuarial judgment. Science, 243(4899): 1668–74.
De Cremer, D., & De Schutter, L. 2021. How to use algorithmic decision-making to promote inclusiveness in organizations. AI and Ethics, 1(4): 563–67.
Dessler, G. 2020. Fundamentals of human resource management (5th ed.). New York: Pearson.
DeVaul, R. A., Jervey, F., Chappell, J. A., Caver, P., Short, B., & O’Keefe, S. 1987. Medical school performance of initially rejected students. JAMA, 257(1): 47–51.
Dobbie, W., Liberman, A., Paravisini, D., & Pathania, V. 2018. Measuring bias in consumer lending. Working paper no. w24953, National Bureau of Economic Research, Cambridge, MA.
Donaldson, T. 2021. How values ground value creation: The practical inference framework. Organization Theory, 2(4): 1–27.
Donaldson, T., & Walsh, J. P. 2015. Toward a theory of business. Research in Organizational Behavior, 35: 181–207.
Dougherty, T. W., Turban, D. B., & Callender, J. C. 1994. Confirming first impressions in the employment interview: A field study of interviewer behavior. Journal of Applied Psychology, 79(5): 659–65.
Duus-Otterström, G. 2011. Freedom of will and the value of choice. Social Theory and Practice, 37(2): 256–84.
Elfenbein, D. W., & Sterling, A. D. 2018. (When) is hiring strategic? Human capital acquisition in the age of algorithms. Strategy Science, 3(4): 668–82.
Estlund, C. 2003. Working together: How workplace bonds strengthen a diverse democracy. New York: Oxford University Press.
Eysenck, H. J. 1954. Uses and abuses of psychology. Baltimore: Penguin Books.
Fischer, J. M. 2008. Responsibility and the kinds of freedom. Journal of Ethics, 12: 203–28.
Freeman, R. E., Harrison, J. S., Wicks, A. C., Parmar, B. L., & De Colle, S. 2010. Stakeholder theory: The state of the art. Cambridge: Cambridge University Press.
Fuster, A., Plosser, M., Schnabl, P., & Vickery, J. 2019. The role of technology in mortgage lending. Review of Financial Studies, 32(5): 1854–99.
Gehman, J., Treviño, L. K., & Garud, R. 2013. Values work: A process study of the emergence and performance of organizational values practices. Academy of Management Journal, 56(1): 84–112.
Gigerenzer, G. 2007. Gut feelings: The intelligence of the unconscious. London: Penguin Books.
Goldberg, L. R. 1968. Simple models of simple processes? Some research on clinical judgments. American Psychologist, 23(7): 483–96.
Grant, A. M. 2012. Leading with meaning: Beneficiary contact, prosocial impact, and the performance effects of transformational leadership. Academy of Management Journal, 55(2): 458–76.
Graves, L. M., & Karren, R. J. 1996. The employee selection interview: A fresh look at an old problem. Human Resource Management, 35(2): 163–80.
Highhouse, S. 2008. Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1(3): 333–42.
Himmelreich, J. 2019. Responsibility for killer robots. Ethical Theory and Moral Practice, 22(3): 731–47.
Hirschman, A. O. 1970. Exit, voice, and loyalty: Responses to decline in firms, organizations, and states. Cambridge, MA: Harvard University Press.
Huffcutt, A. I., Roth, P. L., & McDaniel, M. A. 1996. A meta-analytic investigation of cognitive ability in employment interview evaluations: Moderating characteristics and implications for incremental validity. Journal of Applied Psychology, 81(5): 459–73.
Hunkenschroer, A. L., & Luetge, C. 2022. Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 178(4): 977–1007.
Ignatova, M., & Reilly, K. 2018. The 4 trends changing how you hire in 2018 and beyond. LinkedIn Talent Blog. https://www.linkedin.com/business/talent/blog/talent-strategy/trends-shaping-future-of-hiring.
Jiang, W. Y. 2021. Sustaining meaningful work in a crisis: Adopting and conveying a situational purpose. Administrative Science Quarterly, 66(3): 806–53.
Johnson, D. G. 2015. Technology with no human responsibility? Journal of Business Ethics, 127(4): 707–15.
Kant, I. 2012. Groundwork of the metaphysics of morals (2nd ed., Gregor, M. & Timmermann, J., Trans.). Cambridge: Cambridge University Press.
Kausel, E. E., Culbertson, S. S., & Madrid, H. P. 2016. Overconfidence in personnel selection: When and why unstructured interview information can hurt hiring decisions. Organizational Behavior and Human Decision Processes, 137: 27–44.
Kennedy, J. A., Anderson, C., & Moore, D. A. 2013. When overconfidence is revealed to others: Testing the status-enhancement theory of overconfidence. Organizational Behavior and Human Decision Processes, 122(2): 266–79.
Kim, T. W. 2014. Decent termination: A moral case for severance pay. Business Ethics Quarterly, 24(2): 203–27.
Kim, T. W., & Routledge, B. R. 2022. Why a right to an explanation of algorithmic decision-making should exist: A trust-based approach. Business Ethics Quarterly, 32(1): 75–102.
Kim, T. W., & Scheller-Wolf, A. 2019. Technological unemployment, meaning in life, purpose of business, and the future of stakeholders. Journal of Business Ethics, 160(2): 319–37.
Kim, T., Sezer, O., Schroeder, J., Risen, J., Gino, F., & Norton, M. I. 2021. Work group rituals enhance the meaning of work. Organizational Behavior and Human Decision Processes, 165: 197–212.
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. 2018. Human decisions and machine predictions. Quarterly Journal of Economics, 133(1): 237–93.
Kleinberg, J., & Raghavan, M. 2021. Algorithmic monoculture and social welfare. Proceedings of the National Academy of Sciences of the United States of America, 118(22): e2018340118.
König, C. J., Klehe, U.-C., Berchtold, M., & Kleinmann, M. 2010. Reasons for being selective when choosing personnel selection procedures. International Journal of Selection and Assessment, 18(1): 17–27.
Leavitt, K., Schabram, K., Hariharan, P., & Barnes, C. M. 2021. Ghost in the machine: On organizational theory in the age of machine learning. Academy of Management Review, 46(4): 750–77.
Leli, D. A., & Filskov, S. B. 1984. Clinical detection of intellectual deterioration associated with brain damage. Journal of Clinical Psychology, 40(6): 1435–41.
Li, M. 2020. To build less-biased AI, hire a more-diverse team. Harvard Business Review, October 26.
Lievens, F., Highhouse, S., & DeCorte, W. 2005. The importance of traits and abilities in supervisors’ hirability decisions as a function of method of assessment. Journal of Occupational and Organizational Psychology, 78(3): 453–70.
Lippert-Rasmussen, K. 2011. “We are all different”: Statistical discrimination and the right to be treated as an individual. Journal of Ethics, 15(1–2): 47–59.
List, C., & Pettit, P. 2011. Group agency: The possibility, design, and status of corporate agents. Oxford: Oxford University Press.
London, A. J. 2019. Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1): 15–21.
Lu, J., Lee, D., Kim, T. W., & Danks, D. 2020. Good explanation for algorithmic transparency. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society: 93. New York: Association for Computing Machinery.
Maitland, I. 1989. Rights in the workplace: A Nozickian argument. Journal of Business Ethics, 8(12): 951–54.
Martin, K. 2019. Ethical implications and accountability of algorithms. Journal of Business Ethics, 160(4): 835–50.
Martin, K. 2022. Ethics of data and analytics: Concepts and cases. New York: Taylor and Francis.
Martin, K., & Waldman, A. 2022. Are algorithmic decisions legitimate? The effect of process and outcomes on perceptions of legitimacy of AI decisions. Journal of Business Ethics. DOI: 10.1007/s10551-021-05032-7.
Mathis, R. L., Jackson, J. H., Valentine, S., & Meglich, P. 2016. Human resource management (15th ed.). Boston: Cengage.
Matthias, A. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3): 175–83.
McCall, J. J. 2003. A defense of just cause dismissal rules. Business Ethics Quarterly, 13(2): 151–75.
McCarthy, J. M., Van Iddekinge, C. H., & Campion, M. A. 2010. Are highly structured job interviews resistant to demographic similarity effects? Personnel Psychology, 63(2): 325–59.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. 1994. The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79(4): 599–616.
Meehl, P. E. 1957. When shall we use our heads instead of the formula? Journal of Counseling Psychology, 4(4): 268–73.
Michaelson, C. 2021. A normative meaning of meaningful work. Journal of Business Ethics, 170(3): 413–28.
Milstein, R. M., Wilkinson, L., Burrow, G. N., & Kessen, W. 1981. Admission decisions and performance during medical school. Journal of Medical Education, 56(2): 77–82.
Mondy, W. R., & Martocchio, J. J. 2016. Human resource management (14th ed.). Upper Saddle River, NJ: Pearson.
Muehlemann, S., & Strupler Leiser, M. 2018. Hiring costs and labor market tightness. Labour Economics, 52: 122–31.
Müller, V. C. 2021. Ethics of artificial intelligence and robotics. In Zalta, E. N. (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/.
Munk, C. W. 2021. Tech companies say they can’t find good employees. The companies may be the problem. Wall Street Journal, March 8.
Newman, D. T., Fast, N. J., & Harmon, D. J. 2020. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organizational Behavior and Human Decision Processes, 160: 149–67.
Nyholm, S. 2018. Attributing agency to automated systems: Reflections on human–robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4): 1201–19.
O’Neill, J. 1992. The varieties of intrinsic value. Monist, 75(2): 119–37.
Oskamp, S. 1965. Overconfidence in case-study judgments. Journal of Consulting Psychology, 29(3): 261–65.
Pettit, P. 2007. Responsibility incorporated. Ethics, 117(2): 171–201.
Pillemer, J., & Rothbard, N. P. 2018. Friends without benefits: Understanding the dark sides of workplace friendship. Academy of Management Review, 43(4): 635–60.
Pissarides, C. A. 2009. The unemployment volatility puzzle: Is wage stickiness the answer? Econometrica, 77(5): 1339–69.
Porter, C. M., Woo, S. E., Allen, D. G., & Keith, M. G. 2019. How do instrumental and expressive network positions relate to turnover? A meta-analytic investigation. Journal of Applied Psychology, 104(4): 511–36.
Rahman, H. A. 2021. The invisible cage: Workers’ reactivity to opaque algorithmic evaluations. Administrative Science Quarterly, 66(4): 945–88.
Raisch, S., & Krakowski, S. 2021. Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1): 192–210.
Rambachan, A., Kleinberg, J., Ludwig, J., & Mullainathan, S. 2020. An economic perspective on algorithmic fairness. AEA Papers and Proceedings, 110: 91–95.
Rambachan, A., & Roth, J. 2020. Bias in, bias out? Evaluating the folk wisdom. Leibniz International Proceedings in Informatics, 156: 6:1–6:15.
Rauch, M., & Ansari, S. 2022. Waging war from remote cubicles: How workers cope with technologies that disrupt the meaning and morality of their work. Organization Science, 33(1): 83–104.
Reskin, B. F., & McBrier, D. B. 2000. Why not ascription? Organizations’ employment of male and female managers. American Sociological Review, 65(2): 210–33.
Rivera, L. A. 2012. Hiring as cultural matching: The case of elite professional service firms. American Sociological Review, 77(6): 999–1022.
Robertson, K. M., O’Reilly, J., & Hannah, D. R. 2020. Finding meaning in relationships: The impact of network ties and structure on the meaningfulness of work. Academy of Management Review, 45(3): 596–619.
Roff, H. M. 2013. Killing in war: Responsibility, liability, and lethal autonomous robots. In Allhoff, F., Evans, N. G., & Henschke, A. (Eds.), Routledge handbook of ethics and war: Just war theory in the 21st century: 352–64. London: Routledge.
Rogerson, R., & Shimer, R. 2011. Search in macroeconomic models of the labor market. In Ashenfelter, O. & Card, D. (Eds.), Handbook of labor economics (vol. 4): 619–700. Amsterdam: Elsevier.
Rosso, B. D., Dekas, K. H., & Wrzesniewski, A. 2010. On the meaning of work: A theoretical integration and review. Research in Organizational Behavior, 30: 91–127.
Roulin, N., Bourdage, J. S., & Wingate, T. G. 2019. Who is conducting “better” employment interviews? Antecedents of structured interview components use. Personnel Assessment and Decisions, 5(1): 37–48.
Rudin, C. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5): 206–15.
Rynes, S. L., Colbert, A. E., & Brown, K. G. 2002. HR professionals’ beliefs about effective human resource practices: Correspondence between research and practice. Human Resource Management, 41(2): 149–74.
Sawyer, J. 1966. Measurement and prediction, clinical and statistical. Psychological Bulletin, 66(3): 178–200.
Scanlon, T. M. 1988. The significance of choice. In McMurrin, S. M. (Ed.), The Tanner lectures on human values (vol. 7): 149–216. Salt Lake City: University of Utah Press.
Scanlon, T. M. 1998. What we owe to each other. Cambridge, MA: Belknap Press of Harvard University Press.
Scanlon, T. M. 2013. Responsibility and the value of choice. Think, 12(33): 9–16.
Scanlon, T. M. 2019. Responsibility for health and the value of choice. In The Lanson lecture in bioethics: 1–18. Hong Kong: Chinese University of Hong Kong Centre for Bioethics.
Schmidt, F. L., & Hunter, J. E. 1998. The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124(2): 262–74.
Selbst, A. D., & Powles, J. 2017. Meaningful information and the right to explanation. International Data Privacy Law, 7(4): 233–42.
Simpson, T. W., & Müller, V. C. 2016. Just war and robots’ killings. Philosophical Quarterly, 66(263): 302–22.
Society for Human Resource Management. 2016. 2016 human capital benchmarking report. https://www.shrm.org/hr-today/trends-and-forecasting/research-and-surveys/Documents/2016-Human-Capital-Report.pdf.
Sparrow, R. 2007. Killer robots. Journal of Applied Philosophy, 24(1): 62–77.
Sterling, A., & Merluzzi, J. 2019. A longer way in: Tryouts as alternative hiring arrangements in organizations. Research in Organizational Behavior, 39: 100122.
Stevens, C. K., & Kristof, A. L. 1995. Making the right impression: A field study of applicant impression management during job interviews. Journal of Applied Psychology, 80(5): 587–606.
Susser, D. 2021. Predictive policing and the ethics of preemption. In Jones, B. & Mendieta, E. (Eds.), The ethics of policing: New perspectives on law enforcement: 268–92. New York: NYU Press.
Tambe, P., Cappelli, P., & Yakubovich, V. 2019. Artificial intelligence in human resources management: Challenges and a path forward. California Management Review, 61(4): 15–42.
Tasioulas, J. 2019. First steps towards an ethics of robots and artificial intelligence. Journal of Practical Ethics, 7(1): 61–95.
Taylor, R. S. 2017. Exit left: Markets and mobility in republican thought. Oxford: Oxford University Press.
Tigard, D. W. 2021. There is no techno-responsibility gap. Philosophy and Technology, 34(3): 589–607.
Tong, S., Jia, N., Luo, X., & Fang, Z. 2021. The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42(9): 1600–1631.
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. 2022. The ethics of algorithms: Key problems and solutions. AI and Society, 37(1): 215–30.
van der Zee, K. I., Bakker, A. B., & Bakker, P. 2002. Why are structured interviews so rarely used in personnel selection? Journal of Applied Psychology, 87(1): 176–84.
Véliz, C., Prunkl, C., Phillips-Brown, M., & Lechterman, T. M. 2021. We might be afraid of black-box algorithms. Journal of Medical Ethics, 47(5): 339.
Veltman, A. 2016. Meaningful work. New York: Oxford University Press.
Wachter, S., Mittelstadt, B., & Floridi, L. 2017. Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2): 76–99.
Walker, T. 2022. Value of choice. Journal of Medical Ethics, 48(1): 61–64.
Werhane, P., Radin, T. J., & Bowie, N. E. 2004. Employment and employee rights. Malden, MA: Blackwell.
Wiesner, W. H., & Cronshaw, S. F. 1988. A meta-analytic investigation of the impact of interview format and degree of structure on the validity of the employment interview. Journal of Occupational Psychology, 61(4): 275–90.
Yam, J., & Skorburg, J. A. 2021. From human resources to human rights: Impact assessments for hiring algorithms. Ethics and Information Technology, 23(4): 611–23.
Zimmerman, M. J., & Bradley, B. 2019. Intrinsic vs extrinsic value. In Zalta, E. N. (Ed.), The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/spr2019/entries/value-intrinsic-extrinsic/.