
On Algorithmic Fairness in Medical Practice

Published online by Cambridge University Press:  20 January 2022

Thomas Grote*
Affiliation:
Ethics and Philosophy Lab, Cluster of Excellence: Machine Learning: New Perspectives for Science, University of Tübingen, Tübingen, Germany
Geoff Keeling
Affiliation:
Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
*Corresponding author. Email: [email protected]

Abstract

The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. In this paper, we provide the building blocks for an account of algorithmic bias and its normative relevance in medicine.
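By way of illustration only (this sketch is not drawn from the paper), one measurable sense in which algorithmic bias can arise is a disparity in a diagnostic model's error rates across patient groups. The minimal Python sketch below, using hypothetical predictions and group labels, computes group-wise false negative rates for a binary classifier; all names and data are illustrative assumptions.

```python
# Illustrative sketch only: compare a binary diagnostic model's false negative
# rates across patient groups. Data and group labels are hypothetical.
from collections import defaultdict

def false_negative_rates(y_true, y_pred, groups):
    """Return the false negative rate per group for a binary classifier."""
    fn = defaultdict(int)   # missed positive cases per group
    pos = defaultdict(int)  # truly positive cases per group
    for truth, pred, g in zip(y_true, y_pred, groups):
        if truth == 1:
            pos[g] += 1
            if pred == 0:
                fn[g] += 1
    return {g: fn[g] / pos[g] for g in pos if pos[g] > 0}

# Hypothetical toy example: the model misses more positive cases in group "B".
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(false_negative_rates(y_true, y_pred, groups))  # {'A': 0.333..., 'B': 0.666...}
```

A persistent gap of this kind between groups is one of the respects in which a model may be said to be biased, though, as the paper argues, the normative significance of such disparities requires separate argument.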

Type: Departments and Columns
Copyright: © The Author(s), 2022. Published by Cambridge University Press


Footnotes

This section features original work on ethical, legal, policy and social aspects of the use of computing and information technology in health, biomedical research and the health professions. For submissions, contact Kenneth Goodman at: [email protected].

References

Notes

1. For a recent meta-analysis, see Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. The Lancet Digital Health 2019;1(6):e271–97. doi:10.1016/S2589-7500(19)30123-2.

2. Yim J, Chopra R, Spitz T, Winkens J, Obika A, Kelly C, et al. Predicting conversion to wet age-related macular degeneration using deep learning. Nature Medicine 2020;26:892–9. doi:10.1038/s41591-020-0867-7.

3. Schultebraucks K, Shalev A, Michopoulos V, Grudzen CR, Shin S-M, Stevens JS, et al. A validated predictive algorithm of post-traumatic stress course following emergency department admission after a traumatic stressor. Nature Medicine 2020;26:1084–88. doi:10.1038/s41591-020-0951-z.

4. Zhang S, Bamakan S, Qu Q, Li S. Learning for personalized medicine: A comprehensive review from a deep learning perspective. IEEE Reviews in Biomedical Engineering 2019;12:194–208. doi:10.1109/RBME.2018.2864254.

5. Geirhos R, Jacobsen JH, Michaelis C. Shortcut learning in deep neural networks. 2020. arXiv:2004.07780 [cs.CV].

6. See Caliskan A, Bryson JJ, Narayanan A. Semantics derived automatically from language corpora contain human-like biases. Science 2017;356(6334):183. doi:10.1126/science.aal4230; Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366(6464):447. doi:10.1126/science.aax2342.

7. Ploug T, Holm S. The right to refuse diagnostics and treatment planning by artificial intelligence. Medicine, Health Care and Philosophy 2020;23(1):107–14. doi:10.1007/s11019-019-09912-8; Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics 2020;46(3):205. doi:10.1136/medethics-2019-105586; McDougall RJ. Computer knows best? The need for value-flexibility in medical AI. Journal of Medical Ethics 2019;45(3):156. doi:10.1136/medethics-2018-105118; Bjerring JC, Busch J. Artificial intelligence and patient-centered decision-making. Philosophy & Technology 2021;34:349–71. doi:10.1007/s13347-019-00391-6.

8. Cf. Jordan MI, Mitchell TM. Machine learning: Trends, perspectives, and prospects. Science 2015;349(6245):255. doi:10.1126/science.aaa8415; LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521(7553):436–44. doi:10.1038/nature14539.

9. For reviews, see Topol 2019; see also note 1, Liu et al. 2019.

10. Cf. Lippert-Rasmussen K. Born Free and Equal? A Philosophical Inquiry into the Nature of Discrimination. Oxford, New York: Oxford University Press; 2014, at 30–6.

11. Sinz FH, Pitkow X, Reimer J, Bethge M, Tolias AS. Engineering a less artificial intelligence. Neuron 2019;103(6):971. doi:10.1016/j.neuron.2019.08.034.

12. Cf. Green B. The false promise of risk assessments: Epistemic reform and the limits of fairness. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. FAT* ‘20. New York: Association for Computing Machinery; 2020, at 594–606.

13. Shilo S, Rossman H, Segal E. Axes of a revolution: Challenges and promises of big data in healthcare. Nature Medicine 2020;26(1):29–38. doi:10.1038/s41591-019-0727-5.

14. Cf. Ballantyne A. How should we think about clinical data ownership? Journal of Medical Ethics 2020;46:289–94.

15. Mittelstadt BD, Floridi L. The ethics of big data: Current and foreseeable issues in biomedical contexts. Science and Engineering Ethics 2016;22(2):303–41. doi:10.1007/s11948-015-9652-2; Véliz C. Not the doctor’s business: Privacy, personal responsibility and data rights in medical settings. Bioethics 2020;34(7):712–8. doi:10.1111/bioe.12711.

16. Hummel P, Braun M, Dabrock P. Own data? Ethical reflections on data ownership. Philosophy & Technology 2020. doi:10.1007/s13347-020-00404-9.

17. See note 13, Shilo et al. 2020.

18. Hoffman KM, Trawalter S, Axt JR, Oliver MN. Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites. Proceedings of the National Academy of Sciences 2016;113(16):4296. doi:10.1073/pnas.1516047113.

19. Cf. Holdcroft A. Gender bias in research: How does it affect evidence based medicine? Journal of the Royal Society of Medicine 2007;100(1):2–3. doi:10.1177/014107680710000102.

20. See note 6, Obermeyer et al. 2019.

21. Cf. note 10, Lippert-Rasmussen 2014, at 54–6.

22. Cf. note 10, Lippert-Rasmussen 2014, at 30–5.

23. Hausman DM. What’s wrong with health inequalities? Journal of Political Philosophy 2007;15(1):46–66. doi:10.1111/j.1467-9760.2007.00270.x; Marmot M. Social determinants of health inequalities. The Lancet 2005;365(9464):1099–104. doi:10.1016/S0140-6736(05)71146-6.

24. Lippert-Rasmussen K. When group measures of health should matter. In: Eyal N, Hurst S, Norheim OF, Wikler D, eds. Inequalities in Health: Concepts, Measures, and Ethics. New York: Oxford University Press; 2013, at 52–65.

25. See note 24, Lippert-Rasmussen 2013.

26. Angwin J, Larson J, Mattu S, Kirchner L. Machine Bias; 2016; available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (last retrieved on 1 Apr 2020).

27. Barocas S, Selbst A. Big data’s disparate impact. California Law Review 2016;104:671.

28. For an overview, see Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A. A survey on bias and fairness in machine learning. arXiv:1908.09635; 2019; Barocas S, Hardt M, Narayanan A. Fairness and Machine Learning. fairmlbook.org; 2019.

29. Cf. Hellman D. Measuring algorithmic fairness. Virginia Public Law and Legal Theory Research Paper No. 2019–39 [Internet]; forthcoming; available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3418528.

30. Thanks to an anonymous reviewer for pointing this out.

31. Cf. Rawls J. A Theory of Justice. Cambridge, MA: Harvard University Press; 1971, at §14.

32. Cf. note 27, Barocas, Selbst 2016.

33. Wachter S, Mittelstadt B. A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review 2018(2):443–93.

34. Spencer QNJ. A racial classification for medical genetics. Philosophical Studies 2018;175(5):1013–37. doi:10.1007/s11098-018-1072-0; Yudell M, Roberts D, DeSalle R, Tishkoff S. Taking race out of human genetics. Science 2016;351(6273):564. doi:10.1126/science.aac4951.

35. See note 6, Obermeyer et al. 2019.

36. The complexity of machine learning models raises wider epistemic and ethical concerns with respect to the noninterpretability of algorithmic decisions. We will not address these issues in this paper. Sullivan E. Understanding from machine learning models. British Journal for the Philosophy of Science 2020 (online first) gives a good introduction to the epistemic side of the topic.

37. Kusner M, Loftus J. The long road to fairer algorithms. Nature 2020;578(7793):34–6.

38. See note 37, Kusner, Loftus 2020, at 35.

39. Broome J. Weighing Lives. Oxford: Oxford University Press; 2004, at 38; Broome J. Weighing Goods. Oxford: Blackwell Publishers; 1991.

40. Hooker B. Fairness. Ethical Theory and Moral Practice 2005;8:329–30.

41. Cf. Saunders B. Fairness between competing claims. Res Publica 2010;16:42–4.

42. Broome J. Fairness. Proceedings of the Aristotelian Society 1990;91:87–101; see also Broome J. Weighing Goods. Oxford: Blackwell Publishers; 1991, at 192–200; Broome J. Kamm on fairness. Philosophy and Phenomenological Research 1998;58:955–61; see note 39, Broome 2004, at 37–40; see note 40, Hooker 2005.

43. See note 42, Broome 1998, at 959.

44. See note 39, Broome 2004.

45. Note that this is a simplification. Although claims to medical resources are grounded in medical need, the strength of claims may vary in accordance with other factors such as age. For example, in the current Covid-19 pandemic, an older and a younger person may have the same medical need, in the sense that both have the same probability of survival if put on a ventilator. But it might nevertheless be argued that the older person has a weaker claim to the resource than the younger patient. We are grateful to an anonymous reviewer for pressing us on this point.
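As a purely illustrative aside (not part of the article), Broome's account of fairness, cited in notes 39 and 42, holds that claims should be satisfied in proportion to their strength, and that an indivisible good such as a single ventilator may fairly be allocated by a lottery weighted by claim strength. The minimal Python sketch below shows what such a weighted lottery could look like; the claimant names and claim weights are hypothetical assumptions, not values endorsed by the authors.

```python
# Illustrative sketch only: a lottery for an indivisible resource in which each
# claimant's chance of receiving it is proportional to the strength of their claim.
import random

def weighted_lottery(claims):
    """Pick one claimant with probability proportional to claim strength."""
    claimants = list(claims)
    weights = [claims[c] for c in claimants]
    return random.choices(claimants, weights=weights, k=1)[0]

# Hypothetical claim strengths (e.g., equal medical need, weight adjusted by age).
claims = {"patient_older": 1.0, "patient_younger": 1.2}
print(weighted_lottery(claims))
```

Whether claim strengths should be adjusted by factors such as age, and by how much, is precisely the normative question note 45 flags; the sketch only makes explicit how proportional chances would follow from whatever weights one settles on.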