
How Could We Know When a Robot was a Moral Patient?

Published online by Cambridge University Press:  10 June 2021

Henry Shevlin*
Affiliation: Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge CB2 1SB, United Kingdom
*Corresponding author. Email: [email protected]

Abstract

Within machine ethics, there is growing interest in the question of whether, and under what circumstances, an artificial intelligence would deserve moral consideration. This paper explores a particular type of moral status that the author terms psychological moral patiency, focusing on the epistemological question of what sort of evidence might lead us reasonably to conclude that a given artificial system qualifies for this status. The paper surveys five possible criteria: intuitive judgments, assessments of intelligence, the presence of desires and autonomous behavior, evidence of sentience, and behavioral equivalence. The author suggests that, despite its limitations, the last of these offers the best way forward, and defends a variant of it, termed the cognitive equivalence strategy. In short, this holds that an artificial system should be considered a psychological moral patient to the extent that it possesses cognitive mechanisms shared with other beings, such as nonhuman animals, whom we also consider to be psychological moral patients.

Type: Response
Copyright: © The Author(s), 2021. Published by Cambridge University Press


Notes

1. For notable recent discussions of the topic, see: Basl J. Machines as moral patients we should not care about (yet): The interests and welfare of current machines. Philosophy & Technology 2014;27(1):79–96. doi:10.1007/s13347-013-0122-y; Gunkel DJ. The other question: Can and should robots have rights? Ethics and Information Technology 2018;20(2):87–99. doi:10.1007/s10676-017-9442-4; Coeckelbergh M. Why care about robots? Empathy, moral standing, and the language of suffering. Kairos Journal of Philosophy & Science 2018;20(1):141–58. doi:10.2478/kjps-2018-0007; Bryson JJ. Robots should be slaves. In: Wilks Y, ed. Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues. Amsterdam: John Benjamins Publishing; 2010; Schwitzgebel E, Garza M. A defense of the rights of artificial intelligences. Midwest Studies in Philosophy 2015;39(1):89–119. doi:10.1111/misp.12032; Tomasik B. Do artificial reinforcement-learning agents matter morally? arXiv preprint arXiv:1410.8233; 2014; Sparrow R. The Turing triage test. Ethics and Information Technology 2004;6(4):203–13. doi:10.1007/s10676-004-6491-2; Neely EL. Machines and the moral community. Philosophy & Technology 2014;27(1):97–111. doi:10.1007/s13347-013-0114-y; Danaher J. Welcoming robots into the moral circle: A defence of ethical behaviourism. Science and Engineering Ethics 2020;26(4):2023–2049. doi:10.1007/s11948-019-00119-x.

2. The question of how to analyze notions such as moral patiency and moral status has been discussed widely. For more detailed discussion of the general notion, see, for example, Korsgaard CM. The Sources of Normativity. Cambridge: Cambridge University Press; 1996. doi:10.1017/cbo9780511554476; Kamm FM. Intricate Ethics: Rights, Responsibilities, and Permissible Harm. Oxford, New York: Oxford University Press; 2007. doi:10.1093/acprof:oso/9780195189698.001.0001; McMahan J. The Ethics of Killing: Problems at the Margins of Life, 2001; available at https://philpapers.org/rec/MCMTEO-10 (last accessed 31 May 2019). For its use in relation to artificial beings, see note 1, Basl 2014; Bryson JJ. Patiency is not a virtue: The design of intelligent systems and systems of ethics. Ethics and Information Technology 2018;20(1):15–26. doi:10.1007/s10676-018-9448-6; Floridi L. The Ethics of Information. Oxford: Oxford University Press; 2013.

3. See note 1, Basl 2014.

4. Wallace DF. Consider the Lobster: And Other Essays. New York: Little, Brown; 2005.

5. Demertzi A, Schnakers C, Ledoux D, Chatelle C, Bruno M-A, Vanhaudenhuyse A, et al. Different beliefs about pain perception in the vegetative and minimally conscious states: A European survey of medical and paramedical professionals. Progress in Brain Research 2009;177:329–38. doi:10.1016/S0079-6123(09)17722-1.

6. Korsgaard CM. Two distinctions in goodness. The Philosophical Review 1983;92(2):169–95. doi:10.2307/2184924.

7. See note 1, Gunkel 2018.

8. See note 1, Coeckelbergh 2018.

9. For more detailed discussion of these positions and arguments against them, see note 1, Schwitzgebel and Garza 2015, and Danaher 2020.

10. Chalmers DJ. The singularity: A philosophical analysis. Journal of Consciousness Studies 2010;17(9–10):7–65.

11. For a recent vigorous defence of mind-brain identity theory and criticism of functionalism, see Polger TW, Shapiro LA. The Multiple Realization Book. Oxford: Oxford University Press; 2016. Note in particular their comments on AIs with psychological capacities equivalent to those of a human: “If cognitive digital computers are possible, then multiple realization is probably true and the identity theory is probably false… [however] the ambitions of artificial intelligence [do not] decide the question of multiple realization, although they surely amount to a wager on the outcome.”

12. Singer P. Speciesism and moral status. Metaphilosophy 2009;40(3–4):567–81. doi:10.1111/j.1467-9973.2009.01608.x.

13. For similar views about our capacity to directly perceive the mental states of other humans, see Dretske FI. Perception and other minds. Noûs 1973;7(1):34–44. doi:10.2307/2216182; McDowell JH. Meaning, Knowledge, and Reality. Cambridge, MA: Harvard University Press; 1998.

14. See note 1, Gunkel 2018.

15. Rosenthal-von der Pütten AM, Krämer NC, Hoffmann L, Sobieraj S, Eimler SC. An experimental study on emotional reactions towards a robot. International Journal of Social Robotics 2013;5(1):17–34. doi:10.1007/s12369-012-0173-8.

16. Suzuki Y, Galli L, Ikeda A, Itakura S, Kitazaki M. Measuring empathy for human and robot hand pain using electroencephalography. Scientific Reports 2015;5:15924.

17. See note 1, Bryson 2010.

18. Schwitzgebel E, Garza M. A defense of the rights of artificial intelligences. Midwest Studies in Philosophy 2015;39(1):89–119. doi:10.1111/misp.12032.

19. See note 1, Tomasik 2014.

20. Turing AM. Computing machinery and intelligence. Mind 1950;LIX(236):433–60. doi:10.1093/mind/lix.236.433.

21. See note 1, Sparrow 2004.

22. Note, for example, that the British Animals (Scientific Procedures) Act 1986, which previously protected all vertebrate animals, was extended by amendment in 1993 to include octopuses. This amendment was made not on the grounds of intelligence per se, but rather because the complex nervous system of the octopus created reasonable grounds for inferring that octopuses might feel pain.

23. See note 1, Sparrow 2004.

24. For a thorough discussion of this topic, see Wang P. On defining artificial intelligence. Journal of Artificial General Intelligence 2019;10(2):1–37. doi:10.2478/jagi-2019-0002.

25. Shevlin H, Vold K, Crosby M, Halina M. The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge. EMBO Reports 2019;20(10):e49177. doi:10.15252/embr.201949177.

26. Hernández-Orallo J. The Measure of All Minds: Evaluating Natural and Artificial Intelligence. Cambridge: Cambridge University Press; 2017.

27. See note 1, Neely 2014.

28. Dawkins MS. Animal welfare with and without consciousness. Journal of Zoology 2017;301(1):1–10. doi:10.1111/jzo.12434.

29. See note 28, Dawkins 2017.

30. See note 1, Basl 2014.

31. Singer P. Practical Ethics. Cambridge: Cambridge University Press; 2011.

32. It should be noted that Neely herself would likely endorse the adoption of extremely broad notions of autonomy and desire, acknowledging that her proposal involves “a large expansion to the moral community.” As suggested by Basl’s example of the maple trees, however, this faces the worry that any moral obligations consequent upon such expansive notions would have little practical moral relevance.

33. Barron AB, Klein C. What insects can tell us about the origins of consciousness. Proceedings of the National Academy of Sciences of the United States of America 2016;113:4900–4908. doi:10.1073/pnas.1520084113.

34. Key B. Fish do not feel pain and its implications for understanding phenomenal consciousness. Biology and Philosophy 2015;30(2):149–65. doi:10.1007/s10539-014-9469-4.

35. Carruthers P. Comparative psychology without consciousness. Consciousness and Cognition 2018;63:47–60. doi:10.1016/j.concog.2018.06.012.

36. See note 1, Gunkel 2018.

37. Owen AM, Coleman MR, Boly M, Davis MH, Laureys S, Pickard JD. Detecting awareness in the vegetative state. Science 2006;313(5792):1402. doi:10.1126/science.1130197.

38. Block N. Comparing the major theories of consciousness. In: Gazzaniga MS, ed. The Cognitive Neurosciences. Cambridge, MA: MIT Press; 2009, at 1111–22.

39. See note 1, Danaher 2020.

40. This is a broader worry for theories of moral patiency that operationalise the morally relevant psychological capacity in terms of some specific behaviour or information-processing capability. As Tomasik notes, “[w]hen we develop a simple metric for measuring something… we can game the system by constructing degenerate examples of systems exhibiting that property.” See note 1, Tomasik 2014.

41. See note 1, Danaher 2020.

42. It is possible that Danaher would agree on the importance of such forms of theoretical explanation, and would merely wish to reconstruct them in terms of expected outcomes on behavior. If so, there is perhaps little disagreement between us, although given that such reconstructions are at least in principle possible for most of the theoretical vocabulary of cognitive science (e.g., via Ramsey sentences, as suggested by Carnap R. Empiricism, semantics, and ontology. Revue Internationale de Philosophie 1950;4:20–40. doi:10.2307/23932367), it suggests that the term “ethical behaviourism” for his view is at least misleading.

43. Selbst AD, Barocas S. The Intuitive Appeal of Explainable Machines. Vol 87. Rochester, NY; 2018; available at https://papers.ssrn.com/abstract=3126971 (last accessed 12 November 2018).

44. Comparing the capabilities of artificial beings with those of nonhuman animals is an exciting and active area of research. Consider, for example, the recent Animal-AI Olympics (http://www.animalaiolympics.com/), which pitted AIs against a set of canonical tasks from animal cognition.

45. Ferrier D. The Functions of the Brain. New York: G.P. Putnam’s Sons; 1886.

46. Gentle MJ. Pain-related behaviour following sodium urate arthritis is expressed in decerebrate chickens. Physiology & Behavior 1997;62(3):581–84. doi:10.1016/S0031-9384(97)00164-9.

47. Nairne JS. The three “Ws” of episodic memory: What, when, and where. American Journal of Psychology 2015;128(2):267–79. doi:10.5406/amerjpsyc.128.2.0267.

48. See note 1, Tomasik 2014.

49. Shevlin H, Halina M. Apply rich psychological terms in AI with care. Nature Machine Intelligence 2019;1(4):165–67. doi:10.1038/s42256-019-0039-y.

50. See note 25, Shevlin et al. 2019.

51. Agar N. How to treat machines that might have minds. Philosophy & Technology 2020;33:269–82. doi:10.1007/s13347-019-00357-8.