
What Do We Owe to Novel Synthetic Beings and How Can We Be Sure?

Published online by Cambridge University Press: 10 June 2021

Alex McKeown*
Affiliation:
Department of Psychiatry, Wellcome Centre for Ethics and Humanities, University of Oxford, Warneford Hospital, Oxford OX3 7JX, United Kingdom
*Corresponding author. Email: [email protected]

Abstract

Embodiment is typically given insufficient weight in debates concerning the moral status of Novel Synthetic Beings (NSBs) such as sentient or sapient Artificial Intelligences (AIs). Discussion usually turns on whether AIs are conscious or self-aware, but this does not exhaust what is morally relevant. Since moral agency encompasses what a being wants to do, the means by which it enacts choices in the world is a feature of such agency. In determining the moral status of NSBs and our obligations to them, therefore, we must consider how their corporeality shapes their options, preferences, and values, and is constitutive of their moral universe. Analysing AI embodiment and the coupling between cognition and world, the paper shows why determination of moral status is only sensible in terms of the whole being, rather than mental sophistication alone, and why failure to do this leads to an impoverished account of our obligations to such NSBs.

Type: Commentary

Copyright: © The Author(s), 2021. Published by Cambridge University Press


Notes

1. Miller, HB. Science, ethics, and moral status. Between the Species 1993;10(10):10–8.

2. Steinbock, B. Moral status, moral value, and human embryos: Implications for stem cell research. In: Steinbock, B, ed. The Oxford Handbook of Bioethics. Oxford University Press; 2007:416–40.

3. Steinbock, B. Speciesism and the idea of equality. Philosophy 1978;53(204):247–56.

4. Liao, SM. The basis of human moral status. Journal of Moral Philosophy 2010;7(2):159–79.

5. See note 4, Liao 2010, at 161.

6. See note 1, Miller 1993.

7. Blasimme, A, Bortolotti, L. Intentionality and the welfare of minded non-humans. Teorema 2010;29(2):83–96.

8. The prima facie caveat here is significant in view of remarks elsewhere in this paper about the moral status of beings beneath the threshold of sentience, since I hold that our obligations to particular beings diminish the further downward one moves from this threshold.

9. Puryear, S. Sentience, rationality, and moral status: A further reply to Hsiao. Journal of Agricultural and Environmental Ethics 2016;29(4):697–704.

10. Dion, M. The moral status of non-human beings and their ecosystems. Ethics, Place & Environment 2000;3(2):221–9.

11. DeGrazia, D. Moral status as a matter of degree? The Southern Journal of Philosophy 2008;46(2):181–98.

12. Singer, P. Speciesism and moral status. Metaphilosophy 2009;40(3–4):567–81.

13. Singer, P. Why speciesism is wrong: A response to Kagan. Journal of Applied Philosophy 2016;33(1):31–5.

14. Bayne, T, Brainard, D, Byrne, R, Chittka, L, Clayton, N, Heyes, C, et al. What is cognition? Current Biology 2019;29(13):608–15.

15. See note 14, Bayne et al. 2019.

16. I anticipate the objection that cognition is not the only morally significant feature of mind. For example, if moral status is conferred partly by the capacity for suffering, then many creatures of limited mental sophistication have moral status, since suffering is characterized by affective as well as cognitive capacity. As with much of the analysis here regarding the moral status of animals, I do not have a perfect answer to this; my only, approximate, response is to adopt, for the purpose of my argument, the view that the moral status of animal species exists in a hierarchy calibrated by mental sophistication, such that moral status increases in line with cognitive capacity. Whether that view is ultimately correct is a legitimate question beyond the scope of this paper.

17. See note 11, DeGrazia 2008, at 195.

18. Dion (2000, p. 204) argues that humans do have obligations to beings such as plants in virtue of “their own species interests. In other words, an individual organism has an interest to be healthy, to grow, not for itself, but insofar as it is a member of a given species, whose interest must be promoted.” I dispute this, however. While it may be necessary for a species’ survival that it can flourish, this is not the same as saying that an individual organism has its own interests, given that the notion of “an interest” is a product of human psychology, projected onto other beings with variable cognitive capacity, in the case of animals, or an absence of it, in the case of organisms such as plants. Even if one does hold that plants, for example, have interests, I argue that no wrong is done to them by killing them, as they do not have the kind of internal mental experience according to which one could cause them suffering by curtailing their plans, not least because to talk of a plant having “plans” would make no sense.

19. See note 9, Puryear 2016.

20. I respond here to an objection raised in discussion by Mark Sheehan about whether obligations only begin from sentience upwards or whether we have duties beneath this threshold. For example, one could hold that it is wrong to vandalise a garden or an ancient tree for fun, even though the beings damaged or extinguished do not suffer. If this is plausible, it is because we have an obligation not to do it despite the absence of consciousness. This objection is reasonable but can be responded to in two ways. First, our obligation here may not be grounded by the moral status of the being, but grounded by us in the kind of norms that we wish to uphold; for example, a norm that it is wrong to engage in gratuitous destruction, because this is more likely to conduce to a society in which closer attention is paid to considerations of duty, to what and to whom. So, even if one were sceptical that a plant or a tree has an inner life, it is still valuable and important not to engage in their gratuitous destruction. Second, and relatedly, the objection does not undermine the legitimacy of the—admittedly indistinct—threshold of obligation I am drawing; rather, it shows that above this threshold we have different kinds of moral obligation to sentient or sapient NSBs, grounded not only in whether our actions reflect the kind of society in which we would like to live, or according to an ideal of the kind of person that I should be, but additionally located in the NSB by virtue of the presence of a consciousness that enables it to have interests, desires, plans, wishes, and so on.

21. Duffy, B, Joue, G. Intelligent robots: The question of embodiment. Proceedings of the Brain-Machine Workshop 2000:1–8, p. 3.

22. Ibáñez, A, Cosmelli, D. Moving beyond computational cognitivism: Understanding intentionality, intersubjectivity and ecology of mind. Integrative Psychological and Behavioural Science 2008;42(2):129–36.

23. Kolers, PA, Smythe, WE. Symbol manipulation: Alternatives to the computational view of mind. Journal of Verbal Learning and Verbal Behaviour 1984;23(3):289–314.

24. Ziemke, T. Disentangling notions of embodiment. Workshop on Developmental Embodied Cognition 2001:83–8.

25. Ziemke, T. The construction of “reality” in the robot: Constructivist perspectives on situated artificial intelligence and adaptive robotics. Foundations of Science 2001;6(1):163–233, p. 164.

26. Clark, A. Embodiment and the philosophy of mind. Royal Institute of Philosophy Supplement 2012;43:35–51, p. 35.

27. Smit, H, Hacker, PMS. Seven misconceptions about the mereological fallacy: A compilation for the perplexed. Erkenntnis 2014;79(5):1077–97.

28. Hacker, PMS. The relevance of Wittgenstein’s philosophy of psychology to the psychological sciences. Proceedings of the Leipzig Conference on Wittgenstein and Science 2007:1–23. https://www.pmshacker.co.uk/downloads.

29. Hacker, PMS. The conceptual framework for the investigation of emotions. International Review of Psychiatry 2004;16(3):199–208.

30. Bennett, MR, Hacker, PMS. On explaining and understanding cognitive behaviour. Australian Journal of Psychology 2015;67(4):241–50.

31. Vernon, D, Furlong, D. Philosophical foundations of AI. In: Lungarella, M, Iida, F, Bongard, J, Pfeifer, R, eds. 50 Years of Artificial Intelligence. Springer; 2007:53–63, p. 60.

32. Prem, E. Epistemological aspects of embodied artificial intelligence. Cybernetics and Systems 1997;28(5):3–9.

33. Clark, A. Embodied, situated, and distributed cognition. In: Bechtel, W, Graham, G, eds. A Companion to Cognitive Science. Wiley-Blackwell; 2017:506–17.

34. See note 32, Prem 1997, at 4.

35. See note 33, Clark 2017.

36. Chrisley, R. Embodied artificial intelligence. Artificial Intelligence 2003;149(1):131–50, p. 132.

37. Strawson, G. Real naturalism. Proceedings and Addresses of the American Philosophical Association 2012;86(2):125–54.

38. Strawson, G. Real intentionality. Phenomenology and the Cognitive Sciences 2004;3(3):287–313.

39. See note 37, Strawson 2012, at 126.

40. See note 37, Strawson 2012 and note 38, Strawson 2004.

41. Cowley, SJ. Why brains matter: An integrational perspective on the symbolic species. Language Sciences 2002;24(1):73–95.

42. Jonze, S. Her. 2013.

43. Liu, C. The Three-Body Problem. 2008.

44. See notes 27–30: Smit, Hacker 2014; Hacker 2007; Hacker 2004; Bennett, Hacker 2015.

45. See note 30, Bennett, Hacker 2015.

46. See note 27, Smit, Hacker 2014.

47. See note 28, Hacker 2007.

48. See note 27, Smit, Hacker 2014, at 1087.

49. I am not suggesting here that an AI could have a stomach ache or a sore throat, since an AI is likely to lack both organs. Rather, these are analogies for states of mind brought about in the NSB by its interactions with the world.

50. See note 26, Clark 2012, at 36.

51. See note 36, Chrisley 2003, at 132.

52. See note 36, Chrisley 2003, at 132.

53. See note 36, Chrisley 2003, at 132.

54. See notes 37 and 38, Strawson 2012 and Strawson 2004.

55. Pfeifer, R, Lungarella, M, Sporns, O, Kuniyoshi, Y. On the information theoretic implications of embodiment—principles and methods. In: Lungarella, M, Iida, F, Bongard, J, Pfeifer, R, eds. 50 Years of Artificial Intelligence. Springer; 2007:76–86, p. 81.

56. See note 24, Ziemke 2001, at 86.

57. Mataric, MJ. Studying the role of embodiment in cognition. Cybernetics and Systems 1997;28(6):457–70, p. 460.

58. Johnson, M. Ethics. In: Bechtel, W, Graham, G, eds. A Companion to Cognitive Science. Wiley-Blackwell; 2017:691–701, p. 693.

59. Dewey, J. Human Nature and Conduct. Dover Publications; 1922, at 204.Google Scholar

60. See note 33, Clark 2017, at 516.

61. There are (at least) two challenges to consider here. First, David Lawrence suggested that I have not gone far enough, insofar as it is not merely probable but certain that an AI would have plans involving its physical capabilities. Second, by contrast, in Superintelligence (2014), Bostrom expresses scepticism, in the case of superintelligent AI at least, that an AI’s plans would be scrutable to us, since this could be comparable to, for example, a beetle attempting to discern the intentions of a human; or, in spite of its intelligence, it might have very narrow technical goals that exclude much of what we recognize as important for human flourishing. I concede that I have no definite answer to either challenge beyond observation of humans and sentient animals, and I accept, in response to the second, the possibility that my judgement may be overly anthropomorphic. Nevertheless, since we have not yet encountered true AI, we have to start somewhere, so I suggest that this cautious heuristic is reasonable. Thanks also to Eddie Jacobs for a helpful revision of this point as it pertains to knowledge of the intentions of Sophons, and to David Lyreskog for related remarks offering a new interpretation of the reasons for the mutual inscrutability between Samantha and Theodore.