
Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible

Published online by Cambridge University Press: 10 June 2021

Daniel W. Tigard*
Affiliation:
Institute for History and Ethics of Medicine, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
*Corresponding author. Email: [email protected]

Abstract

Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of—and outlines a plausible foundation for—a workable notion of artificial moral responsibility.

Type
Articles
Copyright
© The Author(s), 2021. Published by Cambridge University Press


References

Notes

1. For the archived story of the family’s settlement, see: https://www.nytimes.com/1983/08/11/us/around-the-nation-jury-awards-10-million-in-killing-by-robot.html (last accessed 4 Dec 2019).

2. Matthias, A. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 2004;6:175–183.

3. See Sharkey, N. Saying “no!” to lethal autonomous targeting. Journal of Military Ethics 2010;9:369–383; Asaro, P. On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross 2012;94:687–709; Wagner, M. Taking humans out of the loop: Implications for international humanitarian law. Journal of Law, Information & Science 2012;21:155–165.

4. Sparrow, R. Killer robots. Journal of Applied Philosophy 2007;24:62–77.

5. See Char, DS, Shah, NH, Magnus, D. Implementing machine learning in healthcare – addressing ethical challenges. New England Journal of Medicine 2018;378:981–983; Sharkey, A. Should we welcome robot teachers? Ethics and Information Technology 2016;18:283–297; Van Wynsberghe, A. Service robots, care ethics, and design. Ethics and Information Technology 2016;18:311–321; Himmelreich, J. Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory and Moral Practice 2018;21:669–684.

6. Allen, C, Wallach, W. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press; 2009; Allen, C, Wallach, W. Moral machines: Contradiction in terms or abdication of human responsibility? In: Lin, P, Abney, K, Bekey, GA, eds. Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA: MIT Press; 2011:55–68.

7. This distinction will be discussed below. For a helpful breakdown of moral subjects, moral agency, and morally responsible agency, see the opening chapter of McKenna, M. Conversation and Responsibility. New York: Oxford University Press; 2012.

8. See Book III of Aristotle’s Nicomachean Ethics.

9. See Mele, A. Agency and mental action. Philosophical Perspectives 1997;11:231–249.

10. These have recently been dubbed contrastive or “instead of” reasons. See Dorsey, D. Consequentialism, cognitive limitations, and moral theory. In: Timmons, M, ed. Oxford Studies in Normative Ethics 3. Oxford: Oxford University Press; 2013:179–202; Shoemaker, D. Responsibility from the Margins. New York: Oxford University Press; 2015.

11. That is, assuming one’s desires are consistent and not overruled by second-order desires. See Frankfurt, H. Freedom of the will and the concept of a person. Journal of Philosophy 1971;68:5–20.

12. See Himma, K. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology 2009;11:19–29.

13. See note 7, McKenna 2012, at 11.

14. This condition may be diagnosed as an antisocial personality disorder, such as psychopathy. See the American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-5). Washington, DC; 2013.

15. In this way, those who plead ignorance are attempting to eschew responsibility by dissolving their agency. The common reply—“you should have known”—is, then, a way of restoring agency and proceeding with blame. See Biebel, N. Epistemic justification and the ignorance excuse. Philosophical Studies 2018;175:3005–3028.

16. According to recent work on implicit biases, it seems very few of us are moral agents in the robust sense outlined here. See, e.g., Doris, J. Talking to Our Selves: Reflection, Ignorance, and Agency. Oxford University Press; 2015; Levy, N. Implicit bias and moral responsibility: Probing the data. Philosophy and Phenomenological Research 2017;94:3–26; Vargas, M. Implicit bias, responsibility, and moral ecology. In: Shoemaker, D, ed. Oxford Studies in Agency and Responsibility 4. Oxford University Press; 2017.

17. Much of Frans de Waal’s work supports this idea; e.g., Preston, S, de Waal, F. Empathy: its ultimate and proximate bases. Behavioral and Brain Sciences 2002;25:1–20.

18. See note 6, Allen and Wallach 2009, at 4.

19. In Asimov’s “A Boy’s Best Friend,” for example, the child of a family settled on a future lunar colony cares more for his robotic canine companion than for a real-life dog. Thanks to Nathan Emmerich for the pointer.

20. Gunkel, D. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, MA: MIT Press; 2012.

21. Here I have in mind the Kantian idea that we have indirect duties to non-human animals on the grounds that cruelty towards them translates to cruelty towards humans. See Kant’s Lectures on Ethics 27:459. For related discussion on our treatment of artifacts, see Parthemore, J, Whitby, B. Moral agency, moral responsibility, and artifacts: What existing artifacts fail to achieve (and why), and why they, nevertheless, can (and do!) make moral claims upon us. International Journal of Machine Consciousness 2014;6:141–161.

22. See note 6, Allen and Wallach 2009, at 68.

23. Ibid., at 25–26. See also Nyholm, S. Humans and Robots: Ethics, Agency, and Anthropomorphism. Rowman & Littlefield; 2020.

24. In some ways, I’ve so far echoed the expansion of agency seen in Floridi, L, Sanders, JW. On the morality of artificial agents. Minds and Machines 2004;14:349–379. Differences will emerge, however, as my focus turns to various ways of holding others responsible, rather than expanding agency to encompass artificial entities. Similarities can also be drawn to Coeckelbergh, M. Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. AI & Society 2009;24:181–189. Still, my account will rely less on AMAs’ appearance and more on human attitudes and interactions within the moral community.

25. See Bastone, N. Google assistant now has a ‘pretty please’ feature to help everybody be more polite. Business Insider 2018 Dec 1; available at: https://www.businessinsider.co.za/google-assistant-pretty-please-now-available-2018-11 (last accessed 12 Mar 2019).

26. See, e.g., Coninx, A. Towards long-term social child-robot interaction: Using multi-activity switching to engage young users. Journal of Human-Robot Interaction 2016;5:32–67.

27. For the same reasons, it may be beneficial to design some AI and robotic systems with a degree of ‘social responsiveness.’ See Tigard, D, Conradie, N, Nagel, S. Socially responsive technologies: Toward a co-developmental path. AI & Society 2020;35:885–893.

28. Strawson, PF. Freedom and resentment. Proceedings of the British Academy 1962;48:1–25.

29. Ibid., at 5.

30. For more on our “responsibility responses” and their various targets, see Shoemaker, D. Qualities of will. Social Philosophy and Policy 2013;30:95–120.

31. See Tognazzini, N. Blameworthiness and the affective account of blame. Philosophia 2013;41:1299–1312; also Shoemaker, D. Response-dependent responsibility; or, a funny thing happened on the way to blame. Philosophical Review 2017;126:481–527.

32. See note 7, McKenna 2012; also Fricker, M. What’s the point of blame? A paradigm based explanation. Noûs 2016;50:165–183.

33. See note 10, Shoemaker 2015, at 19–20.

34. The advantages sketched here are persuasively harnessed by the notion of rational sentimentalism, notably in D’Arms, J, Jacobson, D. Sentiment and value. Ethics 2000;110:722–748; D’Arms, J, Jacobson, D. Anthropocentric constraints on human value. In: Shafer-Landau, R, ed. Oxford Studies in Metaethics 1. Oxford: Oxford University Press; 2006:99–126.

35. See note 5 for similar arguments in healthcare, education, and transportation.

36. See note 4, Sparrow 2007, at 65.

37. Ibid., at 66; italics added.

38. Ibid., at 69; italics added. Comparable inconsistencies are seen in Floridi and Sanders 2004 (note 24).

39. Ibid.

40. Ibid., at 71; italics added.

41. See, e.g., Wolf, M, Miller, K, Grodzinsky, F. Why we should have seen that coming: Comments on Microsoft’s Tay “experiment” and wider implications. ACM SIGCAS Computers and Society 2017;47:54–64.

42. For discussion of promising medical uses, see Jiang, F, et al. Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology 2017;2:230–243.

43. For medical errors, and even unavoidable harms, blame should often be taken by attending practitioners. See Tigard, D. Taking the blame: Appropriate responses to medical error. Journal of Medical Ethics 2019;45:101–105.

44. Champagne, M, Tonkens, R. Bridging the responsibility gap in automated warfare. Philosophy and Technology 2015;28:125–137. See also Johnson, DG. Technology with no human responsibility? Journal of Business Ethics 2015;127:707–715. My account will be consistent with Johnson’s view that the “responsibility gap depends on human choices.” However, while Johnson focuses on the design choices in technology itself, the choices that occupy my attention concern how and where we direct our responsibility practices. I’m grateful to an anonymous reviewer for comments here.

45. See note 4, Sparrow 2007, at 71.

46. Ibid., at 72.

47. Ibid.

48. As an anonymous reviewer aptly notes, we also punish corporations (e.g. by imposing fines) despite the implausibility of such entities displaying the right sort of response. By contrast, consequential accounts of punishment can be seen as inadequate depictions of moral blame, since they don’t fully explain our attitudes and might not properly distinguish wrongdoers from others. See Wallace, RJ. Responsibility and the Moral Sentiments. Harvard University Press; 1994:52–62. I’m grateful to Sven Nyholm for discussion here.

49. Proponents of the ‘process view’ applied to technology can be said to include Johnson, DG, Miller, KW. Un-making artificial moral agents. Ethics and Information Technology 2008;10:123–133. Despite some similarities to this work, my account does not fit neatly into Johnson and Miller’s Computational Modelers or Computers-in-Society group.

50. See Watson, G. Agency and Answerability. Oxford: Oxford University Press; 2004:260–288.

51. See note 50, Watson 2004, at 274.

52. See note 10, Shoemaker 2015, at 57.

53. Exemptions are contrasted with excuses (and justifications). See, e.g., note 50, Watson 2004, at 224–225.

54. See note 10, Shoemaker 2015, at 146–182.

55. However, these sorts of sanctioning mechanisms are less likely to succeed where the target AI system has surpassed humans in general intelligence. See the discussion of ‘incentive methods’ for controlling AI, in Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press; 2014:160–163.

56. Such ‘bottom-up’ moral development in AI is discussed in Allen and Wallach 2009 (note 6). Compare also Hellström, T. On the moral responsibility of military robots. Ethics and Information Technology 2013;15:99–107. Again, for some (e.g. Wallace 1994, in note 48), consequential accounts of responsibility will be unsatisfying. While a fuller discussion isn’t possible here, in short, my goal has been to unearth general mechanisms for holding diverse objects responsible, which admittedly will deviate from the robust sorts of responsibility (and justifications) we ascribe to natural moral agents. Again, I’m here indebted to Sven Nyholm.

57. See, e.g., Ren, F. Affective information processing and recognizing human emotion. Electronic Notes in Theoretical Computer Science 2009;225:39–50. Consider also recent work on Amazon’s Alexa, e.g., in Knight, W. Amazon working on making Alexa recognize your emotions. MIT Technology Review 2016.

58. Similarly, Helen Nissenbaum suggests that although accountability is often undermined by computing, we can and should restore it, namely by promoting an ‘explicit standard of care’ and imposing ‘strict liability and producer responsibility.’ Nissenbaum, H. Computing and accountability. Communications of the ACM 1994;37:72–80; Nissenbaum, H. Accountability in a computerized society. Science and Engineering Ethics 1996;2:25–42. I thank an anonymous reviewer for connecting my account with Nissenbaum’s early work.

59. See Smith, PT. Just research into killer robots. Ethics and Information Technology 2019;21:281–293.

60. See Combs, TS, et al. Automated vehicles and pedestrian safety: exploring the promise and limits of pedestrian detection. American Journal of Preventive Medicine 2019;56:1–7.

61. John Danaher likewise frames the problem in terms of trade-offs, namely increases in efficiency and perhaps well-being, but at the cost of human participation and comprehension. See Danaher, J. The threat of algocracy: reality, resistance and accommodation. Philosophy and Technology 2016;29:245–268; also Danaher, J. Robots, law and the retribution gap. Ethics and Information Technology 2016;18:299–309.

62. In a follow-up paper, I explain further how pluralistic conceptions of responsibility can address the alleged gap created by emerging technologies. See Tigard, D. There is no techno-responsibility gap. Philosophy and Technology 2020; available at: https://doi.org/10.1007/s13347-020-00414-7.