
Moral Status for Malware! The Difficulty of Defining Advanced Artificial Intelligence

Published online by Cambridge University Press: 10 June 2021

Abstract

The suggestion has been made that future advanced artificial intelligence (AI) that passes some consciousness-related criteria should be treated as having moral status, and that humans would therefore have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness, and argues against moral status for AI on the grounds that human malware authors may design malware to fake consciousness. Indeed, the author warns that malware authors have stronger incentives than authors of legitimate software to create code that passes some of the criteria. Thus, code that appears to be benign, but is in fact malware, might become the most common form of software to be treated as having moral status.

Type: Commentary
Copyright: © The Author(s), 2021. Published by Cambridge University Press


Notes

1. Munroe R. Robot future. XKCD 2018 Mar 16; available at https://xkcd.com/1968/ (last accessed 22 Apr 2019).

2. Bryson, JJ. Robots should be slaves. In: Wilks, Y, ed. Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues. Amsterdam: John Benjamins; 2010:63–74, at 66.

3. Koebler J. The man who broke Ticketmaster. Motherboard 2017 Feb 10; available at https://motherboard.vice.com/en_us/article/the-man-who-broke-ticketmaster (last accessed 22 Apr 2019).

4. Cisco Systems, Inc. Total global email & spam volume for March 2019. Talos IP & Domain Reputation Center; 2019 Apr 21; available at https://www.talosintelligence.com/reputation_center/email_rep (last accessed 22 Apr 2019).

5. Bessen J. The evidence is in: Patent trolls do hurt innovation. Harvard Business Review 2014; available at https://hbr.org/2014/07/the-evidence-is-in-patent-trolls-do-hurt-innovation (last accessed 22 Apr 2019).

6. Calo R. Keynote address, singularity: AI and the law. Seattle University Law Review 2018;41:1123–38, at 1125.

7. Nagel, T. What is it like to be a bat? Philosophical Review 1974;83(4):435–50.

8. For instance, Schwitzgebel, E, Garza, M. A defense of the rights of artificial intelligences. Midwest Studies in Philosophy 2015;39:98–119. More examples are given later in the article.

9. Torrance, S. Artificial consciousness and artificial ethics: Between realism and social-relationism. Philosophy & Technology 2014;27(1):9–29.

10. Coeckelbergh, M. Why care about robots? Empathy, moral standing, and the language of suffering. Kairos Journal of Philosophy & Science 2018;20:143–58.

11. Neely, EL. Machines and the moral community. Philosophy & Technology 2014;27(1):97–111, at 107.

12. For example, see Spatola, N, Urbanska, K. Conscious machines: Robot rights. Science 2018;359(6377):400.

13. Winfield A. The rights of robot. Alan Winfield’s Web Log; 2007 Feb 13; available at http://alanwinfield.blogspot.com/2007/02/rights-of-robot.html (last accessed 22 Apr 2019).

14. Robertson, J. Human rights vs robot rights: Forecasts from Japan. Critical Asian Studies 2014;46(4):571–98; available at https://www.tandfonline.com/doi/full/10.1080/14672715.2014.960707 (last accessed 8 Sept 2019).

15. European Parliament. European Parliament resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)); 2017 Feb 16, paragraph 59f; available at http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2f%2fEP%2f%2fTEXT%2bREPORT%2bA8-2017-0005%2b0%2bDOC%2bXML%2bV0%2f%2fEN&language=EN (last accessed 22 Apr 2019).

16. Pavlus J. Curious about consciousness? Ask the self-aware machines. Quanta Magazine 2019 July 11; available at https://www.quantamagazine.org/hod-lipson-is-building-self-aware-robots-20190711/ (last accessed 31 Aug 2019).

17. Gallup, GG Jr. Chimpanzees: Self-recognition. Science 1970;167:86–7.

18. Levy, D. The ethical treatment of conscious robots. International Journal of Social Robotics 2009;1(3):209–16.

19. See note 11, Neely 2014. Neely specifies, however, that for goals to be determined by the agent, the goals cannot simply be chosen by following a program. It is not clear whether goals that a machine-learning agent generates over time through interaction with its environment, starting from its initial programming, would count as agent-determined. Another possible objection is that since no machine-learning agent can change its objective function, no such agent can change its fundamental goal of optimizing that function; it can only change subgoals. However, it is unclear to what extent humans are able to change their own fundamental drives.
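To make the distinction concrete, here is a minimal, hypothetical Python sketch (not from the article, and not any cited system; the agent, environment, and reward are invented for illustration): the agent's learned action preferences, standing in for subgoals, emerge from interaction with its environment, while its objective function is fixed by the programmer. Whether such emergent preferences would count as goals determined by the agent in Neely's sense is exactly the open question raised above.

```python
import random

# Hypothetical toy agent, invented for illustration only. The objective
# function below is fixed by the programmer and never modified by the agent.

def objective(total_reward):
    # The agent's fundamental goal: maximize accumulated reward.
    return total_reward

class ToyAgent:
    def __init__(self, actions):
        self.actions = actions
        # Learned action preferences play the role of subgoals: they start
        # from the initial programming (all zero) and change with experience.
        self.value = {a: 0.0 for a in actions}

    def act(self):
        # Explore occasionally; otherwise exploit the current preference.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward, lr=0.5):
        # Subgoals shift through interaction with the environment...
        self.value[action] += lr * (reward - self.value[action])
        # ...but the objective (maximize reward) remains as programmed.

def environment(action):
    # Hypothetical environment in which action "b" usually pays off.
    return 1.0 if action == "b" and random.random() < 0.8 else 0.0

agent = ToyAgent(["a", "b"])
total = 0.0
for _ in range(200):
    choice = agent.act()
    reward = environment(choice)
    agent.learn(choice, reward)
    total += reward

print("objective:", objective(total), "learned preferences:", agent.value)
```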

20. Chalmers, DJ. The puzzle of conscious experience. Scientific American 1995;273(6):80–6.

21. Tononi, G. Consciousness as integrated information: A provisional manifesto. Biological Bulletin 2008;215:216–42, at 216.

22. Minsky, M. Why people think computers cannot think. AI Magazine 1982;3(4):3–15.

23. Lovelace, A. Notes on L. Menabrea’s “sketch of the analytical engine invented by Charles Babbage, Esq.” Scientific Memoirs 1843;3:666–731.

24. Torrance, S. Ethics and consciousness in artificial agents. AI & Society 2008;22(4):495–521.

25. Turing, A. Computing machinery and intelligence. Mind 1950;59(236):433–60, at 452.

26. Pinker, S. Can a computer be conscious? U.S. News & World Report 1997;123(7):63–5, at 63.

27. Sparrow, R. The Turing triage test. Ethics and Information Technology 2004;6(4):203–13.

28. Morley S, Lawrence D. Written evidence (AIC0036). House of Lords Select Committee Call for Evidence on Artificial Intelligence; 2017 Aug 30, paragraph 6; available at http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69435.html (last accessed 22 Apr 2019). This passage was first published in Lawrence, D. More human than human. Cambridge Quarterly of Healthcare Ethics 2017;26(3):476–90.

29. Duncan B. 2017-01-17—EITEST Rig-V from 92.53.127.86 sends Spora malware. Malware-Traffic-Analysis.Net Blog; 2017 Jan 17; available at http://malware-traffic-analysis.net/2017/01/17/index2.html (last accessed 22 Apr 2019).

30. Zeng, Y, Zhao, Y, Bai, J. Towards robot self-consciousness (I): Brain-inspired robot mirror neuron system model and its application in mirror self-recognition. In: Liu, CL, Hussain, A, Luo, B, Tan, K, Zeng, Y, Zhang, Z, eds. Advances in Brain Inspired Cognitive Systems. LNAI 10023. Cham: Springer Nature; 2016:11–21.

31. Ray V, Duncan B. Compromised servers & fraud accounts: Recent Hancitor attacks. Palo Alto Networks Blog; 2018 Feb 7; available at https://unit42.paloaltonetworks.com/unit42-compromised-servers-fraud-accounts-recent-hancitor-attacks/ (last accessed 22 Apr 2019).

32. Bader J. The DGAs of Necurs. Johannes Bader’s Blog; 2015 Feb 20; available at https://www.johannesbader.ch/2015/02/the-dgas-of-necurs/ (last accessed 22 Apr 2019).

33. Schultz J. The many tentacles of the Necurs botnet. Cisco Blog; 2018 Jan 18; available at https://blogs.cisco.com/security/talos/the-many-tentacles-of-the-necurs-botnet (last accessed 22 Apr 2019).

34. Cimpanu C. You can now rent a Mirai botnet of 400,000 bots. Bleeping Computer; 2016 Nov 24; available at https://www.bleepingcomputer.com/news/security/you-can-now-rent-a-mirai-botnet-of-400-000-bots/ (last accessed 22 Apr 2019).

35. “MonkeyCee,” Re: Deepfakes. El Reg Forums Post; 2019 Jan 13; available at https://forums.theregister.co.uk/forum/all/2019/01/13/ai_roundup/ (last accessed 22 Apr 2019).

36. Norton S. Era of AI-powered cyberattacks has started. Wall Street Journal, CIO Journal Blog; 2017 Nov 15; available at https://blogs.wsj.com/cio/2017/11/15/artificial-intelligence-transforms-hacker-arsenal/ (last accessed 31 Aug 2019).

37. Gardiner, J, Nagaraja, S. On the security of machine learning in malware C&C detection. ACM Computing Surveys 2016;49(3):1–39.

38. SophosLabs. Surge in Sinowal distribution. Naked Security; 2009 July 12; available at https://nakedsecurity.sophos.com/2009/07/12/surge-sinowal-distribution/ (last accessed 22 Apr 2019).

39. Lindorfer, M. Detecting environment-sensitive malware [Master’s thesis]. Vienna: Vienna University of Technology; 2011.

40. Volkov DA, inventor; Trust Ltd., assignee. Method of and system for analysis of interaction patterns of malware with control centers for detection of cyberattack. United States patent application US20180012021A1. 2018, paragraph 0064.

41. Sattler J. What we have learned from 10 years of the Conficker mystery. F-Secure Blog; 2019 Jan 8; available at https://blog.f-secure.com/what-weve-learned-from-10-years-of-the-conficker-mystery/ (last accessed 22 Apr 2019).

42. Goretsky A. 1000 days of Conficker. WeLiveSecurity Blog; 2011 Aug 17; available at https://www.welivesecurity.com/2011/08/17/1000-days-of-conficker/ (last accessed 22 Apr 2019).

43. Worswick S. @mitsukuchatbot Tweet; 2017 Dec 21; available at https://twitter.com/MitsukuChatbot/status/943957821774815232 (last accessed 22 Apr 2019).

44. Worswick S. @mitsukuchatbot Tweet; 2018 Jan 23; available at https://twitter.com/MitsukuChatbot/status/955928580034310144 (last accessed 22 Apr 2019).

45. Narang S. Tinder: Spammers flirt with popular mobile dating app. Symantec Blog; 2013 July 1; available at https://www.symantec.com/connect/blogs/tinder-spammers-flirt-popular-mobile-dating-app (last accessed 22 Apr 2019).

46. Newitz A. Almost none of the women in the Ashley Madison database ever used the site [updated]. Gizmodo; 2015 Aug 26; available at https://gizmodo.com/almost-none-of-the-women-in-the-ashley-madison-database-1725558944 (last accessed 22 Apr 2019).

47. Epstein, R. From Russia, with love. Scientific American Mind 2007;18(5):16–7.

48. Garreau J. Bots on the ground. Washington Post 2007 May 6; available at http://www.washingtonpost.com/wp-dyn/content/article/2007/05/05/AR2007050501009.html (last accessed 22 Apr 2019).

49. Whitty, M, Buchanan, T. The online dating romance scam: The psychological impact on victims—both financial and non-financial. Criminology & Criminal Justice 2016;16(2):176–94, at 182–3.

50. Pfeiffer, UJ, Timmermans, B, Bente, G, Vogeley, K, Schilbach, L. A non-verbal Turing test: Differentiating mind from machine in gaze-based social interaction. PLoS One 2011;6(11):e27591.

51. Riek, LD, Rabinowitch, T-C, Chakrabarti, B, Robinson, P. Empathizing with robots: Fellow feeling along the anthropomorphic spectrum. In: Cohn, J, Nijholt, A, Pantic, M, eds. Proceedings of the 3rd International Conference on Affective Computing and Intelligent Interaction. Amsterdam: IEEE; 2009:43–8.

52. Bartneck, C, Bleeker, T, Bun, J, Fens, P, Riet, L. The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Paladyn: Journal of Behavioural Robotics 2010;1(2):109–15.

53. Nass, CI, Moon, Y. Machines and mindlessness: Social responses to computers. Journal of Social Issues 2000;56(1):81–103.

54. Briggs, G, Scheutz, M. How robots can affect human behaviour: Investigating the effects of robotic displays of protest and distress. International Journal of Social Robotics 2014;6(2):1–13, at 7.

55. Proudfoot, D. Anthropomorphism and AI: Turing’s much misunderstood imitation game. Artificial Intelligence 2011;175(5–6):950–7.

56. Sung, J-Y, Guo, L, Grinter, RE, Christensen, HI. “My Roomba is Rambo”: Intimate home appliances. In: Krumm, J, Abowd, GD, Seneviratne, A, Strang, T, eds. UbiComp 2007. LNCS 4717. Berlin: Springer; 2007:145–62, at 150, 152, and 154.

57. See note 15, European Parliament 2017: Licence for designers.

58. Stewart J. Ready for the robot revolution? BBC News 2011 Oct 3; available at https://www.bbc.co.uk/news/technology-15146053 (last accessed 22 Apr 2019).

59. Mowbray, M. Ethics for bots. In: Smit, I, Lasker, GE, eds. Cognitive, Emotive and Ethical Aspects of Decision Making and Human Action. Baden-Baden: IIAS; 2002;1:24–8; available at https://www.hpl.hp.com/techreports/2002/HPL-2002-48R1.pdf (last accessed 22 Apr 2019).

60. Matwyshyn, AM. The Internet of Bodies. William & Mary Law Review 2019;61(1):77–167.

61. See, for example, Li, J, de Avila, BE, Gao, W, Zhang, L, Wang, J. Micro/nanorobots for biomedicine: Delivery, surgery, sensing and detoxification. Science Robotics 2017;2:1–9.

62. See, for example, Kahan, M, Gil, B, Adar, R, Shapiro, E. Towards molecular computers that operate in a biological environment. Physica D: Nonlinear Phenomena 2008;237(9):1165–72.