This chapter is about ethical distinctions. Clarity in ethical thought depends on the clarity of the distinctions we make when discussing ethical issues, and achieving clarity and consistency in ethical behavior requires understanding some basic distinctions.
In an essay about his science fiction, Isaac Asimov reflected that “it became very common … to picture robots as dangerous devices that invariably destroyed their creators.” He rejected this view and formulated the “laws of robotics,” aimed at ensuring the safety and benevolence of robotic systems. Asimov’s stories about the relationship between people and robots were only a few years old when the phrase “artificial intelligence” (AI) was used for the first time in a 1955 proposal for a study on using computers to “solve kinds of problems now reserved for humans.” Over the half-century since that study, AI has matured into sub-disciplines that have yielded a constellation of methods that enable perception, learning, reasoning, and natural language understanding.
The development and popularity of computer-mediated communication (CMC), social network sites (SNSs), and social media communication (SMC) sparked twenty-first-century ethical dilemmas (Patching & Hirst, 2014; Barnes, 2003). At the heart of social media ethical concerns are data privacy and ownership. The fallout from the Cambridge Analytica data breach on Facebook, which followed a 2012 class action settlement over the Beacon program, offers clear evidence that lack of user consent over the gathering and dissemination of information is a long-standing problem (Terelli, Jr. & Splichal, 2018). Facebook appears to have made the problem worse by allowing outside, third-party applications (“apps”) to access user data and by granting users’ friends the ability to further weaken privacy (Stratton, 2014).
Twenty-first-century innovations in technical fields, designed for human consumption and ultimately becoming daily-life necessities, such as personal robots, intelligent implants, driverless cars, and drones, require matching innovations in ethical standards, laws, and rules of ethics. Ethical issues around robots and artificial intelligence (AI), for example, present a new set of challenges arising from the new capabilities these technologies afford, capabilities that outpace law, policy, and ethics. Tesla and SpaceX CEO Elon Musk recently warned the governors of the United States that “robots will do everything better than us” and that “AI is a fundamental existential risk for human civilization.” He called for proactive government regulation of AI: “I think by the time we are reactive in AI regulation, it’s too late” (Domonoske, 2017).
The concerns and corporate practice of business ethics have evolved over the past sixty years, but none of the changes of the past are as great as those that will occur in the next ten years as artificial intelligence (AI) and machine learning become ubiquitous tools in American society. This chapter presents a concise history of corporate attention to business ethics over this period in order to identify how “next-generation business ethics” will demonstrate both continuity with and divergence from that past attention.
In recent times, both journalism and the question of who counts as a journalist have undergone significant change. With the growth of the internet, and the subsequent ability of anyone with a smartphone camera and a web connection to publish, the business model of journalism that had remained stable for decades has been declared broken and the public service model of journalism placed under threat. Meanwhile, a US president communicates via Twitter, and Facebook Live spreads news while the mainstream media scramble to keep up.
Some of the significant features of our era include the design of large-scale systems; advances in medicine, manufacturing, and artificial intelligence (AI); the role of social media in influencing behavior and toppling governments; and the surge of online transactions that are replacing face-to-face human interactions. Most of these features have resulted from advances in technology. While spanning a variety of disciplines, these features also have two important aspects in common: the necessity for sound decision-making about the technology that is evolving, and the need to understand the ethical implications of these decisions for all stakeholders.
Numerous engineering projects create products and services that are important to society; many have explicit safety implications; some are distinguished by explicitly supporting national security. Failures and deficiencies that might be considered “routine” in some settings can in these cases directly cause injuries and lost lives, in addition to harming national security. In such a setting, decisions regarding quality, testing, reliability, and other “engineering” matters can become ethical decisions, where balancing cost and delivery schedule, for example, against marginal risks and qualities is not a sufficient basis for a decision. When operating in the context of an engineering project with such important societal implications, established engineering processes must therefore be supplemented with additional considerations and decision factors. In this chapter, long-time defense contractor executive and US National Academy of Engineering member Neil Siegel discusses specific examples of ways in which these ethical considerations manifest themselves. The chapter starts with his thesis, asserting that bad engineering risks transitioning into bad ethics under certain circumstances, which are described in the chapter. It then uses a story from the NASA manned space program to illustrate the thesis; unlike some such stories, this one has a “happy ending.” The author then moves to the main aspects of the chapter, starting by explaining the behavioral, evolutionary, and situational factors that can tempt engineers into unethical behavior: how do engineers get into situations of ethical lapse? No one enters a career in engineering intending to put lives and missions at risk through ethical lapses; at the very least, this is not the path to promotion and positive career recognition. With the basis for such behavior established, the author then defines what he calls the characteristics of modern systems that create risk of ethical lapse; he identifies five specific traits of modern societal systems – systems of the sort that today’s engineers are likely to be engaged in building – as those that can allow people to slip from bad engineering into bad ethics. These characteristics are then illustrated with examples from everyday engineering situations, such as working to ensure the reliability of the electric power grid and designing today’s automobiles. The very complexity and richness of features that distinguish many of today’s products and critical societal systems are shown to become a channel through which bad engineering can transition into bad ethics. Lastly, the chapter discusses some of the author’s ideas about how to correct these situations and guard against these temptations.
Over the last decade, I have served as the Dean of Religious Life at the University of Southern California (USC), where I oversee more than ninety student religious groups and more than fifty campus chaplains, collectively representing all the world’s great religious traditions and many humanist, spiritual, and denominational perspectives as well. I also have the great privilege to do this work on a campus with more international students than almost any other university in the United States, in the heart of Los Angeles, the most religiously diverse city in human history (Loskota, 2015). As a result, the opportunities to think deeply about geo-religious diversity, interfaith engagement, and global ethics are unparalleled at USC (Mayhew, Rockenbach, & Bowman, 2016).
The past few years have seen a remarkable amount of attention on the long-term future of artificial intelligence (AI). Icons of science and technology such as Stephen Hawking (Cellan-Jones, 2014), Elon Musk (Musk, 2014), and Bill Gates (Gates, 2015) have expressed concern that superintelligent AI may wipe out humanity in the long run. Stuart Russell, coauthor of the most-cited textbook of AI (Russell & Norvig, 2003), recently began prolifically advocating (Dafoe & Russell, 2016) for the field of AI to take this possibility seriously. AI conferences now frequently have panels and workshops on the topic. There has been an outpouring of support from many leading AI researchers for an open letter calling for greatly increased research dedicated to ensuring that increasingly capable AI remains “robust and beneficial,” and gradually a field of “AI safety” is coming into being (Pistono & Yampolskiy, 2016; Yampolskiy, 2016, 2018; Yampolskiy & Spellchecker, 2016). Why all this attention?
This chapter is a “case study,” that is, a collection of facts organized into a story (the case) analyzed to yield one or more lessons (the study). Collecting facts is always a problem. There is no end of facts. Even a small event in the distant past may yield a surprise or two if one looks carefully enough. But the problem of collecting facts is especially severe when the facts change almost daily as the story “unfolds” in the news. One must either stop collecting on some arbitrarily chosen day or go on collecting indefinitely. I stopped collecting on October 3, 2016 (the day on which I first passed this chapter to the editor of this volume). There is undoubtedly much to be learned from the facts uncovered since then, but this chapter leaves to others the collecting and analyzing of those newer facts. The story I tell is good enough for the use I make of it here – and for future generations to consider. Increasingly, whistleblowing is being understood to be part of the professional responsibilities of an engineer.
This chapter presents reflections on next-generation ethical issues by four deans at the University of Southern California: Public Policy, Medicine, Business, and Engineering. Each of the deans was asked to reflect on some of the important ethical issues that they believe we face today or that we will face in the near future. Their responses follow.
The way people work in teams is changing. The changes are affecting what work teams look like and how those teams function. In years past, people worked for the same organizations for many years, perhaps even their whole careers (see Sullivan, 1999, for a review). Because their colleagues also stayed in the same organizations for many years, they were likely to work on teams that had relatively stable memberships. This has changed. People now switch employers more frequently, and they change roles within organizations more often (Miles & Snow, 1996; Rousseau & Wade-Benzoni, 1995). They are also more likely to work as independent contractors rather than as employees of a company, and to seek to develop a “boundaryless career,” defined as “a sequence of job opportunities that go beyond the boundaries of a single employment setting” (DeFillippi & Arthur, 1996, p. 116).
The study of cyberethics represents an evolution of computer ethics. When the computer first appeared it was seen as a “revolutionary machine,” because of the scale of its activities and its capability to “solve” certain problems with the help of sophisticated software. Attention was soon focused on the disruptive potential of databases, vexing questions about software ownership, and the “hacker ethic.” Traditional moral concepts and values such as responsibility, privacy, and freedom had to be creatively adapted to this new reality (Johnson & Nissenbaum, 1995).
Engineers operate under constraints and obligations established by codes of ethics or professional responsibility maintained by the professional organizations to which they belong and by state government authorities, and they at times view these constraints and obligations as limitations or barriers. It is important to recognize, however, that these codes can also work to the benefit of the engineers governed by their terms. The codes of ethics of professional organizations and state authorities can serve a defensive and empowering function for engineers by providing a basis for preserving their legal rights and by reducing their risk of personal liability based on misconduct. Engineers should understand thoroughly the ethical obligations established by these codes and should identify the provisions that they can apply in their daily practice to help establish and document their personal defenses against potential future claims of misconduct.