At the beginning of this book, I wrote that the blockchain stands at the intersection of three great themes of modern society: technology, money and democracy. At its heart, the blockchain is a technology for democratizing money – along with many other aspects of our daily lives. Its aim is to use advances in cryptography and computing power to improve the way that our economy works and to give us all greater control over our information, our data and, ultimately, our lives. In the Age of Technology, this is what democracy is supposed to look like. Not a day goes past that we do not hear laments about the stranglehold that big technology firms like Apple, Google and Facebook have over our online identities. Giving power back to the people is an elegant solution to this problem. But decentralization also has its drawbacks. It can be chaotic. It can be confusing.
In this chapter, the author explores the philosophical roots of the blockchain. Beginning with an account of Satoshi Nakamoto's famous bitcoin white paper, the chapter then goes on to describe the history of virtual currencies, the cypherpunks, and debates in political theory over the benefits of centralization and decentralization.
In this chapter, the author discusses how blockchain works, with a particular focus on bitcoin. Beginning with an account of an infamous flaw in bitcoin, the chapter goes on to describe the cryptography that allows the blockchain to serve as a decentralized repository of immutable information.
In this chapter, William Magnuson provides an introduction to the key themes of the book, including blockchain's relationship with broader issues in society. Beginning with an account of the Mt. Gox hack, the chapter then goes on to explore how blockchain revolves around three key ideas: money, technology and democracy.
Over the last decade, cost pressures, technology, automation, globalisation, deregulation, and changing client relationships have transformed the practice of law, but legal education has been slow to respond. Deciding what learning objectives a law degree ought to prioritise, and how best to strike the balance between vocational and academic training, are questions of growing importance for students, regulators, educators, and the legal profession. This collection provides a range of perspectives on the suite of skills required by the future lawyer and the various approaches to supporting their acquisition. Contributions report on a variety of curriculum initiatives, including role-play, gamification, virtual reality, project-based learning, design thinking, data analytics, clinical legal education, apprenticeships, experiential learning and regulatory reform, and in doing so, offer a vision of what modern legal education might look like.
Our world and the people within it are increasingly interpreted and classified by automated systems. At the same time, automated classifications influence what happens in the physical world. These entanglements change what it means to interact with governance, and shift what elements of our identity are knowable and meaningful. In this cyber-physical world, or 'world state', what is the role for law? Specifically, how should law address the claim that computational systems know us better than we know ourselves? Monitoring Laws traces the history of government profiling from the invention of photography through to emerging applications of computer vision for personality and behavioral analysis. It asks what dimensions of profiling have provoked legal intervention in the past, and what is different about contemporary profiling that requires updating our legal tools. This work should be read by anyone interested in how computation is changing society and governance, and what it is about people that law should protect in a computational world.
Advances in technologies that were unimaginable a century ago have helped establish today's high standards of living. Undoubtedly, the oil and gas industry has played a pivotal role in this respect. Thanks to the advent of the petroleum industry, the use of oil and gas has created new factories and revolutionized industries such as transportation and power generation for more than a century. Liquid fuels have transformed transportation and brought communities closer together. With the invention of air transportation and personal vehicles, reliance on liquid and gaseous fuels has affected the lives of every person in the world.
In the 1990s, British writers began using “transparency” as a portmanteau word to describe that desirable state of organizational management and governance characterized by candor, openness, honesty, clarity, legal compliance, and full disclosure (Handy, 1990). At first, the word didn’t take hold on this side of the Atlantic, perhaps because it was too vague and philosophical for American tastes in managerial buzz words (which tend to run more to the precise and practical).
Given the rapid rate of technological innovation and a desire to be proactive in addressing potential ethical challenges that arise in contexts of innovation, engineers must learn to engage in value-sensitive design – design that is responsive to a broad range of values that are implicated in the research, development, and application of technologies. One widely-used tool is Life Cycle Assessment (LCA). Physical products, as with organisms, have a life cycle, starting with extraction of raw materials, and including refining, transport, manufacturing, use, and finally end-of-life treatment and disposal. LCA is a quantitative modeling framework that can estimate emissions that occur throughout a product’s life cycle, as well as any harmful effects that these emissions have on the environment and/or public health. Importantly, LCA tools allow engineers to evaluate multiple types of environmental and health impacts simultaneously and are not limited to a single endpoint or score. However, LCA is only useful to the extent that its models accurately include the full range of values implicated in the use of a technology, and to the extent that stakeholders, from designers to decisionmakers, understand and are able to communicate these values and how they are assigned. Effective LCA requires good ethical training to understand these values.
There was a time when we were all six-sigma-ing. We did so because Jack Welch had bought into the six-sigma phenomenon and he had created a phenomenally performing General Electric (GE). Then we moved along from good to great to the search for excellence to becoming great by choice to whatever superlative Jim Collins told us was the way to a company that was built to last. Then someone moved our cheese. We had no time for that because we were just one-minute managers. We smoothed earnings, incentivized employees, and created three tiers of employees – including getting rid of the bottom tier of employees, whether they deserved termination or kudos. We all wanted to be part of the Fortune 100, the Fortune Most Admired Companies, even as we were led by Fortune CEOs and CFOs of the year – many of whom ended up doing time.
Modern engineering and technology have allowed us to connect with each other and even to reach the moon. But technology has also polluted vast areas of the planet and empowered surveillance and authoritarian governments with dangerous tools. There are numerous cases where engineers and other stakeholders routinely ask what they are capable of inventing, and what they actually should invent. Nuclear weapons and biotechnology are two examples. But when analyzing the transformations arising from less controversial modern socio-technological tools – like the Internet, smartphones, and connected devices, which augment and define our work and social practices – two very distinct areas of responsibility become apparent. On the one hand, a question arises around the values and practices of the engineers who create the technologies. What values should guide their endeavors and how can society promote good conduct? On the other hand, there are questions regarding the effects of people using these technologies. While engineering and design choices can either promote or hinder commendable social behavior and appropriate use, this chapter will focus on the first question.
As technology becomes more powerful, intelligent, and autonomous, its usage also creates unintended consequences and ethical challenges for a vast array of stakeholders. The ethical implications of technology on society, for example, range from job losses (such as potential loss of truck driver jobs due to automation) to lying and deception about a product that may occur within a technology firm or on user-generated content platforms. The challenges around ethical technology design are so multifaceted that there is an essential need for each stakeholder to accept responsibility. Even policymakers who are charged with providing the appropriate regulatory framework and legislation about technologies have an obligation to learn about the pros and cons of proposed options.
There is a delicate balance associated with ethics and privacy in “enterprise continuous monitoring systems.” On the one hand, it can be critical for enterprises to continuously monitor the ethical behavior of different agents and thereby facilitate enterprise risk management, as noted in the KPMG quote. In particular, continuous monitoring systems help firms monitor related internal and external agents to make sure that the agents hired or engaged by the enterprise are behaving ethically. On the other hand, such continuous monitoring systems can pose ethical and privacy risks to those being monitored, and create risks and costs for the company doing the monitoring. For example, inappropriate information can be assembled, stored, and inferred about a range of individuals. Thus, information obtained by continuous monitoring generally should follow privacy principles requiring, among other constraints, that the data be up to date and conform to the purpose for which it was originally gathered.
When the US Army established the Institute for Creative Technologies (ICT) at the University of Southern California in 1999, the vision was to push the boundaries of immersive technologies for the purpose of enhancing training and education; not only for the military but also for the rest of society. Over the past two decades great progress has been made on the technologies that support this vision. Breakthroughs in graphics, computer vision, artificial intelligence (AI), affective computing, and mixed reality have already transformed how we interact with one another, with digital media, and with the world. Yet this is in many ways only a starting point, since the application of these technologies is just beginning to be realized. The potential for making a positive impact on individuals and society is great, but there is also the possibility of misuse. This chapter describes some of the capabilities underlying the emerging field of immersive digital media; provides a couple of examples of how they can be used in a positive way; and then discusses the inherent dangers from an ethical standpoint.
Each year hundreds of new biomedical devices and therapies are developed to attempt to solve unmet medical needs. However, many fail due to unforeseen challenges of complex ethical, regulatory, and societal issues. We propose that a number of these issues can be effectively transformed into drivers of innovation for medical solutions if ethical analysis is considered early, iteratively, and comprehensively in the research and development process.