Many of the significant developments of our era have resulted from advances in technology, including the design of large-scale systems; advances in medicine, manufacturing, and artificial intelligence; the role of social media in influencing behaviour and toppling governments; and the surge of online transactions that are replacing human face-to-face interactions. These advances have given rise to new kinds of ethical concerns around the uses (and misuses) of technology. This collection of essays by prominent academics and technology leaders covers important ethical questions arising in modern industry, offering guidance on how to approach these dilemmas. Chapters discuss what we can learn from the ethical lapses of #MeToo, Volkswagen, and Cambridge Analytica, and highlight the common need across all applications for sound decision-making and understanding the implications for stakeholders. Technologists and general readers with no formal ethics training and specialists exploring technological applications to the field of ethics will benefit from this overview.
Is computing just for men? Are men and women suited to different careers? This collection of global perspectives challenges these commonly held Western views, perpetuated as explanations for women's low participation in computing. By providing an insider look at how different cultures worldwide shape the experiences of women in computing, the book introduces readers to theories and evidence that support the need to turn to environmental factors, rather than innate potential, to understand what determines women's participation in this growing field. This wake-up call to examine the obstacles and catalysts within various cultures and environments will help those interested in improving the situation understand where they might look to make changes that could affect women's participation in their classrooms, companies, and administrations. Computer scientists, STEM educators, students of all disciplines, professionals in the tech industry, leaders in gender equity, anthropologists, and policy makers will all benefit from reading this book.
Barlow’s declaration of independence was a cry for the preservation of the libertarian wild west of the early internet, an ideal of a space of limitless opportunity that its denizens could shape to their liking. He makes two claims here: first, that governments have no real power over the internet, which is a fundamentally unregulable, separate space, beyond both the legal jurisdiction and the practical reach of governments. The second – a moral claim – is that the rules of online social spaces would evolve to be better – more democratic, more free – than the rules of territorially bound nation-states.
Technology companies are the sheriffs of what used to be the wild west of the internet. In the 1990s, when the internet was young, the imagery of the western frontier really seemed like a good analogy. The internet seemed to radically decentralize power: no longer could massive publishers or broadcasters control the media; anyone could be a publisher and get their message out.1 The internet seemed inherently designed to preserve the freedom of individuals. It seemed impossible to enforce laws against the apparently anonymous masses of internet users distributed around the world. The commercial internet grew out of a military design that avoided single points of failure and was resilient against both nuclear attack and interference by hostile governments.2
In August 2017, several hundred white nationalists marched on the small university town of Charlottesville, Virginia. The rally turned tragic when one of the protesters rammed his car into a crowd of counterprotesters, killing 32-year-old Heather Heyer. The Washington Post characterized the protesters as “a meticulously organized, well-coordinated and heavily armed company of white nationalists.”1
In an article in January 2018 warning of an impending “techlash,” The Economist painted a bleak picture for the CEOs of Amazon, Facebook, Google, Apple, Netflix, and Microsoft. “Things have been rough in Europe for a while,” the article pointed out, and “America is not the haven it was” for the giants of tech that dominate the internet.1 From the presidential candidates in the next election to a group of concerned state attorneys general, The Economist predicted a great deal of anti-tech sentiment was coming from regulators. The year didn’t get much better for major tech companies from there. As the investigation into Russian interference in the 2016 presidential elections unfolded, not just Facebook, but all of the major technology companies faced a sudden shift in public opinion on a wave of negative press.
So far, we have heard a lot about how private actors are trying to regulate the internet. Governments across the world have also been very active in trying to get internet companies to regulate what information their citizens can access and share online. The decentralized, resilient design of the internet makes government censorship much more difficult than in the mass media era, where it was much simpler to embed controls within the operations of a small number of major newspaper publishers and television and radio networks. Governments are adapting, though, and quickly becoming much more sophisticated in how they monitor and control the flow of information online.
To an extent that nobody else has managed, the copyright industries have been able to bake protection for their rights into the very infrastructure of the internet. The challenge of limiting illicit file sharing is similar to many of the other difficult issues – like addressing offensive content, removing defamatory posts, or limiting the flow of misinformation – in internet regulation. How do you control what users do online without directly going after individual users? Legal actions against individuals are expensive; they only really make sense in high value cases. Changing the behavior of many individuals on a large scale is much more difficult, whether it’s users sharing copyrighted music and films or people using the internet to harass others. Any effective answer has to involve technology companies and internet intermediaries in some way, because they have the power to influence large numbers of users through their design choices and policies.
Digital intermediaries govern the internet. The telecommunications companies that provide the infrastructure, the standards organizations that design the protocols, the software companies that create the tools, the content hosts that store the data, the search engines that index that data, and the social media platforms that connect us all make decisions that impact how we communicate on a broad level. They govern us, not in the way that nation-states do, but through design choices that shape what is possible, through algorithms that sort what is visible, and through policies that control what is permitted. The choices these intermediaries make reflect our preferences, but also those of advertisers, governments, lobby groups, and their own visions of right and wrong.
Because technology companies play such a large role in governing our lives, we should expect them to constitutionalize their processes for making decisions that affect our fundamental rights. By constitutionalization, I mean particularly the introduction of limits imposed by companies on their own exercise of power.2 This process of constitutionalization is the transformation of political limits that have historically only applied to governments to apply to a decentralized environment where many different types of actors can be said to play a governing role in society.3 This is the translation of the concept of the rule of law to formalize the “lawless” internal processes of powerful corporations in a way that limits and regulates how power is exercised. This translation is a shift away from purely legal conceptions of the rule of law that is essential to pursue if the core goal of the rule of law – limiting the arbitrary exercise of power – is to be achieved in the messy social systems of real life where governments are not the only bodies that regulate our lives.4
In 2009, Facebook CEO Mark Zuckerberg announced that the massive social network would become more democratic. Responding to criticism over controversial changes to its privacy policy, Zuckerberg pledged that from then on Facebook users would have direct input on the development of the site’s terms of service. These terms were “the governing document” for Facebook users across the world, Zuckerberg said; “Given its importance, we need to make sure the terms reflect the principles and values of the people using the service.”1 Facebook committed to ensuring that users would be consulted on any changes to its rules and that the company would in future defer to the popular will of its users through a new voting process.
Facebook has come under sustained criticism from human rights groups for its role in helping to spread hate speech that fueled the crisis. The platform’s policies prohibit incitement to violence and hate speech, as well as hate organizations and content that expresses support or praise for those groups or their members. These policies, however, were not well enforced during the crisis.8 The Burma Human Rights Network reported that official government Facebook pages used dehumanizing language in a campaign to “demonize” the Rohingya population, and “Facebook posts by nationalists have directed abuse towards journalists, NGO workers and Rohingya activists.”9 The military in Myanmar executed an extensive, systematic campaign involving hundreds of military personnel who used fake Facebook accounts to spread anti-Rohingya propaganda, flooding news and celebrity pages with incendiary comments and disinformation.10
In 1215, on a floodplain on the bank of the River Thames, King John of England met with a group of rebel barons to negotiate a peace treaty. The meeting at Runnymede, about halfway between the fortress of Windsor Castle and the camp of the rebels, became one of the most significant events of Western political history. After raising heavy taxes to fund an expensive and disastrous war in France, King John was deeply unpopular at home. He ruled with might and divine right; the king was above the law. He regularly used the justice system to suppress and imprison his political opponents and to extort more funds from his feudal lords. The peace charter promised an end to the arbitrary rule of the king, guaranteeing the liberties of feudal lords. The document became known as Magna Carta (the “great charter”), described by Lord Denning as “the greatest constitutional document of all times – the foundation of the freedom of the individual against the arbitrary authority of the despot.”1