Chapter 1 establishes the context of our project and defends its theoretical and practical importance. Section 1.2 outlines the basic conceptual framework employed in the book, including the distinction between two concepts of hate speech and our twin-track approach to analysing them. We also highlight some of the pay-offs that flow from this conceptual framework. Section 1.3 explains what we mean by ‘grey areas of hate speech’, and identifies three underlying reasons why certain phenomena might end up falling into these areas: moral, semantic, and conceptual. We also try to motivate the significance and value of working to clear up the grey areas. Finally, Section 1.4 introduces, and attempts to respond to, the sceptical challenge which says that, because the term ‘hate speech’ is linked to conceptual ambiguities, misleading connotations, an explosion of applications, and politicisation, it would be better to dispense with both the term and its concepts. We critically examine five main ways of responding to this sceptical challenge: rehabilitation, downsizing, abandonment, replacement, and enhanced understanding. We defend the final response as the most promising, and pursuing it is the overarching goal of the book.
Chapter 2 identifies prototypical examples of hate speech and seeks to explain what makes them such. Section 2.2 lists the original examples of hate speech cited in Mari Matsuda’s seminal article on the legal concept. We then explain how, even though the ordinary and legal concepts of hate speech share paradigmatic examples, the ordinary concept now has its own extended body of exemplars. Section 2.3 attempts to plot the complex pattern of overlapping and criss-crossing similarities among these exemplars. Section 2.4 looks in more depth at one of the paradigmatic examples of hate speech, namely racial slurs such as ‘nigger’. We highlight similarities it shares with other prototypical examples of hate speech. Finally, Section 2.5 defends a particular account of what it means for a new example to have enough similarities with exemplars to count as hate speech. If there are enough similarities across at least four out of five of the distinguishing qualities of target, style, message, act, and effect, then this conceptually justifies applying the phrase ‘x is also hate speech’ to the new example. We dub this the global resemblance test.
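To make the threshold structure of the test concrete, here is a minimal sketch of our own (not drawn from the book): it simply counts matches across the five distinguishing qualities and applies the ‘at least four out of five’ rule. The function name, data structure, and example scores are hypothetical.

```python
# Illustrative sketch of the "global resemblance test" as a threshold rule.
# The five quality names come from the text; everything else is hypothetical.

QUALITIES = ("target", "style", "message", "act", "effect")
THRESHOLD = 4  # "at least four out of five" distinguishing qualities

def global_resemblance(similarities):
    """Return True if a new example resembles the exemplars of hate speech
    in at least THRESHOLD of the five distinguishing qualities."""
    matches = sum(1 for q in QUALITIES if similarities.get(q, False))
    return matches >= THRESHOLD

# Hypothetical example: a candidate judged similar to the exemplars in
# target, message, act, and effect, but not in style.
candidate = {"target": True, "style": False, "message": True,
             "act": True, "effect": True}
print(global_resemblance(candidate))  # True -> counts as hate speech
```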
Chapter 4 defends classifying a further five grey area examples as hate speech in the ordinary sense of the term under the global resemblance test. We shall also critically examine Facebook’s community standard on hate speech in relation to its handling of these kinds of attacks, and make specific recommendations to address relevant weaknesses. Section 4.2 looks at what we call identity attacks. Section 4.3 investigates existential denials, namely statements denying the very existence of people identified by a protected characteristic. Section 4.4 scrutinises identity denials, by which we mean statements denying that certain people are who they take themselves to be, based on protected characteristics. Section 4.5 examines identity miscategorisations, which go one step further and attribute identities to people that do not match the identities they take themselves to possess, based on protected characteristics. Finally, Section 4.6 assesses identity appropriations, wherein people adopt elements of the identities of other people, based on protected characteristics, but without claiming to possess the relevant identities.
Chapter 5 seeks to orient the ordinary and legal concepts of hate speech relative to each other. Section 5.2 uncovers various ways in which the ordinary and legal concepts of hate speech come together, including in terms of the kinds of speech they both count as hate speech. In Section 5.3, however, we turn to consider the potential sources of divergence between the ordinary and legal concepts of hate speech, including the differing social functions or purposes served by the two concepts. Section 5.4 addresses the nature of the relationship and interaction between the ordinary and legal concepts of hate speech. Finally, in Section 5.5 we try to show why theoretical disagreements about the relationship between the ordinary and legal concepts of hate speech matter. In particular, we argue that uncovering these deeper disagreements can help to explain both the source of some academic controversies about the legitimacy of hate speech laws and the source of some wider public debates about the rights and wrongs of social media platform content policies on hate speech.
Cyberspace is essential for socializing, learning, shopping, and just about everything in modern life. Yet there is also a dark side to cyberspace: sub-national, transnational, and international actors are challenging the ability of sovereign governments to provide a secure environment for their citizens. Criminal groups hold businesses and local governments hostage through ransomware, foreign intelligence services steal intellectual property and conduct influence operations, governments attempt to rewrite Internet protocols to facilitate censorship, and militaries prepare to use cyberspace operations in wars. Security in the Cyber Age breaks down how cyberspace works, analyzes how state and non-state actors exploit vulnerabilities in cyberspace, and provides ways to improve cybersecurity. Written by a computer scientist and national security scholar-practitioner, the book offers technological, policy, and ethical ways to protect cyberspace. Its interdisciplinary approach and engaging style make the book accessible to the lay audience as well as computer science and political science students.
No serious attempt to answer the question 'What is hate speech?' would be complete without an exploration of the outer limits of the concept(s). This book critically examines both the ordinary and legal concepts of hate speech, contrasting social media platform content policies with national and international laws. It also explores a range of controversial grey area examples of hate speech. Part I focuses on the ordinary concept and looks at hybrid attacks, selective attacks, reverse attacks, righteous attacks, indirect attacks, identity attacks, existential denials, identity denials, identity miscategorisations, and identity appropriations. Part II concentrates on the legal concept. It considers how to distinguish between hate speech and hate crime, and examines the precarious position of denialism laws in national and international law. Together, the authors draw on conceptual analysis, doctrinal analysis, linguistic analysis, critical analysis, and diachronic analysis to map the new frontiers of the concepts of hate speech.
Today’s climate models trace their lineage to the global circulation models of the 1950s. The core equations are the same, but the algorithms that implement them have evolved, and scientists have taken advantage of each new generation of faster computers to improve their models. The models I’ve studied often weigh in at more than a million lines of code, contributed over many years by hundreds of scientists. And they keep evolving; every climate model is a work in progress. Even the beating heart of a climate model – its “dynamical core” – gets replaced every once in a while. In this chapter, we’ll examine one model in particular, the UK Met Office’s Unified Model, and explore its dynamical core and the design decisions that shaped it.
How certain can we be about projections of future climate change from computer models? In 1979, President Jimmy Carter asked the US National Academy of Sciences to address this question, and the quest for an answer laid the foundation for a new way of comparing and assessing computational models of climate change. My own work on climate models began with a similar question, and led me to investigate how climate scientists build and test their models. My research took me to climate modelling labs in five different countries, where I interviewed dozens of scientists. In this chapter, we will examine the motivating questions for that work, and explore the original benchmark experiment for climate models – known as Charney sensitivity – developed in response to President Carter’s question.
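As a rough illustration of what a Charney-style sensitivity estimate involves (this sketch is not from the book), a simple energy-balance calculation converts a doubling of CO2 into a radiative forcing and then into an equilibrium warming. The logarithmic forcing formula is a standard approximation (Myhre et al., 1998); the feedback parameter used here is an assumed, purely illustrative value, since real models derive it from the underlying physics.

```python
import math

# Back-of-envelope sketch of "Charney sensitivity": how much does the planet
# warm at equilibrium if atmospheric CO2 doubles?

def co2_forcing(c_new, c_old):
    """Radiative forcing in W/m^2 from a change in CO2 concentration
    (standard logarithmic approximation)."""
    return 5.35 * math.log(c_new / c_old)

lambda_ = 0.8                          # assumed feedback parameter, K per W/m^2
delta_f = co2_forcing(560.0, 280.0)    # doubling of pre-industrial CO2 (ppm)
delta_t = lambda_ * delta_f

print(f"Forcing from doubled CO2: {delta_f:.2f} W/m^2")
print(f"Implied equilibrium warming: {delta_t:.1f} K")  # roughly 3 K
```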
Climate models are often presented as tools to predict future climate change. But that’s more a reflection of the questions that politicians and the general public ask of the science than of what the science actually does. Climate scientists prefer to use their models to improve our understanding of the past and present, where more definitive answers are possible. Predicting the future is notoriously hard, and it requires careful thinking about what can be predicted and what cannot. On this question, early experiments with climate models led to one of the most profound scientific discoveries of the twentieth century – chaos theory – which gave us a new understanding of the limits of predictability of complex systems. The so-called butterfly effect of chaos theory helps explain why a computer model can predict the weather only a few days in advance, while the same model can simulate a changing climate over decades and even millennia. To find out why, read on!
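Here is a minimal sketch, not taken from the book, of the sensitivity to initial conditions behind the butterfly effect: two runs of the classic Lorenz (1963) toy system, started a hair’s breadth apart, drift apart until they bear no resemblance. The parameter values are Lorenz’s classic choices; the step size and simple Euler integration are illustrative shortcuts, not how a real forecast model works.

```python
# Two nearly identical runs of the Lorenz (1963) system diverge rapidly,
# which is why weather forecasts lose skill after a few days even when the
# model itself is sound.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step using simple Euler integration."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)            # initial state
b = (1.0, 1.0, 1.0 + 1e-8)     # same state, perturbed by one part in 10^8

for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step}: difference in x = {abs(a[0] - b[0]):.4f}")
# The tiny initial difference grows until the two runs bear no resemblance.
```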
Climate and weather are intimately connected. Weather describes what we experience day-to-day, while climate describes what we expect over the longer term. So it’s not surprising that the models used to understand weather and climate share much of the same history. While Arrhenius’s model ignored weather altogether, focusing instead on the energy balance of the planet, modern climate models grew out of early work on numerical weather forecasting, which rests on the basic equations for how winds and ocean currents move energy around under the influence of the Earth’s rotation and gravity. The equations for these circulation patterns were first worked out by Arrhenius’s colleague, Vilhelm Bjerknes, in 1904, but it wasn’t until the invention of the electronic computer that John von Neumann put them to work forecasting the weather. The approach developed by von Neumann’s group now forms the core of today’s weather forecasting models.
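For a flavour of what an energy-balance calculation in the spirit of Arrhenius looks like (this sketch is ours, not the book’s), balancing absorbed sunlight against emitted thermal radiation gives the planet’s effective temperature. The solar constant and Stefan–Boltzmann constant are standard physical values; the albedo is the usual textbook figure.

```python
# Zero-dimensional energy-balance sketch: ignore weather entirely and just
# balance incoming sunlight against outgoing thermal radiation.

SOLAR_CONSTANT = 1361.0    # incoming solar radiation at Earth, W/m^2
ALBEDO = 0.3               # fraction of sunlight reflected back to space
SIGMA = 5.670e-8           # Stefan-Boltzmann constant, W/m^2/K^4

# Absorbed sunlight, averaged over the whole sphere (hence the factor 4),
# must equal emitted thermal radiation sigma * T^4 at equilibrium.
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4
t_effective = (absorbed / SIGMA) ** 0.25

print(f"Effective temperature without a greenhouse effect: {t_effective:.1f} K")
# About 255 K (-18 C); the ~33 K gap to the observed average surface
# temperature is the greenhouse effect Arrhenius set out to quantify.
```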
Our confidence in climate science depends to some extent on our confidence that the models are valid. But a computational model can never be perfect, because no model can capture everything. So what do we mean by “valid”? In this chapter, we will examine how climate modellers test their models, and what they do when they find errors. One surprising result is that climate models appear to be less buggy than almost any other software ever produced. More importantly, climate modellers have adapted the tools of science – hypothesis testing, peer review, and scientific replication – in remarkable new ways to overcome the weaknesses in any individual model and to ensure their scientific conclusions are sound. A study of the Max Planck Institute for Meteorology, in Hamburg, Germany, will show how they do this.