
Part IV - Reflections and Research Notes

Published online by Cambridge University Press:  21 April 2022

Scott J. Shackelford
Affiliation:
Indiana University, Bloomington
Frederick Douzet
Affiliation:
Université Paris 8
Christopher Ankersen
Affiliation:
New York University

Cyber Peace: Charting a Path Toward a Sustainable, Stable, and Secure Cyberspace, pp. 193–242
Publisher: Cambridge University Press
Print publication year: 2022
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

10 Imagining Cyber Peace: An Interview with a Cyber Peace Pioneer

Camille François and Christopher Ankersen

Christopher Ankersen: What, to you, is cyber peace?

Camille François: For me, cyber peace is the set of norms and behaviors that we want democratic societies to observe in cyberspace, both below and above the threshold of armed conflict. It is a recognition that when we think about how to deploy cyber power, you also have to take into account what does it mean for democracy? What does it mean for human rights? It’s a positive framework that talks about how you want to behave, and what you want to preserve, as you’re thinking through deployment of cyber power.

Christopher Ankersen: I think it’s very interesting that you’ve connected cyber peace to the idea of democracy. Do you think, therefore, that it’s not possible for other kinds of countries to play a role in this? Are they always going to be the “others” in this exercise?

Camille François: When I started working on cyber peace, my focus was working on both the US and the French approaches to cyber power. I was looking through historical records of how cyber power was defined, and it was very evident that cyber power was only defined in the context of warfare and conflict. Similarly, it was very obvious that cyber warfare was defined without its companion question, which is: what is cyber peace? I thought that this was backwards; I thought that it was important for democracies, who are thinking through what cyber power is and how to deploy it, to have a positive vision of cyber peace and of how you deploy cyber power outside the realm of war, which, again, was a clear gap.

Christopher Ankersen: It’s very interesting to link it back to this idea of cyber power. Do you think then that cyber peace is a goal? What I mean is, countries are deploying cyber power in order to “do things.” Is cyber peace one of those things they’re trying to do? Or do you think it’s more like a precondition or even a collateral outcome?

Camille François: It’s a necessary question for the societies to answer. Peace is a state of affairs that is much more common than war, which is what we want. And so it is interesting and somewhat baffling to me that most of the governments whose cyber theories we work on have spent all this time trying to work through the minutiae of how you deploy cyber power in wartime, which is important, but without ever touching on what the considerations are that you go through to get there. What are the appropriate sets of norms? How do you want to deploy cyber power in peacetime? And I think that this blind spot is detrimental to peace and stability.

When I started working on this, people were perhaps confused: it sounded like a “hippie” theory. But I think the past few years have demonstrated that the major cyber incidents do happen in peacetime and that the spectrum of conflict and conflict evolution doesn’t allow these democratic societies to have a space for thinking how they deploy cyber power in peacetime. And this has to be a necessary democratic conversation.

Christopher Ankersen: Can you go into a little bit more what you mean by people’s reactions to cyber peace?

Camille François: So from a research perspective, I was looking at two bodies of conceptions on the role of the state in cyberspace. The first body of work that I was looking at was the cyber utopians. (It’s the John Perry Barlow school of thought, to be brief.) And that’s a really interesting body of work because, initially, it conceptually makes no room for state cyber power. The essence of the declaration says “you giants of flesh and steel have no room where we gather.” A conception of cyberspace that makes no room for the deployment of state cyber power. And that’s interesting. But it creates a huge gap between where we are and that initial conception. That body of work is preoccupied with cyber security. It’s also a school of thought that has thought a lot about encryption, but it kind of stops at actual cyber power. Because, again, it conceptually doesn’t make room for that.

My other point of departure was actual military cyber theory, which is almost the radical opposite of where the cyber utopians are starting from. In it, cyber power deploys itself all over cyberspace, regardless of where we are on the spectrum between peacetime and wartime.

And so, looking at these two bodies of work, one says state cyber power is nowhere. The other one says state cyber power is everywhere. And for me it was self-evident that we were lacking the sort of rational approach that says today we are in a situation where states are building cyber power, they are building sort of military theories on how to express cyber power in cyberspace, and we need to have the in-between conversation, which is: What is the desirable use of that power? What is the responsible use? What is the democratic use of cyber power in peacetime?

And that was my point of departure – being stuck in between these two bodies of work, seeing the obvious gap, the conversation that has not happened.

Christopher Ankersen: It’s quite fascinating that the utopians saw cyberspace as almost anarchic, in a libertarian sense, where everything was possible. And we see this crop up over and over again: With the advent of social media, we had the same optimism. “Oh, great! Tahrir Square, uprisings across the Arab Spring, now we will know exactly what’s going on. We won’t have to worry about things being mediated!” But it really only took one contact with reality to see that wasn’t exactly the case. Do you think, therefore, that this idea of cyber stability (as opposed to cyber peace) is a compromise, a way of trying to avoid the disappointment experienced before? Along the lines of “Well, let’s not worry about peace, but can we at least have some kind of rules of the road so that we can have some reliability?” Do you think that stability is an ingredient towards cyber peace? Or is it a completely different approach?

Camille François: So it’s a really interesting question, because one of the things I was circling around while working on cyber peace was also the question of what type of entities belong at the table when we talk about the reasonable deployment of cyber power in peacetime. When I started this body of research, I was at the Berkman Center for Internet and Society at Harvard (where I still am). I love the center: It’s really grounded in the libertarian perspective. Working with one of my colleagues, I organized a meeting between the directors of the Berkman Center and the directors of the West Point Army Cyber Institute to talk about cyber peace. It must have been like 2013 or something. And it was this fascinating moment where it was evident that both parties at the table actually shared a lot of common ground. We’re talking about the same thing, but with such radically different languages and concepts, and radically different perspectives.

And I think that is what I’m aiming for with this idea of cyber peace, which is, if you’re going to talk about stability, that’s fine, you can call it stability. But the normal parties that you would convene when you talk about rules of the road in peacetime have to be at the table for the debate to be meaningful. You have to have a consideration for the tension between cyber power and human rights in peacetime. You have to have corporations at the table. What is the role of the private sector in relationship to the deployment of cyber power in peacetime?

All these other types of conversations are now starting to progress. We finally saw the private sector say, okay, maybe we do have a role in preserving peace and stability in peacetime. And we do have some form of responsibility in the face of cyber power. But that took a very long time.

Christopher Ankersen: One of the questions I had written down was exactly that. If we look at the analogue of the world peace movement from the ’60s, it shares a lot of the same ideas with the cyber utopian side. And civil society was a big driver there: NGOs, ordinary people, churches, and community groups, and there was a dialogue of sorts between the people and the government. It was reluctant, but it worked in a way: The disarmament movement was a bottom-up affair and it forced politicians to engage. But who wasn’t involved in that conversation? Weapons manufacturers like Dow Chemical (the makers of napalm) and Raytheon. They were implicated in that conversation, but they were not really parties to it. They were like, “Well, we’ll wait and see: do we get an order next week or not? We don’t really have a role in doing anything. We’re not going to cut back; if Ronald Reagan wants to engage more on SDI, then full speed ahead. Let someone else drive the ship, and we’ll just provide what’s needed.” But this seems slightly different now that companies, corporations, and firms seem to, as you say, understand, at least implicitly, that they have more of a role.

But we don’t see as much civil society involvement. People aren’t on the streets out there looking for cyber peace. Do you think that that makes cyber peace a different kettle of fish and that we can’t necessarily draw on past practices?

Camille François: There are so many interesting questions in what you just put on the table, I’ll take at least three of them. The first one is: What is the private sector in this context? There isn’t really one private sector. And when you think about it, you know, the Raytheon example is interesting, because you have the part of the private sector that is manufacturing and selling elements of cyber power. So the sort of “hacking for hire” types. And here, the debate is one of regulation. What is the appropriate regulation for shops that develop “zero-days for hire”? And that is a conversation that really was late to the party. We’ve seen organizations like the NSO Group go back and forth on what that means for them to meaningfully respect human rights. I think they got a lot of that very wrong. At the same time, though, regulators have been slow to catch up with that.

So there’s a private sector in that way, that is part of this conversation, because it’s one of regulation. Now, there is another private sector, which sometimes intersects, but mostly doesn’t, which is the private sector on which this conflict is being deployed. And that raises a question of the role of a company like Microsoft, like Cloudflare, like Google, like Facebook. And here, what’s really interesting is I have seen them be part of this conversation without acknowledging it, and therefore, we’re missing the strategic guidance for it.

I’ll give you a very bizarre, specific example, which is one that’s really close to my heart. Ten years ago, Google launched my favorite feature anywhere on the Internet, which is the state-sponsored warning. Google decided that its threat intel team had the visibility to see when private citizens were being targeted by state-sponsored actors on their services. And Google decided that it was worth telling these users and started rolling out a little message, initially in Gmail, that told its users “Google has reasons to believe that your account is being targeted by state-sponsored actors.” I spent a lot of time working on those features, and they are now replicated across the industry. Twitter’s doing it, Facebook’s doing it, and Microsoft’s doing it. They’re all saying not exactly the same thing and they’re not all advising the same thing. But that is a hell of a recognition that in peacetime cyber power is deployed against the individual, and that there is a need to protect them and inform them.

Christopher Ankersen: That is a great feature, but I would say most people don’t know about it. Let’s be honest, out of 7 billion people, probably less than 100,000 get that message, right? Because they’re actually important enough in somebody else’s ecosystem. And there are a few experts, such as yourself, who know about it, but that’s what I mean. That’s not the same as a peace symbol on a placard that a whole range of people might be attracted to and understand enough to, say, donate money to Greenpeace or actually go out and protest. It just seems to me that, in some sense, this is not a mass movement yet. There’s a perfect example of technical capability to do it and some recognition among some people that it’s necessary and possible. But does that include the people in the United States? Will Google warn somebody if they think the NSA or the FBI or someone is doing that? So few people know about that. It’s not like, “Hey, man, like, you know, did you get your warning yet? Are you on the warning list?”

Camille François: I’ve worked with the targeted communities and the users who get the warning, and talked them through it. What do you understand about it? What did it feel like? What are your questions?

The targeted communities, they’re exactly who you would expect: Members of parliament, elected officials, journalists, activists. I remember I did a user interview with a journalist in cybersecurity who eventually got the warning, and he said, “I finally got it! It was my badge of honor. I was the last one of my friends to get it. Now I can brag at DEF CON!” So there are communities for whom this is a known entity. But then I also talked to users who were more unaware: “Oh, yeah, I see this stuff. I think it’s just routine stuff that they send to everybody, to keep people on their toes.” They fundamentally don’t understand this is because of exactly what you’re saying, which is that we don’t have a movement to explain it. What does it mean? What does it look like? What are the moments to panic and the moments to stay calm? And the advocacy piece, the civil society piece of it, has been quite slow to develop.

Christopher Ankersen: You were going to talk about a third piece of the private sector before I interrupted?

Camille François: I was going to talk about the third piece of your question about the private sector, which is civil society. Last year, I joined the board of Digital Peace Now Society; I’m super excited about what they do. Their mission is to build up advocacy. But to be honest, I think that the fact that the research has been lagging behind has also hampered the advocacy movement’s ability to develop. And I think that what’s happened with SolarWinds is a good example. If you look at the cyber conversation, what do you see? People yelling at one another because they can’t define what constitutes an attack. Which is okay, I understand. But it’s really interesting because you can see that despite years of work on cyber conflict, those important terminologies about what can be expected and what is or isn’t an appropriate response are still in flux, and they remain contentious points in the actual academic literature. I think that this is because the academic focus on cyber peace has for so long been lagging behind the focus on cyber war.

Christopher Ankersen: Do you think that part of this lag is not just on the research side, but because people perceive this to be “ones and zeros” and hacking and geeky and green screens and just weird stuff that they don’t think they understand? Whereas, let’s be honest, nobody understood nuclear weapons either, but they understood them enough to know “it goes boom, kills people: got it.” And that was enough for people to get informed and have this grassroots “we don’t want it anymore” type movement. Whereas with cyber there’s some feeling of “Well, we need it; I don’t really understand it; somebody knows better than me, the experts must have a hold on this.” And so, therefore, even the civil society groups tend to be more informed, like EFF. These groups are a subset of the “geek community” that get it and therefore have concerns.

Camille François: It is a really interesting example. And lobbyists have been working with them for a long time. That’s a conversation I’ve been having with them for ten years. EFF always says that that part (cyber peace) of the overall question isn’t in their scope. So if you look, for instance, at the EFF statement on the Tallinn Manual – it doesn’t exist. That’s not part of their scope.

So it’s interesting to see that we can have entire conversations on norms that are applied to state power both above and below the threshold of armed conflict without any meaningful consultation of civil society organizations. Even EFF, which, as you said, is super tech-savvy, isn’t around the table. As a result, Tallinn 2, which is preoccupied with conflict below the threshold of armed conflict, has a chapter on human rights that is significantly smaller than the chapter on the Law of the Sea! The way we’ve been engaging with these questions, the way we’ve been defining the scope of these questions, is backwards.

Christopher Ankersen: I wonder if that’s because it comes from this idea, as you say, that most of the movement has come from the cyber security perspective, as opposed to the cyber peace or cyber utopia side. Therefore, they see this as being about securing stuff, protecting stuff, as opposed to the liberating, kind of offbeat side: defining what we’re actually trying to do, which is to have a place where we can get stuff done.

Camille François: Exactly. What you are describing is a very tech-centric definition of cyberspace, one of tech bits and systems, which is why you care most about things like encryption. That very tech-centric definition of the space has long been a problem for our ability to address wider issues such as peace and stability. That is the problem we had in 2016, in the face of Russian foreign interference: both Silicon Valley and Washington were so preoccupied with excluding that piece from their definition of cybersecurity. Again, from a normative perspective, perhaps that is okay, but at the end of the day, concretely, it means that in Silicon Valley, you had entire cybersecurity threat intelligence teams with not a single person in charge of detecting the attack that was going to come their way. So yes, you can have whatever definitions you want from a normative perspective. But this trickles down into how peace and security are actually cared for, and how we do defensive work in a way that leaves blind spots open and is, ultimately, problematic for peace and security.

Christopher Ankersen: That is fascinating because it’s this self-defined issue. Privacy? People get that and the solution to that is, somehow, more tech. Get a password manager, get a VPN, don’t do this, don’t do that. And platforms like Facebook will have a “real-world harm threshold,” which is to say that if somebody says they’re going to murder somebody, they’ll take that as a threshold to actually do something about it. But beyond that, on things like false information that might actually sway something, perhaps too much of a free hand has been given, allowing companies to self-define, and therefore, opt out of these conversations. So it’s not just that they’re not welcome at the table, but they’re also not necessarily knocking on the door to get to the table, either. They can sit back and say “we got this little gap here fixed and we got this little gap here.” But what about all “the rest of it”? And I think what you’re saying is “all the rest of it” is cyber peace.

Camille François: Yes, it’s not just hackers and “ones and zeros” everywhere. It’s the unsexy but fundamental space where basic regulatory frameworks apply to protect peace and stability, how to define what’s acceptable, what’s not acceptable, who is in charge of defending it, and how we structure ourselves for it. What is the role of the private sector in that? What is the role of civil society? And what do we expect from our governments? Yes, it’s not very sexy; it’s not the hacker wars, but it represents the space where the vast majority of these incidents happen.

Because we’re lacking this perspective, we’re constantly getting blindsided by major events, after each of which everybody says, “Oh, how is it that we were possibly blindsided in this way?” My answer is that it’s because we have been overly focused on defining cyber war, the topic of countless doctrines and countless papers, and not focused enough on defining and organizing cyber peace.

Christopher Ankersen: A last question then: What do you think the biggest threats are to this idea of cyber peace? Where would you say the biggest barriers are to actually getting to cyber peace?

Camille François: It’s over-indexing on offensive measures. It’s the idea that every incident getting in the way of peace and stability must be addressed by offensive measures, because our state of mind is that of cyber warfare and not that of cyber peace. Once you have a hammer, every problem looks like a nail. What we need is a more positive, more defensive, broader understanding of cyber peace, across all of society. This last point is interesting because every time we confront a massive incident that was totally predictable, and yet not exactly in line with how we organize ourselves, one of the answers is, “Oh, we need a whole-of-society response.” That is true, but let’s talk about why we don’t have whole-of-society responses on things that touch cyber power.

11 Overcoming Barriers to Empirical Cyber Research

Anne E. Boustead and Scott J. Shackelford
1 Introduction

Empirical studies have the potential to both inform and transform cyber peace research. Empirical research can shed light on opaque phenomena, summarize and synthesize diverse stakeholder perspectives, and allow causal inferences about the impact of policymaking efforts. However, researchers embarking on empirical projects in the area of cyber peace generally, and cybersecurity specifically, face significant challenges – particularly related to data collection. In this chapter, we identify some of the key impediments to empirical cyber research, and suggest how researchers and other interested stakeholders can overcome these barriers. While these issues stretch across different categories of research designs, some barriers are likely to generate more concern in the contexts of certain types of research questions, as is summarized in Table 11.1. Furthermore, while these obstacles are by no means unique to empirical cyber research, they are particularly salient in this context – and we focus on mechanisms for addressing these barriers that are most likely to be useful to cyber researchers.

Table 11.1 Most salient barriers to addressing different types of empirical cyber research questions

Exploratory
Description of type: Focus is on describing and explaining phenomena; may be used to analyze the range of variation in a phenomenon.
Example cyber question: How do organizations decide whether to use an external cyber risk decision-making framework?
Most salient barrier: Empirical cyber research projects frequently require expertise from multiple domains, complicating systematic exploration of cyber phenomena.

Parameter Estimation
Description of type: Focus is on quantitatively estimating characteristics of a population in a statistically valid way; generally requires particular kinds of random sampling.
Example cyber question: How many hours of cybersecurity training do hospital employees receive every year?
Most salient barrier: Research may only be possible in a narrow range of contexts, making it difficult to systematically observe a population of interest.

Causal
Description of type: Focus is on establishing whether a cause-and-effect relationship exists between two characteristics of a phenomenon.
Example cyber question: Do policies requiring regular password changes reduce the frequency of successful cyberattacks?
Most salient barrier: Cyber decisions and outcomes are difficult to observe, making it difficult to identify and evaluate policymaking.

2 Barriers to Empirical Cyber Research
2.1 Cyber Decisions and Outcomes Are Difficult to Observe

Difficult-to-obtain data are a common and persistent problem for empirical cyber researchers. Although there are some publicly available data on cyber policies and outcomes (Federal Bureau of Investigation, 2020; Indiana Attorney General, 2020; National Conference of State Legislatures, 2020), these datasets can be fragmentary and are few and far between. Data that have become available through less traditional means – such as the leaking of information after a data breach – can provide crucial insights into important, previously unobservable phenomena, but their use in research raises novel and difficult ethical questions (Boustead & Herr, 2020). In the absence of publicly available datasets, researchers conducting empirical cyber projects must rely more heavily on data collection, increasing the time, effort, and resources necessary to conduct research.

Data collection in empirical cyber research is further complicated by the range of actors involved in cyber policy, and differences in how these actors document and disclose their cyber decision-making. Government cyber policymaking is typically memorialized in publicly released documents – including statutes and judicial opinions – which can be analyzed and used to evaluate the effects of these policies on important outcomes (Romanosky, Telang, & Acquisti, 2011). However, much cyber policymaking occurs on an organizational level through decisions made by specific companies and groups about how to manage their own cyber practices (Harknett & Stever, 2009). This decision-making frequently does not result in public documentation, and organizations may be highly reticent to disclose details of their cyber practices due to concerns about security, brand reputation, or liability.

Researchers cannot evaluate policies that they cannot observe and, perhaps more insidiously, efforts to evaluate observable government policies may be undermined by simultaneous and unobservable organizational decision-making. For example, a heavily publicized data breach event could result in observable legislation mandating employee cybersecurity training, as well as unobservable changes in corporate cyber infrastructure. If the frequency of data breaches declines after the legislation becomes effective, researchers may attribute this change to the legislation without being aware of the confounding and unobservable changes in corporate cyber infrastructure. Reluctance to provide information about cyber decision-making can also result in low survey response rates, making it difficult to accurately estimate how often organizations are adopting particular cyber practices.
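To make the confounding concern concrete, the following sketch is a hypothetical simulation with made-up breach rates and effect sizes (none drawn from real data); it shows how a naive before-and-after comparison would attribute the combined effect of a new training law and an unobserved infrastructure upgrade entirely to the law.

```python
# Hypothetical simulation of the confounding problem described above: a training
# mandate and an unobserved infrastructure upgrade take effect at the same time,
# so a naive pre/post comparison credits the whole decline in breaches to the law.
# All rates and effect sizes are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(42)

n_orgs = 1_000
baseline_rate = 0.30    # assumed pre-period breach probability per organization
law_effect = -0.05      # assumed true effect of the training legislation
upgrade_effect = -0.10  # assumed effect of the unobservable infrastructure changes

# Observe whether each organization suffers a breach before and after the law.
pre = rng.binomial(1, baseline_rate, n_orgs)
post = rng.binomial(1, baseline_rate + law_effect + upgrade_effect, n_orgs)

naive_estimate = post.mean() - pre.mean()
print(f"Naive pre/post estimate of the law's effect: {naive_estimate:+.3f}")
print(f"Assumed true effect of the law:              {law_effect:+.3f}")
# The naive estimate (about -0.15) absorbs the unobserved upgrade effect,
# overstating what the legislation alone accomplished (-0.05).
```

Under these assumed numbers, the naive estimate is roughly three times the true legislative effect, which is precisely the kind of overstatement that unobservable organizational decision-making can produce.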

2.2 Empirical Cyber Research Projects Frequently Require Expertise from Multiple Domains

Cyber systems consist of more than just technology; they also include the people and organizations involved in using and managing cyber systems. Consequently, empirical cyber research often requires data and analytic techniques from multiple domains and disciplines. For example, a project studying how the passage of data breach notification laws impacts cybersecurity behaviors and outcomes would require expertise in law, behavioral sciences, and computer science (Murciano-Goroff, 2019). The range of expertise necessary to conduct these projects generally suggests the need for an interdisciplinary research team. However, differences in the expectations and incentives placed upon researchers in different disciplines may make collaboration difficult.

2.3 Research May Only Be Possible in a Narrow Range of Contexts

While some categories of research questions can be answered with only a limited range of observations, others require either a broader scope of data collection or the use of specialized sampling techniques. This is particularly important when trying to describe a characteristic of a population; for example, when estimating the percentage of Fortune 500 companies that employ a Chief Information Security Officer, or how many hours of cybersecurity training hospital employees receive every year. In order to estimate these characteristics in a statistically valid way, researchers must be able to select individuals from the population to observe so that (1) every member of the population could potentially be studied, and (2) the researcher knows how likely it is that each member would be selected. This process – which is known as conducting a probability sample – generally requires identifying every member of the population and selecting members at random to observe (Groves et al., 2011). In the case of cyber peace research, identifying every member of the population can be particularly difficult, especially when researchers are trying to estimate characteristics of technical populations (such as malware) rather than human ones. Even when it is possible to address a research question by studying a narrower population, this choice may impact the generalizability of the research (Lee & Baskerville, 2003). As a result, both researchers and policymakers must be careful when trying to generalize the results of the study. For example, further research would be needed to determine whether the results of a survey of cybersecurity practices conducted in Indiana could be generalized to other states (Boustead & Shackelford, 2020).
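As a rough illustration of the probability-sampling logic described above, the sketch below assumes a complete, hypothetical sampling frame of hospitals (the hard part in practice) and draws a simple random sample in which every unit has a known selection probability; the training-hours figures are fabricated purely to show the estimation step.

```python
# Sketch of a probability sample under the assumption that a complete sampling
# frame can be enumerated. Every member of the (hypothetical) hospital population
# has a known, nonzero chance of selection, which is what permits statistically
# valid estimates of population characteristics such as mean training hours.
import random

random.seed(7)

# Hypothetical sampling frame: every member of the population is listed.
frame = [f"hospital_{i:03d}" for i in range(500)]

sample_size = 50
inclusion_prob = sample_size / len(frame)   # known selection probability per unit

# Simple random sample without replacement.
sample = random.sample(frame, sample_size)

# Fabricated observation step purely for illustration: annual cybersecurity
# training hours recorded for each sampled hospital.
observed_hours = {unit: random.uniform(0, 12) for unit in sample}

# With equal, known inclusion probabilities, the sample mean is an unbiased
# estimate of the population mean (the simplest case of design-weighted estimation).
estimate = sum(observed_hours.values()) / sample_size
print(f"Selection probability for each hospital: {inclusion_prob:.2f}")
print(f"Estimated mean training hours per year:  {estimate:.1f}")
```

In a real study the frame would rarely be this tidy, and unequal selection probabilities would require design weights; the point here is only that valid estimation hinges on knowing those probabilities for every unit.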

3 Overcoming Barriers

Although these barriers pose significant challenges to empirical cyber research, they are not insuperable. In the remainder of this document, we identify several practices that individual researchers, universities, and other organizations could adopt to facilitate empirical cyber research.

3.1 Incentivize Interdisciplinary Research Teams

To overcome these difficulties, exploratory cyber research projects may especially benefit from an interdisciplinary team, with expertise in technology, policy, law, and behavioral science. Fortunately, there is a long history of interdisciplinary collaboration in cyber research, including cross-disciplinary conferences, journals, academic programs, and other initiatives. In order to further encourage interdisciplinary cyber research, we would suggest that academic leaders in multiple disciplines make clear how interdisciplinary research will be accounted for during the tenure and promotion process (Benson et al., 2016). Additionally, researchers across multiple disciplines should be encouraged to engage in cross-disciplinary teaching experiences in order to educate future researchers and decision-makers to engage in interdisciplinary research, and create partnerships between disciplines to facilitate future research. An example of this approach in action is the IU Cybersecurity Clinic, which is unique in both its interdisciplinary breadth and the fact that it is open to all graduate students across campus and offers applied service-learning opportunities to assist local and state-level critical infrastructure providers.

3.2 Partnerships Are Key

Oftentimes, empirical cyber research questions may be of interest to a variety of stakeholders in the public and private sectors. A state government may be interested in information about the uptake of cybersecurity practices amongst businesses in its jurisdiction, while a trade group might be interested in perceptions of privacy protections amongst its constituents. For example, the authors of this paper have collaborated with the State of Indiana to field a survey on cybersecurity practices amongst organizations in Indiana in order to address both academic and policy questions on cybersecurity decision-making. Under these circumstances, partnering with stakeholders has the potential to facilitate and improve cyber research. Research partners can provide insights into the phenomena in which they are involved, and insider knowledge about how policies are implemented in practice can provide a critical counterpoint to academic expertise. Furthermore, stakeholders are often experts in their own decision-making, and emic explanations about their policies and practices can be irreplaceable.

Research partnerships with public or private stakeholders can take on a number of forms. Researchers can consult with stakeholders during project development in order to identify potential causal mechanisms, locate existing data sources, and preview interview questions to determine whether they are likely to elicit relevant information. Stakeholder research partners may also be willing to facilitate data collection by distributing surveys or providing introductions to potential interview subjects. Because they are likely interested in the results of research, stakeholder partners may also be helpful in disseminating the results of research projects and encouraging consideration of policy recommendations resulting from the project.

While partnerships with public or private organizations can greatly benefit empirical cyber projects, researchers must be mindful of several potential complications. Public and private organizations may have a more limited remit than the population that might be of interest to the researcher. For example, a state or local government may be able to provide data about their own jurisdiction, and an industry trade group may be able to assist in distributing a survey to their members. These constraints can generally be addressed by narrowing the research question to focus on the population for which data are available; however, a more limited study may be less generalizable, and efforts to use these studies for policy decision-making in other areas must account for differences in context. It may also be helpful to repeat research in multiple contexts in order to explore the circumstances under which the results of the study hold.

Researchers who partner with public or private entities should also be prepared to navigate potential conflicts between the goals of the research partner and the goals of academic research. Organizations may partner with researchers because they have an interest in obtaining answers to particular questions or learning more about phenomena that affect them. Researchers may consider expanding the scope of their research to ensure that questions of interest to the partner are also addressed, and seeking out partnerships where there is a natural overlap in the questions of interest. However, partners should never have control or veto power over whether the results of the research are released. In order to ensure that partnering organizations can benefit and learn from the research, researchers should consider ensuring that results are available in formats that are usable by the partner; for example, publishing reports and podcasts, as well as journal articles.

3.3 Publish Cyber-Related Data

The field of empirical cyber research as a whole would benefit tremendously from an increase in the scope of publicly available data on cyber policies, decisions, and outcomes. Publicly available data facilitate and incentivize research by lowering the costs of undertaking projects. They also create efficiencies by ensuring that data collected are available to many researchers, reduce the burden on participants, who might otherwise be asked to take part in multiple uncoordinated studies, and increase transparency in both research and policymaking (Napoli & Karaganis, 2010). Organizing the release of datasets could also serve as a mechanism for promoting high-quality cyber research if the data released are valid and reliable.

There are a number of mechanisms for ensuring the availability of empirical data on cyber phenomena. Over the short term, the publication of an annotated bibliography describing the datasets that are available, and highlighting where the collection of data in other domains has touched upon cyber-related issues, would both make those data more accessible to researchers and identify gaps in current data availability. Efforts could then be undertaken to expand current data collection projects to include information about cyber-related issues where relevant; for example, adding a question to a survey of hospital administrators to ask about their cybersecurity practices. Finally, surveys and other data collection projects focused on cyber issues could be undertaken and expanded, with priority given to efforts that can be repeated on a yearly basis in order to observe changes over time. These efforts could be facilitated through collaboration with existing public–private cyber partnerships, such as Executive Councils on Cybersecurity and organizations designed to share cyber threat information within sectors, such as information sharing and analysis centers and information sharing and analysis organizations. There is no one-size-fits-all model, but through experimentation and deeper partnerships, we may glean a more accurate picture of the cyber peace landscape.

12 Bits and “Peaces”: Solving the Jigsaw to Secure Cyberspace

Stéphane Duguin, Rebekah Lewis, Francesca Bosco, and Juliana Crema
1 Introduction

Efforts to create peace in cyberspace can, at times, be much like trying to assemble a 1,000-piece jigsaw puzzle without a picture of the finished product: full of important, related elements, but lacking an overall strategy. Much like the missing picture on the puzzle box, the absence of a mutually agreed-upon definition of “cyber peace” is itself one of the fundamental challenges to achieving it. Without a common understanding – a common vision – it is difficult to come together and work collectively toward a common goal. While agreeing on a universal definition of any truly global concept is inherently challenging, given the sheer number and diversity of perspectives involved (witness the ongoing debates surrounding “sustainability”), establishing a shared understanding of cyber peace is particularly difficult due to the complex and evolving nature of cyberspace, and the nature and meaning of peace itself. By taking a more operational perspective in this chapter and building from the work of others throughout this edited volume, our hope is to advance the discussion on cyber peace beyond this uncertainty – in essence, to transcend it.

First, we will set forth a “light-weight” operational definition of cyber peace that we believe is compatible with more theoretical formulations of the concept, while providing a guiding compass point for both strategic and tactical activities. To be impactful, we argue that any approach to cyber peace must, above all, be concerned with human well-being and, therefore, contemplate the integrated, multidimensional components of the human experience. As outlined in the first chapter of this volume, there are various interpretations of cyber peace. Some understand it solely as a concept to be theorised, whereas others consider it to be a set of practices that can be employed (Marlin-Bennett, 2022, pp. 4–6). With this work in mind, we will build upon Marlin-Bennett’s conception of cyber peace as a practice in order to better understand the human role and the related impact we can have to promote cyber peace and accountability in cyberspace. Second, we will highlight two key challenges that we believe must be overcome on the road to cyber peace. In assessing these challenges, we also seek to bring to the fore a broader geopolitical issue: the growing and fundamental redistribution of power that is not supported by a complementary redistribution of oversight and accountability. Lastly, we will argue that the principle of accountability – as a generally applicable concept and a key component of literature on institutional analysis such as the Ostrom Design Principles – provides a flexible and durable means to pursue cyber peace. Taking into account this operational understanding of cyber peace, and using current examples to illustrate how it applies, can help to further guide the path toward a sustainable cyber peace framework.

2 Defining Cyber Peace

In an effort to highlight the necessary collective approach to achieve cyber peace, we have built our operational understanding of cyber peace around previous work, but have adjusted the focus to ensure that it is human-centric. As discussed in the Preface and first chapter of this edited volume, discussions of cyberspace, operations, security, and peace have come from a variety of actors in a move toward a multistakeholder approach to cyberspace and away from the previous focus upon state-centered security models (Shackelford, 2022, p. xxv). The state as a focal point makes sense in a Westphalian world order in which national territory and governments serve as the primary mechanisms for protection of rights and preservation of stability and order. But the scope of cyberspace – as a “notional environment” defined by connected networks and devices – is rapidly expanding (Delerue, 2020, p. 29). Today, from a micro-level perspective, computers, networks, and information and communication technologies – “cyber” – are woven into almost every aspect of human life. Equally important to recognize is the macro-level perspective in order to understand how the complexities of our digital life are nestled into broader societal and geopolitical contexts. Accordingly, we assert that any efforts to achieve cyber peace should and must, as a moral imperative, be centered around and motivated by a concern for the well-being of individual human beings in order to achieve a peace beyond the mere absence of war (Diehl, 2019, pp. 2–3). Echoing Heather Roff (2016, p. 8), we believe that the individual human should be the main referent for a guiding conception of cyber peace. In keeping with this singular focus on the human being as the center point of a peaceful cyberspace, we propose that cyber peace exists when human security, dignity, and equity are ensured in digital ecosystems.

This formulation of cyber peace is intended to be highly actionable at an operational level and, we believe, is complementary to and compatible with existing related scholarship which has been previously discussed in this volume. For those seeking to actively pursue cyber peace, the definition is intended to be instructive on a number of practical levels and works to approach these issues from a human perspective from the start. In this way, we can begin to address the challenges and obstacles in achieving accountability in cyberspace. In an effort to clarify each element of cyber peace listed above, there are specific criteria and questions to consider. For example, we believe that human security exists in cyberspace when services essential to human life and related critical infrastructure are protected. Based on this definition, we can begin to think through cyber-related topics by using a human-centric lens, and by questioning which rights and freedoms have been violated, such as “… the right to life, liberty and security of person” (United Nations, 2015, p. 8). Some questions to consider include whether there has been an obstruction to essential resources and services. This line of critical thinking also begins to highlight the question of accountability: in cases of attacks against healthcare facilities, for example, who holds responsibility for the failure to protect human security?

Moreover, building on the foundation laid by human security, human dignity in cyberspace is a mutually reinforcing concept, as these associated rights rest upon the fulfillment of security in cyberspace. With this in mind, human dignity exists in cyberspace when individuals’ beliefs, cultural rights, and ability to participate in society are protected. Human dignity is unique to the individual’s experience and context-specific to their everyday realities. Rights relating to this definition include, but are not limited to, civil and political rights, along with freedom of expression and assembly, as well as cultural and indigenous rights. Furthermore, human equity exists in cyberspace when individuals are protected against discrimination, bias, prejudice, and inequality. The importance of human equity in cyberspace stems from the reality that not everyone is starting at the same position in life and that these discrepancies need to be rectified in order for cyber peace to exist. This understanding follows the first and key tenet of the Universal Declaration of Human Rights, which emphasizes that “all human beings are born free and equal in dignity and rights” (United Nations, 2015, p. 4). This definition helps to get to the root problems that need to be addressed in order to resolve inequalities and can include issues such as political or developmental barriers to equity, or social constructions that infringe upon one’s rights. This holistic and proactive approach is needed to ensure that these barriers are eliminated for people and communities everywhere. We view cyber peace as encompassing three distinct elements: human security, dignity, and equity. These key elements relate to various dimensions of the human experience, including political, economic, and social considerations, and in this way are closely linked with human rights. To be clear, these three key elements are intertwined, interdependent, and intersectional, and all are necessary to achieve cyber peace. The human rights specifically encompassed by human security, dignity, and equity build upon and reinforce each other, as is relevant for each individual’s experience.

Keeping these concepts in mind, but further building upon our understanding of cyber peace, the role of accountability becomes much more apparent. By grounding these definitions in rights and freedoms, and while maintaining a human-centric perspective, we can further question the intersection of the virtual and physical worlds, and the role that each actor plays in these ecosystems. Having a clearer understanding of the roles and responsibilities, both on and offline, will help to rectify the accountability deficit we currently face due to the rapid evolution and convergence of disruptions in technology, geopolitics, and human behavior.

One example of the operationalization of our approach toward cyber peace is the focus we have been devoting to the healthcare sector. As more people rely on health services for essential human needs, the potential harm to human security and dignity is immense. Malicious cyber operations against healthcare facilities put human lives in jeopardy and require immediate action. To this end, we supported a call on the world’s governments to collaboratively work to stop cyberattacks against healthcare facilities and related critical infrastructure entities. Then, considering the increasing gap reported between the variation and sophistication of cyberattacks and the ability of healthcare sector entities to protect themselves from such attacks, we set up Cyber 4 Healthcare. This initiative is a global matchmaking service to partner civil society organizations and healthcare providers with private sector actors to individually assist them in protecting their services in order to decrease their vulnerability to cyberattacks, while considering their local context. The personalized advice and discussions through Cyber 4 Healthcare are just one example of how cyber peace, as it encompasses human security, dignity, and equity, must truly span the globe, inside and out, while maintaining contextual relevance.

3 Key Challenges on the Road to Cyber Peace

In order to further unpack the goal of cyber peace through accountability, we must be cognizant of the challenges and obstacles in this realm. In order to illustrate this, we have identified two deeply rooted and largely false assumptions about the nature of cyberspace itself that must be debunked and counteracted in order to make meaningful progress toward cyber peace. First, we must recognize the unequal and disproportionate access and engagement with cyberspace around the world, and address this issue in the discourse around responsibilities and responsible behavior in cyberspace. Second, we must acknowledge and tackle head-on, through creative, out-of-the-box thinking, the persistent tensions and gaps in the existing ecosystem of laws, norms, and principles governing cyberspace, and the use of information and communications technologies (ICTs). By analyzing these issues through a cyber peace lens, we can begin to address them on the basis of rights and freedoms afforded to all in international treaties and declarations.

3.1 Access and Security in Cyberspace

Keeping in mind the definition of cyber peace and its corresponding elements, access and security in cyberspace remain a prominent challenge. Communities around the world are in vastly different stages of development and implementation when it comes to cyberspace infrastructure and technology, which leads to questions of human equity and the impact that this discrepancy in development has upon end-users and citizens of the world more generally. Bias and subjectivity are hard-wired problems in technology, and access to this technology is itself unevenly distributed, which deepens existing inequalities. This issue is also highlighted in a global context by the UN’s Sustainable Development Goals, particularly Goal 9, which is to “build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation” (United Nations Department of Economic and Social Affairs, 2020, p. 42). Specifically in relation to access to the Internet, the UN cites that, “in 2019, almost the entire world population (97%) lived within reach of a mobile cellular signal, and 93% lived within reach of a mobile-broadband signal” (United Nations Department of Economic and Social Affairs, 2020, p. 43). However, despite this high percentage of coverage, the UN found that “most of the offline population live in LDCs, where only 19% use the Internet …” (United Nations Department of Economic and Social Affairs, 2020, p. 43). Moreover, the 2019 Human Development Report emphasizes this point and warns that, “… while access to basic technologies is converging, there is a growing divergence in the use of advanced ones …” leading to a growing concern about a so-called New Great Divergence, following the first divergence created by the Industrial Revolution (United Nations Development Programme, 2019, pp. 200–203). With a greater emphasis placed upon the question of human equity and an active approach to facilitate the participation of all in cyberspace, concerns about the digital divide can begin to be addressed. As stated in the Sustainable Development Goals, bridging the digital divide by providing Internet access to the 3.6 billion people – nearly half of the world’s population – in the developing world who are not online “is crucial to ensure equal access to information and knowledge, as well as foster innovation and entrepreneurship” (United Nations Development Programme, n.d., para. 3). Moreover, disparate levels of exposure and access to technology mean communities have vastly different experiences upon which to form mature policy positions, significantly affecting their ability to participate meaningfully in global fora, and therefore harming their overall security as citizens and as persons.

To be clear, getting online is only one piece of the puzzle. While infrastructure is a first and crucial step to access, it is not enough simply to invest in the installation of fiber or cell towers, or even to foster an ecosystem of service providers. Once online, users must be able to engage and act without threat to their privacy, freedom of speech, and financial or physical security. Such threats, in the form of online discrimination, censorship, manipulation, and surveillance are faced by vulnerable populations around the world, but manifest differently depending on the relevant technology and context specific to them. In order to be sustainable and effective, a cyber peace framework must, therefore, acknowledge and account for the distinct ways that cyberattacks impact different populations depending on their context and unique situations, and therefore threaten their human security. The role of accountability becomes clearer in this context, because by recognizing threats to an individual’s opinion or their privacy, the behavior of states, industry, civil society, and end users becomes more apparent.

The healthcare sector in the context of the COVID-19 pandemic presents one such case, in which vulnerable communities’ ability to access and securely engage in cyberspace has been severely compromised as a result of specific circumstances and characteristics. Long before the coronavirus outbreak, the healthcare sector’s dependence on digital technology and connectivity had skyrocketed. This dependence, combined with the sensitive data and services under its purview, put the healthcare sector at high risk of cyberattacks, such as ransomware or data breaches. Following the declaration of a global pandemic and the sudden increase in demand for medical facilities and services, this community became even more vulnerable to existing security threats as it scrambled to set up field hospitals and testing centers, produce and procure equipment, and reshuffle staff and schedules. A well-publicized ransomware attack against a hospital in Düsseldorf, Germany, encrypted thirty of the hospital’s servers and forced it to turn away patients, including a woman who later died (Goodin, 2020). In cases like this, by disrupting healthcare operations, such attacks have a very real and tangible impact on the health and well-being of staff, patients, and the broader community the healthcare sector serves. This shows how questions and concerns over human security should be at the forefront of cyber peace since, at the end of the day, events such as cyberattacks against hospitals have an impact on human lives and their overall well-being.

In addition, the COVID-19 infodemic is another closely related example of the unique impact of cyberattacks on specific communities. These communities are often not defined by any geographic or territorial boundaries, but are still protected under the concepts of human security, dignity, and equity. Due to the nature of the COVID-19 outbreak:

… communities are relying on online resources to be informed, and are producing information on their own. This leads to a massive generation of online content, blending information coming from official channels (media outlets, international organization bodies, governments), private communication entities and user’s generated content.

(CyberPeace Institute, n.d., para 1)

The World Health Organization (WHO) has identified this “blending of information” as an “infodemic,” defined as “… an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it” (World Health Organization, 2020, p. 2). For example, as an increasing number of people turn to online resources to work and study from home, malicious actors are taking advantage of this influx of online activity. In one case, the WHO itself was hacked and phishing emails that mimicked the organization’s internal email system were sent out by a malicious actor (Satter et al., 2020).

These kinds of attacks not only weaken public trust in authoritative institutions like the WHO, but also force these organizations to divert staff resources away from their usual activities to respond to attacks and mitigate their effects.Footnote 2 Beyond institutions like the WHO, the infodemic greatly affects the broader community of so-called "netizens" – engaged and responsible online users – by eroding their sense of trust and security on the Internet itself. Without a sense of security online, those who are already vulnerable to attacks or influence are left more exposed, and any sense of accountability is lost. These are just two examples of how global events and changing circumstances, even those – like the COVID-19 pandemic – with no direct relationship to digital technology, can quickly create new vulnerabilities and threats to online access and security. In pursuing cyber peace, we must account for this volatility and incorporate mechanisms to protect vulnerable populations as new threats arise in a rapidly changing global landscape.

4 The Ecosystem of Laws and Norms

The current ecosystem of international law and norms surrounding cybersecurity is complex, to say the least. While these complexities present many areas of interesting debate, particularly around polycentric engagement, those in pursuit of cyber peace must work to identify and address the gaps and ambiguities that have the greatest impact on civilian life and human well-being in relation to human security, dignity, and equity. The COVID-19 pandemic again provides a powerful recent example of the impact of such gaps and ambiguities.

Cyberattacks against hospitals, such as the one in Düsseldorf discussed previously, and other facilities during the pandemic emphasize the importance of protecting essential services – especially (but not only) during times of crisis, when the civilian population is particularly dependent upon them and their security is at risk. However, both international law and norms present hurdles. In international law, the question of attribution presents a foundational issue regarding the ability to track adherence to specific responsibilities and bring claims against specific states. In addition, ongoing debate regarding the relevant thresholds for violations of obligations related to territorial sovereignty and due diligence also frustrates the ability to bring substantiated claims (Open Ended Working Group, 2020, p. 5). Voluntary, nonbinding norms that have been proposed to support or complement existing legal obligations are likewise challenged by ambiguity regarding the meaning of certain key terms, including critical civilian infrastructure; debates and comments at the Open Ended Working Group (OEWG), as well as discussions regarding a new norm prohibiting attacks against medical facilities, further underline this ambiguity (International Committee of the Red Cross, 2020).

This latter issue regarding critical infrastructure is highlighted by the norms outlined in the 2015 report of the UN Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security (UN GGE) (United Nations General Assembly, 2015). Two of these norms address the protection of "critical infrastructure," which is of particular importance to the discussion and analysis of the implications of cyberattacks against the healthcare sector, particularly as they relate to human security, dignity, and equity:

 (f) A State should not conduct or knowingly support ICT activity contrary to its obligations under international law that intentionally damages critical infrastructure or otherwise impairs the use and operation of critical infrastructure to provide services to the public.

(g) States should take appropriate measures to protect their critical infrastructure from ICT threats, taking into account General Assembly resolution 58/199 on the creation of a global culture of cybersecurity and the protection of critical information infrastructures, and other relevant resolutions ….

(United Nations General Assembly, 2015, p. 8)

Not only is the definition of critical infrastructure itself the subject of much debate, but the question of what constitutes "appropriate measures" in the context of norm (g) is also unclear. This snapshot of how the existing legal and normative framework for cyberspace applies to a specific sector in a time of acute crisis demonstrates some of the current gaps between conceptualization and on-the-ground reality. Without clarity regarding these foundational components, the effectiveness of law and norms as mechanisms for change and accountability will be limited. In the meantime, the human cost and impact of these attacks continue to rise as individuals' security, dignity, and equity are threatened while the pandemic rages on.

5 Achieving Cyber Peace

In keeping with the notion of cyber peace as a multidimensional concept, adopting a general theory of change, rather than attempting to enumerate specific measures, will maximize operational flexibility, durability, innovation and, ultimately, impact. In critiquing the World Federation of Scientists' Erice Declaration, which applies a top–down governance solution to cyber peace, Heather Roff notes that "by framing the issue this way, the Scientists discount problems associated with unjust social structures, as well as the unsatisfactory nature of the entire international legal framework" (2016, pp. 5–6). This is but one example of how prescribing a specific approach (in this case, peace through legal governance) may discount important issues that relate directly to human security, dignity, and equity. Another point of reference is the set of principles put forth by Ostrom, which show "… in many places around the world how communities devise ways to govern the commons to assure its survival for their needs and future generations" (Walljasper, Reference Walljasper2011). These principles can be used to form sustainable and equitable governance systems in communities, and they form an integral part of the polycentric governance model discussed previously in this edited volume. In essence, Ostrom's principles offer a way to assess one's responsibility to act in one's community so that future generations may also enjoy the same natural resources (Walljasper, Reference Walljasper2011).

As applied to cyberspace, we believe a general theory of accountability can provide the needed flexibility and durability to serve as a foundation for a globally applicable methodology for achieving cyber peace. Such a theory of accountability is not synonymous with attribution or so-called "naming and shaming." Rather, we recommend a very practical understanding of accountability as used in a variety of everyday settings, from the hyper-local to the international, building upon the polycentric approach discussed throughout this volume. In the maintenance of a dwelling, the training of a sports team, or the running of a coffee shop, specific actors have clear responsibilities and roles aimed at achieving a common goal, and each is held accountable for those actions through various mechanisms. Accountability requires an evolving understanding of relevant stakeholders and responsibilities. More specifically, at the CyberPeace Institute, we believe that a systematic approach to accountability involves the following key steps for each stakeholder: identification of relevant responsibilities, confirmation of commitment to these responsibilities, tracking or measurement of adherence to these responsibilities, and analysis and implementation of effective measures to ensure or increase adherence. We believe that these four steps complement existing work on the topic of cyber peace by advocating not only a bottom–up approach to governance, as outlined in the polycentric model, but also a simultaneous top–down approach, ensuring that appropriate regulation and oversight promote the accountability of all stakeholders in cyberspace.

6 Conclusion

The key challenges above expose an underlying redistribution of power as a result of changing digital ecosystems; a redistribution that is not accompanied by equally robust mechanisms for accountability that can be leveraged to protect individual human beings and their rights and freedoms, both online and offline. By defining cyber peace around human security, dignity, and equity, we can take direct aim at this systemic problem and begin to address the human impact of infringements upon these fundamental building blocks of peace.

As we move into a brave new world, we want to actively and deliberately design our future. Cyber peace is a way to articulate the desired contours of that future and provide clear compass points toward a destination that will benefit all. Recognizing again that common action requires common understanding and a common goal, we must be clear about what we are after and why. The CyberPeace Institute is committed to further operationalizing the concept of cyber peace. Such operationalization does not require consensus regarding a finite list of the specific means to achieve our end goal. With the rough contours and a working theory of accountability, we can move forward in a common pursuit of cyber peace.

13 Cyber Hygiene Can Support Cyber Peace

Megan Stifel , Kayle Giroud , and Ryan Walsh Footnote *

Among high-profile cybersecurity incidents over the past decade, several were reportedly the work of nation-state actors. The actors leveraged tactics, techniques, and procedures to take advantage of known vulnerabilities – technical and human – to undertake actions that compromised personal information, risked human health, and paralyzed the global supply chain. Left unchecked, the scale and breadth of such actions can threaten international stability. Yet, an examination of high-profile cases suggests that basic cyber hygiene is an accessible and practical approach to mitigate such incidents, enhance confidence in the use of information and communications technologies (ICTs) and, ultimately, advance cyber peace.

Ninety-one percent of cybersecurity incidents begin with a phishing email (FireEye, 2018). In a phishing attack, a malicious actor poses as someone else and sends an email to a victim in order to trick the victim into taking a particular action – often clicking a link that can give the malicious actor account credentials or access to the victim’s device. In the absence of multifactor authentication, accounts and devices compromised via phishing or other means can be leveraged for further exploitation. Actors attributed to nation-states have successfully deployed these tactics in a number of high-profile incidents, including the phishing attacks against staff of the Office of Personnel Management (OPM) in 2015, the Democratic National Committee in 2016, and various organizations in 2020.
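
The mechanics of such a lure can be made concrete with a short sketch. The following Python fragment is a minimal illustration, not a production filter; the trusted domains, the similarity threshold, and the use of the standard library's difflib module are assumptions made purely for this example. It flags a link whose domain closely resembles, but does not exactly match, a domain the organization trusts:

```python
import difflib
from urllib.parse import urlparse

# Hypothetical list of domains the organization actually uses.
LEGITIMATE_DOMAINS = {"google.com", "who.int", "opm.gov"}

def link_domain(url: str) -> str:
    """Reduce a URL to a rough registrable domain (last two labels)."""
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:])

def looks_like_phish(url: str, threshold: float = 0.8) -> bool:
    """Flag links whose domain resembles, but is not, a trusted domain."""
    domain = link_domain(url)
    if domain in LEGITIMATE_DOMAINS:
        return False  # exact match: treat as legitimate
    for trusted in LEGITIMATE_DOMAINS:
        similarity = difflib.SequenceMatcher(None, domain, trusted).ratio()
        if similarity >= threshold:
            return True  # near miss, e.g. "g00gle.com" vs. "google.com"
    return False

print(looks_like_phish("https://accounts.g00gle.com/signin"))  # True: lookalike domain
print(looks_like_phish("https://accounts.google.com/signin"))  # False: trusted domain
```

Real mail filters combine many such signals; the point of the sketch is simply that lookalike domains, a staple of the phishing campaigns described in this chapter, are detectable with modest effort.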

1 Office of Personnel Management

In 2015, the global community learned that actors attributed to China were allegedly accessing the email accounts of top US government officials. Also in 2015, information technology staff at the Office of Personnel Management (OPM) discovered that personnel files had been compromised (Fruhlinger, Reference Fruhlinger2020). Among the personnel files that were accessed were approximately 4 million SF-86 forms, which contain extremely personal information, as well as fingerprint records, gathered in background checks for people seeking US government security clearance (Fruhlinger, Reference Fruhlinger2020). After initially obtaining copies of manuals and other network architecture documents, the actors moved laterally throughout the network, which had not implemented multifactor authentication. Public reports suggest the actors explored the network for three years before they were discovered and that the incident affected more than 21.5 million individuals (Starks, Reference Starks2016).

Further exacerbating the initial breach, after OPM discovered the compromise, it offered employees a credit and identity protection plan. Almost immediately after OPM sent email notifications inviting employees to register for the credit monitoring service, phishing messages appeared (Vaughan-Nichols, Reference Vaughan-Nichols2015). Malicious actors with knowledge of the planned offering leveraged it to obtain account credentials and personal information from OPM staff. While some staff did log in and give the actors access to their personal information, others stopped before entering their data. Cybersecurity awareness training is said to have, in part, limited the impact of the credit monitoring phishing campaign (Rein, Reference Rein2015).

2 Democratic National Committee

On March 19, 2016, John Podesta, then the chair of Hillary Clinton's presidential campaign, received an email purporting to be a Google security alert. Podesta clicked on the link and entered his password into a fake Google log-on page, through which the actors collected his username and password. As a result, the actors gained access to a decade of his emails (Lipton, Reference Lipton, Sanger and Shane2016). Months later, on October 9, WikiLeaks began publishing thousands of Podesta's compromised emails. Subsequently, several cybersecurity firms attributed the attack to a Russian intelligence unit code-named "Fancy Bear," which has been active since the mid-2000s and is known, among other things, for its technique of registering domains that closely resemble those of the legitimate organizations it plans to target. Fancy Bear has also been linked publicly to intrusions into the German Bundestag in 2015, among other intrusions.

3 “Mustang Panda”

January 2020 witnessed a surge in registered domains related to the coronavirus, followed by a spike in cyber incidents. According to a Recorded Future report (Gorey, Reference Gorey2020), malicious actors used COVID-19 themes as phishing lures to deliver malware, and at least three campaigns had potential links to nation-state actors. Among them, the "Mustang Panda" campaign has alleged ties to a Chinese government-linked group. The lure used in this campaign was a file discussing COVID-19 that purported to come from the Vietnamese prime minister, Nguyen Xuan Phuc. Once the file was opened, malicious code could take over the system. Countries such as the United States, Italy, Ukraine, and Iran have also been the focus of related phishing attempts. Malicious actors posed as trusted organizations, such as the World Health Organization and the US Centers for Disease Control and Prevention, to lend credibility to their scam emails. The malicious emails often use language that creates a sense of urgency and include attachments or links said to contain additional information.

At least three cyber hygiene resources can prevent or reduce attacks like the three just mentioned. These resources include deploying Domain-based Message Authentication, Reporting, and Conformance (DMARC), using a protective Domain Name System (DNS), and enabling multifactor authentication. None of these resources alone can prevent a significant cyber incident 100 percent of the time, and they do require investment in human capital. Nonetheless, when implemented across the ecosystem they can have a significant impact. At a minimum, their use can force malicious actors to change targets, tactics, techniques, and procedures. By limiting the impact of phishing and the incidents that may follow, the ecosystem can stabilize, which can support cyber peace.

DMARC is an email authentication, policy, and reporting protocol. DMARC builds on the widely deployed SPF and DKIM protocols, adding linkage to the author ("From:") domain name, published policies for recipient handling of authentication failures, and reporting from receivers to senders, to improve and monitor protection of the domain from fraudulent email. DMARC allows the sender to indicate that its messages are protected and tells the receiver what to do when authentication fails – for example, quarantine the message to a junk folder or reject it outright. DMARC thereby helps prevent the dissemination of fraudulent email purporting to come from an organization's domain. DMARC deployment is a public sector requirement in Australia, Canada, Denmark, the Netherlands, the United Kingdom, and the United States. Moreover, beyond good policy, DMARC can prevent significant losses to the global economy. A 2018 study found that the estimated value to the 1,046 surveyed organizations that deployed DMARC at a policy level of "reject" or "quarantine" approached $19 million (USD) (Shostack, Jacobs, & Baker, Reference Shostack, Jacobs and Baker2018).
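
In practice, a domain publishes its DMARC policy as a DNS TXT record at _dmarc.<domain>. The short Python sketch below is illustrative only; it assumes the third-party dnspython package is installed and uses a placeholder domain. It retrieves such a record and parses its tags so an administrator can confirm which policy, if any, is in force:

```python
# Minimal sketch of checking a domain's published DMARC policy.
# Assumes the third-party "dnspython" package: pip install dnspython
import dns.resolver

def dmarc_policy(domain: str) -> dict:
    """Fetch the DMARC TXT record for `domain` and return its tags as a dict."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return {}  # no DMARC record published
    record = "".join(
        part.decode() for rdata in answers for part in rdata.strings
    )
    # A DMARC record looks like: "v=DMARC1; p=reject; rua=mailto:reports@example.com"
    tags = {}
    for item in record.split(";"):
        if "=" in item:
            key, _, value = item.strip().partition("=")
            tags[key] = value
    return tags

policy = dmarc_policy("example.com")  # placeholder domain
print(policy.get("p", "no policy published"))  # e.g. "none", "quarantine", or "reject"
```

A "p=none" record only reports on failures, whereas "quarantine" and "reject" instruct receivers to act on them, which is why the study cited above measured value at those stricter policy levels.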

The use of multifactor authentication (MFA) provides an additional effective, low-cost barrier to phishing attacks. A recent survey found that 74 percent of breaches were the result of abuse of privileged credentials (Columbus, Reference Columbus2019). Phishing attacks are one technique used to obtain passwords for use in future exploitation. MFA involves the use of a password plus an additional source of validation, such as a one-time token, to verify a user before granting access to an account. Where enabled, MFA can prevent a malicious actor from using a compromised password to access an account or, in the case of OPM, moving practically uninhibited throughout a vast organizational network.
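
One common second factor is the time-based one-time password (TOTP) generated by authenticator apps. The sketch below, which uses only the Python standard library and a made-up shared secret, shows the essence of the RFC 6238 calculation a server performs when verifying such a code:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) for the current time."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    message = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up example secret; a real secret is provisioned per user at enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code depends on a secret the phisher does not hold and expires after a short interval, a stolen password alone is no longer enough to log in.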

Additionally, configuring a protective DNS on home and organizational routers can help protect Internet-connected devices against malicious activity. A protective DNS prevents access to known malicious domains by not resolving the DNS query. In doing so, the protective DNS prevents access to a range of threats including malware, ransomware, phishing attacks, viruses, malicious sites, and spyware. Furthermore, using a protective DNS can provide organizations with metrics about the health of their networks and can inform organizational, and even national-level, incident response functions in the event of a successful attack. One such service, Quad9, protects users from accessing known malicious websites by leveraging threat intelligence from multiple industry sources and blocks an average of over 15 million threats per day for users in over 88 countries. A 2019 study found that the use of DNS firewalls can prevent more than 33 percent of cybersecurity data breaches from occurring (Shostack, Jacobs, & Baker, Reference Shostack, Jacobs and Baker2019). The UK Cabinet Office has mandated the use of protective DNS by the public sector. The US Cybersecurity and Infrastructure Security Agency (Nyczepir, Reference Nyczepir2020) and the National Security Agency are also piloting similar services for their communities of interest (Baksh, Reference Baksh2020).
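
The effect of a protective resolver can be observed by querying it directly. The sketch below again assumes the dnspython package and uses a placeholder host name; it points the resolver at Quad9's public address, 9.9.9.9, where a domain on the service's threat feeds will typically fail to resolve:

```python
# Minimal sketch: resolve a host name through the Quad9 protective resolver.
# Assumes the third-party "dnspython" package: pip install dnspython
import dns.resolver

def resolves_via_quad9(hostname: str) -> bool:
    """Return True if Quad9 answers with an address, False if the name is
    blocked or does not exist."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["9.9.9.9"]  # Quad9 protective DNS
    try:
        resolver.resolve(hostname, "A")
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return False

# A benign placeholder domain resolves normally; a domain on Quad9's blocklist
# is typically answered with NXDOMAIN and would return False here.
print(resolves_via_quad9("example.com"))
```

In an organizational deployment the same effect is achieved simply by configuring 9.9.9.9 as the recursive resolver on routers or endpoints, with no per-query code required.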

More recently, actors attributed to nation-states have also capitalized on organizations' failure to patch software and back up data, causing unprecedented losses to the global economy. The WannaCry and NotPetya cyberattacks are examples of these incidents. In light of these tactics, two additional best practices can further limit the ability of malicious actors, acting on their own behalf or on behalf of nation-states, to use ICTs to destabilize the international order.

4 WannaCry

In 2017, actors reportedly affiliated with the government of North Korea used ransomware to cripple computer systems around the world (Latto, Reference Latto2020). The attack was an example of crypto-ransomware, a type of malicious software used by cybercriminals and other actors to extort money. Ransomware accomplishes this either by encrypting valuable files, rendering them unreadable, or by locking the computer, rendering it unusable. Like other types of crypto-ransomware, this attack, dubbed WannaCry, took data hostage, promising to return it upon payment of the ransom.

WannaCry began in May 2017 and spread through computers operating Microsoft Windows (Latto, Reference Latto2020). Users' files were held hostage, and the actors demanded a Bitcoin ransom for their return. The actors responsible for the attack took advantage of a previously disclosed vulnerability for which a patch was available. Unfortunately, many individuals and organizations had not regularly updated their operating systems and so were left exposed to the attack. The WannaCry ransomware attack impacted approximately 230,000 computers across 150 countries in just one day – many of them belonging to government agencies and hospitals, including National Health Service (NHS) hospitals and surgeries across the United Kingdom (Latto, Reference Latto2020). The attack affected a third of NHS hospitals, with estimated costs of £92 million after 19,000 appointments were canceled as a result of the attack (Field, Reference Field2018). Globally, losses due to WannaCry have topped $8 billion USD (Lemos, Reference Lemos2020).

5 NotPetya

The 2017 NotPetya attack offers another example of the importance of maintaining up-to-date software. In NotPetya, actors attributed to Russia launched destructive malware that exploited a series of vulnerabilities common to unpatched Windows operating systems. More specifically, they combined the exploit used in WannaCry with a password-harvesting tool called Mimikatz (Greenberg, Reference Greenberg2018). By exploiting vulnerabilities in applications in wide use by the private and public sectors, the NotPetya attack quickly spread from targeted Ukrainian banks, payment systems, and federal agencies to power plants, hospitals, and other systems worldwide. Global companies, including Maersk, Merck, and Mondelez, found their systems impacted, with total losses approaching $10 billion USD (Greenberg, Reference Greenberg2018). To date, NotPetya is the costliest cyberattack ever to occur. Yet, had the affected computers been patched, NotPetya likely would have had far less of an impact, because it would have had far fewer unpatched systems to use as footholds into the rest of the network.

Most recently, in September 2020, a woman in Germany reportedly died after the hospital nearest to her fell victim to a ransomware attack, leading to a delay in her care. This incident is the first death publicly attributed to a ransomware attack. Unfortunately, a 2020 study found that 80 percent of observed ransomware attacks in the first half of 2020 used vulnerabilities reported and registered in 2017 and earlier – and more than 20 percent of the attacks used vulnerabilities that were at least seven years old (CheckPoint, 2020). Thus, without a significant shift by key stakeholders within the ecosystem, particularly governments and entities that develop and maintain connected systems, it will likely not be the last.

These ransomware incidents highlight the importance of enabling automatic software updates where appropriate for the operating environment, and otherwise establishing policies for the prioritization and installation of updates. In addition to ensuring software is up to date, appropriately maintained file backups can also mitigate the risk of ransomware. Ransomware targets that maintain clean and timely backups are often able to avoid significant impact from an attack and continue operations without major delays.
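
A small amount of automation goes a long way here. The sketch below is illustrative only; the paths and the choice of SHA-256 checksums are assumptions for the example. It records a checksum manifest when a backup is taken and re-verifies it later, so that an organization learns whether its backups are intact before it ever needs to restore from them:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_manifest(backup_dir: Path, manifest: Path) -> None:
    """Write a checksum manifest covering every file in the backup directory."""
    checksums = {
        str(p.relative_to(backup_dir)): sha256_of(p)
        for p in sorted(backup_dir.rglob("*")) if p.is_file()
    }
    manifest.write_text(json.dumps(checksums, indent=2))

def verify_manifest(backup_dir: Path, manifest: Path) -> list:
    """Return the files that are missing or whose contents have changed."""
    expected = json.loads(manifest.read_text())
    problems = []
    for name, checksum in expected.items():
        path = backup_dir / name
        if not path.is_file() or sha256_of(path) != checksum:
            problems.append(name)
    return problems

# Hypothetical usage: record at backup time, verify on a regular schedule.
# record_manifest(Path("/backups/2021-03-01"), Path("/backups/2021-03-01.manifest"))
# print(verify_manifest(Path("/backups/2021-03-01"), Path("/backups/2021-03-01.manifest")))
```

Keeping the manifest, and ideally the backups themselves, offline or under separate credentials matters too, since ransomware operators increasingly seek out and encrypt any backups they can reach.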

6 Conclusion

These cases illustrate that the threat from the malicious use of ICTs is real and that known, effective, accessible, and low-cost resources exist to prevent and limit this threat. Still, reducing cybersecurity risk is a continuous process that requires the use of multiple tools together with human capital. Unfortunately, the collective failure to employ cyber hygiene has contributed to significant losses globally, including the loss of human life. With the increasing, unavoidable dependence on ICTs for everything from governance and economic development to social engagement, inaction becomes increasingly perilous, especially for governments.

Promisingly, an increasing number of national policies are beginning to require the use of cyber hygiene measures in the public sector. This trend reflects a future reality in which the use of these capabilities is no longer optional but the norm. As a result, a state failing to support their implementation may eventually become the cyber equivalent of a safe harbor. Ultimately, despite what society is often led to believe, what stands in the path of cyber peace is not technology, but political will.

14 Crowdsourcing Cyber Peace and Cybersecurity

Vineet Kumar
1 Introduction

The Internet has the potential to help people from across the globe collaborate and share information for a common cause. Every year, tens of millions more individuals and businesses join cyberspace. However, this newfound access brings with it its own set of vulnerabilities, threats, and risks. Crowdsourcing is one way to address these risks, using a systematic approach that draws on the capabilities of the Internet and its users. When vital information and valuable expertise are shared between people and organizations through crowdsourcing for cybersecurity purposes, the results can benefit all. The CyberPeace Corps is one such crowdsourcing initiative, tapping into the skills, expertise, and passions of individuals and groups from all backgrounds to establish cyber peace by collectively building resilience against cybercrime and cyber threats, while upholding the cybersecurity triad of confidentiality, integrity, and availability of digital information resources. Through the crowdsourcing model of the CyberPeace Corps, the idea of a truly global Internet that is trustworthy, secure, inclusive, and sustainable is furthered by leveraging the potential of information sharing and collaboration among a large number of people from all over the world.

2 What Is Crowdsourcing?

The term "crowdsourcing" is a blend of two words: "crowd" and "outsourcing." Simply put, crowdsourcing means obtaining information, seeking opinions, and getting work done with the help of many people who contribute via the Internet, using the various tools available, such as social media and smartphone apps (Hargrave, Reference Hargrave2019). People involved in crowdsourcing can be paid freelancers or volunteers. Crowdsourcing can take many forms; six commonly identified ones are as follows:

  1. Crowd innovation

  2. Crowd funding

  3. Crowd voting

  4. Crowd creativity

  5. Crowd collective knowledge

  6. Micro working (Hargrave, Reference Hargrave2019)

A simple example of crowdsourcing would be a traffic app that encourages drivers to report traffic jams or accidents, thereby providing real-time, updated information to other app users. This allows people to save time, take the correct route and, most importantly, stay safe during their journey. Some of the benefits of crowdsourcing are as follows:

  • A wider talent pool is available for getting the work done and can contribute to the cause.

  • People can work virtually from anywhere, allowing them the flexibility to choose the location and type of work.

  • Enterprises with different resources and interests can tap into an enormous array of skills, resources, and expertise without incurring significant overheads.

  • It also enables businesses to raise a large capital pool for special projects.

The point is that crowdsourcing involves breaking down a complicated project into small achievable tasks that a crowd of people can individually work on to achieve set objectives.

3 Crowdsourcing Cybersecurity

The crowdsourcing cyber conflict model is not a new concept. The 2007 attacks on Estonia, among the first and most notable distributed denial-of-service (DDoS) attacks in history, are still fresh in cybersecurity circles (McGuinness, Reference McGuinness2017). Malicious actors crowdsourced a series of massive attacks on Estonian infrastructure, paralyzing much of the country, including its largest banking network and the Parliament (McGuinness, Reference McGuinness2017).

What is interesting is the way Estonia responded, establishing a first-of-its-kind volunteer cyber force, the Estonian Defence League's Cyber Unit, in 2010. The unit is part of Estonia's military structure but is essentially a civil body, with members of the public and private sectors enrolling as experts who render support in times of cyber crisis. It began as an ad hoc effort during the 2007 attacks, and Estonia later institutionalized the volunteer force as a unit within the military. This shows that if crowdsourcing can be used to mount cyberattacks of such magnitude, it can also prove useful in fighting cybercrime. Among other developments, this incident paved the way for conceiving the CyberPeace Corps model.

4 What Is the CyberPeace Corps?

The CyberPeace Corps is a volunteer-driven initiative by the CyberPeace Foundation for building peace in the cyber world. It is a coalition of citizens, experts, and students who volunteer to come together to sustain cyber peace. The concept continues to evolve, but it involves a "crowd" of diverse people, comprising citizens and organizations, who converge as working groups or individual volunteers to foster cyber peace in marginalized communities, organizations, and nations around the world. Currently, over 1,200 CyberPeace Corps members spread across forty countries are working to enhance technical capacity against cybercrime and cyber threats, using various modes of communication such as social media, street theater, workshops, and webinars, among communities at the national and global levels. They also support data collection and analysis to underpin the training modules developed for workshops, help detect cyberattacks, apply machine learning to investigative analysis, and assist in content creation and dissemination, among other activities. The CyberPeace Corps works mainly across four verticals – Inclusion & Outreach, Collaboration & Connect, Policy & Advocacy, and Innovation & Outreach – related to all aspects of cyber peace and cybersecurity. The CyberPeace Corps focuses on the collaboration of people, including those from nontechnical backgrounds, to build a resilient, safe, and sustainable cyberspace. The CyberPeace Foundation conducts training programs for all volunteers who join the CyberPeace Corps and has them sign a ten-part oath committing them to the values of peace in cyberspace and to striving to uphold them. Imagine what malicious actors would do if they managed to access confidential and sensitive information about a country's defense organization. The consequences of such a scenario could be devastating, not only for the organization but also for ordinary people. By contrast, suppose a law-abiding citizen obtains information about a planned terrorist attack in the country: he or she would report it to law enforcement agencies.

Here lies the benefit of crowdsourcing. Crowdsourcing helps create a faceless army of volunteers who can play a stellar role in protecting society from harm. Seen from a cybersecurity angle, it is a massive force that helps fight cyber threats on a large scale. The CyberPeace Corps works on this concept of encouraging people to volunteer and fight cybercrime for the good of all. The CyberPeace Foundation has also been building a model for children by establishing CyberPeace Clubs in schools. Under the guidance of faculty, school administrators, and the CyberPeace Foundation team, students are trained to conduct sessions on cyber safety and to maintain a continuous dialogue with other students about a resilient and safe cyberspace.

5 How Can Crowdsourcing Work in Cybersecurity?

Authorities, police, and agencies have long used crowdsourcing to combat crime in the physical world. The Boston Marathon bombing investigation (Ackerman, Reference Ackerman2013) and the Broward County Sheriff case (Contributor BP, 2011) are two prime examples. In both, members of the public shared media online in large numbers to help the authorities with their investigations. A similar methodology can be used to support cybersecurity. There are three prominent fronts on which crowdsourcing can help cybersecurity, as discussed below:

  • Collaboration: People from various locations and different walks of life can put their heads together for a common cause. They can share ideas, work together as a team to create something useful to thwart cybersecurity issues, and become productively involved in a cybersecurity project. Synack is an example of one such group of experts, ready to respond to organizations' calls and help handle cybersecurity crises and combat threats.

  • Sharing Intelligence: Many experts in the cyber world can contribute vital information to protect numerous people and organizations from serious cyber threats. ThreatExchange, started by Facebook, is an example of an intelligence-sharing platform.

  • Bounty Programs: These programs engage experts who are not permanent employees of large organizations such as Microsoft, Google, or Apple, but who still help them. The experts offer their skills in probing an organization's software products to find serious flaws or bugs that could prove costly if discovered first by malicious actors. They research independently to identify zero-day vulnerabilities and share valuable information with the organization, thereby helping it develop a patch and saving it millions of dollars in the process.

6 Crowdsourcing Research

Crowdsourcing encourages people from different walks of life to contribute their ideas, based on their usage and expertise, ensuring that a diversity of thought processes is accounted for. The CyberPeace Corps has already been employed in various cyber research projects using this method. Involving the public in research has the following advantages:

  • Researchers get a chance to understand the public’s perspective by involving them.

  • Similarly, the public can improve their scientific understanding by participating in the research programs.

  • CyberPeace Corps platforms provide the right opportunities for people who want to discover projects or researchers who wish to create new projects.

  • It allows various people from diverse fields to assist in cybersecurity tasks for the Corps without enrolling as permanent workers.

As the CyberPeace Corps is a volunteer-centric organization, it relies on community support and participation. There is tremendous potential in the community yet to be tapped. Because there is a severe shortage of cybersecurity professionals capable of handling cyberattacks and related threats, it becomes all the more relevant to involve the community in projects. An initiative like the CyberPeace Corps therefore helps people contribute their skills and intelligence for the benefit of all. These initiatives have delivered successful results thanks to collaborative problem solving. Ideas can spring from virtually anywhere, including from people not connected to the organization in any way.

7 Serving Society

The CyberPeace Corps can help business entities by educating an organization's employees to maintain cybersecurity hygiene. Corps volunteers move from one organization to another to drive home the fact that it pays to maintain cybersecurity hygiene. This can range from educating people about maintaining strong passwords, and discouraging them from sharing them with others, to helping people deal with the aftermath of a cyberattack.

The CyberPeace Corps believes in people maintaining self-discipline when using the Internet for personal or business purposes. Simply sharing an email ID and password can be enough to clean out a bank account in no time.

8 Creating an Impact

The crowdsourcing journey comprises four phases through which a potential volunteer has to pass:

  • Awareness Phase: The awareness phase is the first phase where an individual discovers the initiative through some channels, such as the Internet, professional networks, or a Corps volunteer.

  • Consideration Phase: The second phase is the consideration phase, where the volunteer learns more about the initiative and decides whether to be involved in it.

  • Participation Phase: In this phase, volunteers participate in the research and contribute willingly.

  • Closing Phase: The final phase is the closing phase, where the engagement concludes and any compensation or recognition occurs.

The CyberPeace Corps assesses the volunteer’s journey through this proven model.

9 Call for Action

Today, even government authorities and security agencies have put crowdsourcing into practice. When crowdsourcing is encouraged in cybersecurity, talented individuals can showcase their capabilities and help organizations thwart cyberattacks.

  • With the industry facing a dearth of cybersecurity professionals, the CyberPeace Corps provides an ideal opportunity for people with the expertise and interest to volunteer to fight cybercrime.

  • Interested individuals are welcome to volunteer toward offering their services to the CyberPeace Corps.

  • The exciting aspect is that it is not mandatory to have a technical background to volunteer.

  • Even people with nontechnical experience can play a critical role in creating cybersecurity awareness among the public.

  • Every individual is capable of contributing to safeguarding cyber peace in some way.

As they say, "Security is everyone's responsibility," so here is your chance to connect with the CyberPeace Corps, help others stay safe online, and contribute to the greater cause of peace in cyberspace.

15 Advanced Persistent Threat Groups Increasingly Destabilize Peace and Security in Cyberspace

Anne-Marie Buzatu Footnote *
1 Introduction

Their attacks do not typically produce gruesome pictures or grab international headlines in the same way as their physical, armed counterparts, but they may be just as deadly or even more dangerous: Advanced Persistent Threat Groups (APTs) are on the rise and are changing the very character of modern international conflict, with consequences yet to be fully appreciated. These groups operate in obscurity behind screens, where they can largely remain anonymous under fanciful names such as "Red Apollo" or "Cosy Bear," and little is known for certain about who staffs them or where their allegiances ultimately lie.Footnote 1 Rather, analysts try to piece together this information by identifying patterns in cyberattacks, seeing whether the targets of an attack align with the interests of certain states, and finding the occasional digital traces that the groups or their members may have left online. What these groups lack in physical bravado they more than make up for in real-world damaging consequences. The COVID-19 pandemic has only accelerated global dependence on digital technologies, providing APTs with more opportunities to wreak international havoc and destabilization.

While not officially acknowledged by states, APTs are allegedly run or sponsored by states to gain unauthorized access to the computer systems of governments or companies, where they remain undetected for extended periods and gather information, including sensitive information about defense capabilities and critical infrastructure control systems. Recent times have also seen the emergence of non-state-sponsored APT groups carrying out large-scale intrusions into government or commercial network systems, sometimes for criminal or financial gain.Footnote 2, Footnote 3

The "SolarWinds" attack, discovered in December 2020, vividly illustrates both the damage and the uncertainty these kinds of attacks can cause. Analysts said the attack resembled past operations thought to have been carried out by the Russia-based APT "Cosy Bear," also known as "APT29,"Footnote 4 but Russia has denied any involvement,Footnote 5 and the identity of the attack's author is not known for certain, although it seems fairly clear that another government was behind it.Footnote 6 Beyond the inability to reliably attribute the attack, the extent of the attack itself, as well as the potential security risks it engendered, are also uncertain. What is known is that US agencies important to the nation's security were compromised, including the US Departments of Homeland Security, State, Commerce, and Treasury, the National Institutes of Health, as well as nuclear programs run by the US Department of Energy and the National Nuclear Security Administration.Footnote 7 The lack of clarity regarding the information that was stolen, and whether critical systems were compromised, has generated a great deal of anxiety about the security of US defense systems, with some experts calling for the United States to strike back at Russia.Footnote 8 Clearly, APT attacks are turning the traditional international security and peace paradigm on its head, with commensurate risks to our collective safety and security.

2 Kinds of Attacks

APT attacks generally fall into the following categories:

2.1 Espionage

APTs infiltrate computer systems and networks and gather information. Targets typically are governments, companies, or other organizations.

For example, an APT group that appears to be based in China has reportedly targeted Southeast Asian government machines since at least November 2018, infecting over 200 government machines and even installing backdoors so that it could easily access the machines going forward.Footnote 9 Other reports claim that three state-sponsored APTs operating from Russia and North Korea attempted to break into the computer systems of at least seven prominent companies involved in COVID-19 vaccine research and treatment in order to steal sensitive information.Footnote 10

2.2 Critical Infrastructure Attacks

The industrial control systems (ICS) that operate and control critical infrastructure systems have been targeted by APTs, which use sophisticated attacks to deactivate, take over control, or destroy them. These include the ICSs of energy grids, water supply systems, electricity production plants, nuclear installations, and banking and telecommunications systems.

One of the earliest of these kinds of attacks that garnered international attention was the 2010 Stuxnet worm that targeted supervisory control and data acquisition (SCADA) systems, which operate the systems that control large-scale machinery and industrial processes, including energy grids and nuclear installations. In this instance, the Stuxnet worm reportedly ruined nearly one-fifth of Iran’s nuclear centrifuges by infecting over 200,000 computers and physically damaging approximately 1,000 machines.Footnote 11 While no state officially took responsibility for the attacks, analysts largely believe that groups associated with the United States and Israel were behind them.Footnote 12

Since the Stuxnet attack, attacks on important and critical infrastructure control systems have continued and increased. For example, in April 2020, the command and control systems of Israeli water supply systems were reportedly breached by an APT associated with Iran; however, the Israeli government did not disclose further information regarding the impact of the attack.Footnote 13 Additionally, in February 2021, a water treatment plant in the US state of Florida was attacked by a hacker who managed to break into the water treatment control system and increase the level of lye in the water from 100 parts per million to 11,100 parts per million, which would have made anyone who drank the water very sick. Fortunately, a water plant operator happened to be looking at the ICS screen, witnessed the changes in real time, and corrected them before the water was contaminated. However, the computer security systems of the plant's ICS were not robust enough in themselves to prevent the damage, meaning that if the operator had not happened to be looking at those levels at that particular moment, the water would have been contaminated.Footnote 14
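
Part of the lesson of the Florida incident is how little software it would take to catch such a change automatically rather than by luck. The fragment below is a deliberately simplified, hypothetical sketch: the parameter names, safe ranges, and alerting function are invented for illustration and bear no relation to any real plant's control logic. It shows the kind of bounds check that would reject and flag a setpoint far outside its normal operating range:

```python
# Hypothetical safe operating ranges, in parts per million (illustrative values only).
SAFE_RANGES_PPM = {
    "sodium_hydroxide": (50, 150),   # lye dosing setpoint
    "chlorine": (0, 4),
}

def alert_operator(message: str) -> None:
    # Placeholder: a real system would page staff and log to a monitored channel.
    print("ALERT:", message)

def apply_setpoint(parameter: str, requested_ppm: float, current_ppm: float) -> float:
    """Accept a new setpoint only if it falls inside the configured safe range;
    otherwise keep the current value and raise an alert for an operator."""
    low, high = SAFE_RANGES_PPM[parameter]
    if not (low <= requested_ppm <= high):
        alert_operator(
            f"Rejected {parameter} setpoint of {requested_ppm} ppm "
            f"(allowed range {low}-{high} ppm); keeping {current_ppm} ppm."
        )
        return current_ppm
    return requested_ppm

# The change reported in Florida, 100 ppm to 11,100 ppm, would be rejected here.
print(apply_setpoint("sodium_hydroxide", 11_100, current_ppm=100))
```

Such checks do not substitute for securing remote access in the first place, but they illustrate why defense in depth, rather than reliance on a watchful operator, is central to protecting industrial control systems.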

2.3 Interference in the Electoral Processes

APTs are also putting their skills to work at interfering with and undermining national electoral processes.

For example, the US government Cybersecurity and Infrastructure Security Agency (CISA) issued an alert in October 2020, saying that Iranian APTs were creating fictitious websites as well as posting “fake news” to legitimate media platforms in order to undermine public confidence in election systems, as well as to further divide public opinion.Footnote 15 US intelligence agencies also reported that Russia interfered in the 2016 US elections, under the direct orders of Russian President Vladimir Putin who, they say, used “troll farms” to create thousands of fake social media accounts to influence popular opinion.Footnote 16

2.4 Information System Attacks

Another kind of attack aims to bring down the networks and computer systems so that they are no longer available online.

For example, an APT allegedly associated with North Korea, known as the "Lazarus Group," reportedly brought down Sony Pictures' computer systems in retaliation for the release of the film The Interview, a controversial comedy portraying US journalists recruited by the US government to assassinate North Korean leader Kim Jong-Un.Footnote 17

2.5 Ransomware Attacks

Recently, there has been a sharp increase in ransomware attacks, or attacks in which an organization’s data has been stolen or computer systems rendered unavailable, with attackers demanding they be paid a ransom to return data or restore access. In 2020, these kinds of attacks increased by an estimated 319 percent, with perpetrators bringing in at least $350 million (USD).Footnote 18

The above-mentioned "Lazarus Group" APT has also been blamed for one of the most significant ransomware attacks, known as "WannaCry," which was released in May 2017 and infected around 200,000 computers located in 150 different countries. Of particular note, National Health Service (NHS) hospitals in England and Scotland were hit, requiring the NHS to cancel many noncritical procedures and treatments.Footnote 19

In September 2020, in what was possibly the first recorded case of its kind, a ransomware attack on a hospital in Düsseldorf, Germany, resulted in the death of a patient. Having fallen victim to the attack, the hospital had to reroute the patient's ambulance to another hospital, and the patient died during the transfer. Of note, the attacked hospital was not the intended victim, as the ransom note was addressed to a nearby university. The attackers stopped the attack once authorities informed them that it had shut down a hospital; however, this came too late to save the victim.Footnote 20

In addition to their serious, even life-or-death consequences for health, both of these attacks illustrate another difficulty with cyberattacks: their often indiscriminate nature, infecting systems and devices across the Internet wherever they find vulnerabilities, and not just the intended target(s).

3 “Cyber Hybrid Warfare”: An Emerging Threat to Cyber Peace

The activities and cyberattacks carried out by APTs are changing the character of international conflict today. In the words of Australia’s Defense Minister, Linda Reynolds:

[w]hat is clear now, is that the character of warfare is changing, with more options for pursuing strategic ends just below the threshold of traditional armed conflict – what some experts like to call “grey-zone tactics” or “hybrid warfare.”Footnote 21

More worryingly, these "grey-zone tactics" by and large slip through the cracks of our international legal frameworks, most of which were constructed around underlying assumptions that attacks would be physical or kinetic, and that states had effective territorial control to uphold their international obligations and protect those within their jurisdictions. By contrast, the "cyber hybrid warfare" paradigm thrives in an environment where "cyberattacks" often do not fulfill the requirements of international conventions, in which states do not acknowledge their responsibility for such acts, and in which private actors can act on behalf of – or in the place of – international legal personalities.

In this new paradigm, all stakeholders can be both the authors and the victims of attacks – oftentimes state and nonstate actors are injured by the same attacks online – and effective defense against such attacks may be more likely to come from the private sector than from public security forces. If we are to adapt the international legal order to one that supports cyber peace, this calls for new thinking and innovative approaches. While it is beyond the scope of this essay to conceive such a paradigm in depth, as this has already been done throughout this volume, from our perspective the following elements should be considered or reconceived in constructing this new framework.

3.1 The Adaptation of International Legal Obligations and Norms to the Cyber Frontier

While it is generally accepted that “international law applies online as well as offline,” what is not always clear is what this means in practice within the interconnected, transborder environment of cyberspace. Furthermore, as states have implemented their international law obligations to reflect national cultural contexts and values, how do we reconcile these often different and sometimes incompatible state-specific standards within an interconnected, largely borderless cyberspace?

3.2 The Notion of “Effective Control”

Furthermore, what actor has the ability to effectively control online activities? Is it the company who owns an undersea cable that forms part of the Internet’s backbone? Is it the company that owns the computer servers and/or the state in which those same servers are located? Is it the APT that has the knowledge to infiltrate and even control state and commercial computer systems? When reconsidering the notion of “effective control,” we should look carefully at which actors have the know-how/capability to effectively stop or prevent the kinds of behaviors online that undermine cyber stability and cyber peace. This issue is part and parcel of creating an effective regime to regulate state-sponsored cyber aggression.

3.3 Responsibility and Accountability v. Protection

Finally, given that actors can carry out activities anonymously, that private actors sometimes wield more cyber power than states, and that states do not publicly acknowledge their involvement in many attacks, how do we craft a system in which there is effective responsibility and accountability for online attacks? Perhaps this question should be turned around and considered from a human security point of view, as the CyberPeace Institute suggests,Footnote 22 asking: How can we best protect the human rights and safety of individual users/netizens online?

Recent initiatives offer some promising avenues to pursue. For example, the Office of the High Commissioner on Human Rights (OHCHR) B-Tech project, which aims to apply the UN Guiding Principles on Business and Human Rights to the ICT sector, is looking at how to create a “smart mix of measures” by exploring regulatory and policy responses to human rights challenges linked to new technologies.Footnote 23 UN Secretary-General António Guterres launched a High-Level Panel on Digital Cooperation, bringing together actors from public and private sectors to advance discussions on improving cooperation in cyber governance, which resulted in the “UN Secretary-General’s Roadmap on Digital Cooperation.”Footnote 24 Both France’s Paris Call for Trust and Security in CyberspaceFootnote 25 and New Zealand’s Christchurch Call to Eliminate Terrorist and Violent Extremist Content OnlineFootnote 26 call on both governments and the ICT commercial sector to join forces in combatting malicious attacks online. Microsoft has been very active in the cyber governance space, launching a number of different initiatives, such as the Cyber Tech Accord,Footnote 27 to advance multistakeholder discussions at the international level. My organization ICT4Peace, a CSO, has called on governments to publicly commit to refrain from cyberattacking critical infrastructuresFootnote 28 – which in principle should extend to the APTs they are affiliated with – and called for the creation of a state peer review mechanism on the order of the Human Rights Council’s Universal Periodic Review to provide some oversight and accountability for states’ actions online.Footnote 29

All of these initiatives recognize the piecemeal, polycentric, multistakeholder-driven nature of cyberspace, and, further, that it will take joint efforts and concerted collaborative action to reach a goal that is in all of our best interests: a safe and peaceful cyberspace in which all stakeholders can thrive and in which state and human security go forward hand in hand.

Footnotes

10 Imagining Cyber Peace An Interview with a Cyber Peace Pioneer

11 Overcoming Barriers to Empirical Cyber Research

12 Bits and “Peaces” Solving the Jigsaw to Secure Cyberspace

* The CyberPeace Institute is an independent, nonprofit organization headquartered in Geneva that works to enhance the stability of cyberspace by decreasing the frequency, impact, and scale of destructive cyberattacks against civilians. The Institute promotes transparency about such attacks and holds malicious actors to account for the harm they cause to vulnerable communities and populations. To this end, the Institute's mission is to ensure a human-centric and evidence-led response to cyber operations.

1 For example, this definition comports with the four pillars of a positive cyber peace “…as a system that: (1) respects human rights and freedoms; (2) spreads Internet access along with cybersecurity best practices, (3) strengthens governance mechanisms by fostering multi-stakeholder collaboration, and (4) promotes stability and relatedly sustainable development” (Shackelford, Reference Shackelford2020, pp. 15–16).

2 See the following public service announcement as an example: www.who.int/about/communications/cyber-security.

13 Cyber Hygiene Can Support Cyber Peace

* About the Global Cyber Alliance.

Founded in 2015 by the District Attorney for New York, the City of London Police, and the Center for Internet Security, the Global Cyber Alliance (GCA) is a charitable organization dedicated to reducing cyber risks. The GCA accomplishes this mission by uniting global communities, scaling cybersecurity solutions, and measuring their impact. In the five years since its launch, GCA has grown to include over 150 organizations as partners, across over thirty countries, and all sectors of the economy. Partner organizations include industry, governments, academia, and other nonprofit organizations.

Examples of GCA's work include support for Domain-based Message Authentication, Reporting, and Conformance (DMARC), an email security protocol; the development of a protective domain name service (DNS); and the creation of cybersecurity toolkits for at-risk organizations and populations. In supporting DMARC, GCA developed a leader board of domains that have fully implemented the tool, conducted multiple boot camps to train administrators on the proper implementation of DMARC, and translated resource guides into eighteen languages. A 2018 study (Shostack et al., Reference Shostack, Jacobs and Baker2018) found that the estimated value to the 1,046 organizations that deployed DMARC at a policy level of "reject" or "quarantine," after using GCA's tool, is likely $19 million (USD).

GCA developed a protective DNS service called Quad9 in collaboration with IBM and Packet Clearing House. Quad9 protects users from accessing known malicious websites by leveraging threat intelligence from multiple industry leaders and blocks an average of over 15 million threats per day for users in over eighty-eight countries. A 2019 study (Shostack et al., Reference Shostack, Jacobs and Baker2019) found that the use of DNS firewalls can prevent more than 33 percent of cybersecurity data breaches from occurring.

More recently, GCA combined these projects with free resources from software application developers to develop cybersecurity toolkits for small businesses, election administrators, and journalists. The toolkits recommend resources to help these organizations and individuals implement internationally recognized cybersecurity best practices. Each toolkit includes several tools, together with brief overviews of the need for each tool and step-by-step instructions to guide users through the tools' setup. A community forum and learning management system further support users in their use of the resources. The toolkit for small businesses is available in four languages, and GCA is assessing methods to measure the toolkits' impact.

GCA works to eradicate cyber risk and improve the connected world. Its projects address the most prevalent cyber risks that individuals and businesses face by developing and deploying practical solutions that measurably improve the security of the digital ecosystem, and it offers these resources at no cost to the global community. GCA is also dedicated to increasing cyber awareness and hygiene across all layers of society through awareness-raising campaigns and civil society engagement.

14 Crowdsourcing Cyber Peace and Cybersecurity

15 Advanced Persistent Threat Groups Increasingly Destabilize Peace and Security in Cyberspace

* ICT4Peace is an independent foundation that fosters political discussion and common action to support international and human security in cyberspace. To this end, it researches, identifies, and raises awareness about emerging technology challenges, makes policy recommendations, and delivers capacity-building programs.

1 More information about the forms that APT attacks can take is available here: https://csrc.nist.gov/glossary/term/advanced_persistent_threat.

2 See Maloney, Sarah, "What is an Advanced Persistent Threat (APT)?," last accessed November 29, 2020.

3 “Why nation-state cyberattacks must be top of mind for CISOs,” TechTarget Network, last accessed November 29, 2020.

4 “Microsoft Discovers a Second Hacking Team Exploiting SolarWinds Orion Software,” CPO Magazine, last accessed February 16, 2021.

5 “SolarWinds software used in multiple hacking attacks: What you need to know,” ZDNet, last accessed February 16, 2021.

6 Ibid.

7 Ibid.

8 “Cybersecurity experts say US needs to strike back after Solarwinds hack,” CBS News 60 Minutes Overtime, last accessed February 16, 2021.

9 See, for example, “Dissecting a Chinese APT Targeting South Eastern Asian Government Institutions,” Bitdefender Draco Team Whitepaper, last accessed November 29, 2020.

10 See "Microsoft says three APTs have targeted seven COVID-19 vaccine makers," ZDNet, available online at: https://www.zdnet.com/article/microsoft-says-three-apts-have-targeted-seven-covid-19-vaccine-makers/.

11 "Sheep dip your removable storage devices to reduce the threat of cyber-attacks," Solutions, last accessed November 29, 2020.

12 "Stuxnet was work of U.S. and Israeli experts, officials say," Washington Post, last accessed February 16, 2021.

13 Goud, Naveen, "Israel Water Supply Authority hit by Cyber Attack," Cybersecurity Insiders, last accessed November 29, 2020.

14 “‘Dangerous Stuff’: Hackers Tried to Poison Water Supply of Florida Town,” New York Times, last accessed February 16, 2021.

15 "Iranian Advanced Persistent Threat Actors Threaten Election-Related Systems," US Cybersecurity & Infrastructure Security Agency Alert (AA20–296B), published October 22, 2020, last accessed November 29, 2020.

16 Ross, Brian, Schwartz, Rhonda, and Meek, James Gordon, "Officials: Master Spy Vladimir Putin Now Directly Linked to US Hacking," ABC News, last accessed November 29, 2020.

17 Heller, Michael, "Lazarus Group hacker charged in WannaCry, Sony attacks," TechTarget Network, last accessed November 29, 2020.

18 "Ransomware gangs made at least $350 million in 2020," ZDNet, published February 2, 2021, last accessed February 16, 2021.

19 "Cyber-attack: Europol says it was unprecedented in scale," BBC News, published May 13, 2017, last accessed November 29, 2020.

20 Wetsman, Nicole, “Woman dies during a ransomware attack on a German hospital,” The Verge, published September 17, 2020, last accessed November 29, 2020.

21 Dowse, Andrew and Bachmann, Sascha-Dominik, “Explainer: what is ‘hybrid warfare’ and what is meant by the ‘grey zone’?” The Conversation, published June 17, 2019, last accessed November 29, 2020.

22 “CyberPeace: From Human Experience to Human Responsibility,” Medium, last accessed February 16, 2021.

23 “Business and Human Rights Technology Project (‘B-Tech Project’): Applying the UN Guiding Principles on Business and Human Rights to Digital Technologies,” last accessed November 30, 2020.

24 For more information, see www.un.org/en/digital-cooperation-panel/, last accessed November 30, 2020.

25 For more information, see pariscall.international/en/, last accessed November 30, 2020.

26 For more information, see www.christchurchcall.com, last accessed November 30, 2020.

27 For more information, see cybertechaccord.org, last accessed November 30, 2020.

28 “Call to Governments to refrain from carrying out offensive cyber operations and cyberattacks against critical infrastructure.”

29 Cyber Peer Review Mechanism.


References

CyberPeace Institute. (n.d.). What is the infodemic? Retrieved from https://cyberpeaceinstitute.org/blog/2020-03-25-what-is-the-infodemic
Delerue, F. (2020). Cyber operations and international law. Cambridge University Press. https://doi.org/10.1017/9781108780605
Diehl, P. F. (2019). Peace: A conceptual survey. Oxford Research Encyclopedia of International Studies. Oxford University Press. https://doi.org/10.1093/acrefore/9780190846626.013.515
Goodin, D. (2020, September 19). A patient dies after a ransomware attack hits a hospital. Wired. Retrieved from www.wired.com/story/a-patient-dies-after-a-ransomware-attack-hits-a-hospital/
International Committee of the Red Cross. (2020, February 11). Norms for responsible State behavior on cyber operations should build on international law. Retrieved from www.icrc.org/en/document/norms-responsible-state-behavior-cyber-operations-should-build-international-law
Marlin-Bennett, R. (2022). Cyber Peace: Is that a thing? In Shackelford, S. J., Douzet, F., & Ankersen, C. (Eds.), Cyber Peace: Charting a path toward a sustainable, stable, and secure cyberspace (pp. 3–21). Cambridge University Press.
Open Ended Working Group on developments in the field of information and telecommunications in the context of international security. (2020, May 27). Second "Pre-draft" of the report of the OEWG on developments in the field of information and telecommunications in the context of international security. Retrieved from https://front.un-arm.org/wp-content/uploads/2020/05/200527-oewg-ict-revised-pre-draft.pdf
Roff, H. M. (2016). Cyber Peace: Cybersecurity through the lens of positive peace. New America. Retrieved from https://static.newamerica.org/attachments/12554-cyber-peace/FOR%20PRINTING-Cyber_Peace_Roff.2fbbb0b16b69482e8b6312937607ad66.pdf
Satter, R., Stubbs, J., & Bing, C. (2020, March 23). Exclusive: Elite hackers target WHO as coronavirus cyberattacks spike. Reuters. Retrieved from www.reuters.com/article/us-health-coronavirus-who-hack-exclusive/exclusive-elite-hackers-target-who-as-coronavirus-cyberattacks-spike-idUSKBN21A3BN
Shackelford, S. J. (2020). Inside the global drive for Cyber Peace: Unpacking the implications for practitioners and policymakers. SSRN. Retrieved from https://ssrn.com/abstract=3577161 or http://dx.doi.org/10.2139/ssrn.3577161
Shackelford, S. J. (2022). Introduction. In Shackelford, S. J., Douzet, F., & Ankersen, C. (Eds.), Cyber Peace: Charting a path toward a sustainable, stable, and secure cyberspace (pp. xix–xxxi). Cambridge University Press.
United Nations Department of Economic and Social Affairs. (2020). The Sustainable Development Goals Report 2020. Retrieved from https://unstats.un.org/sdgs/report/2020/
United Nations Development Programme. (2019). Human Development Report 2019: Beyond income, beyond averages, beyond today: Inequalities in human development in the 21st century. Retrieved from http://hdr.undp.org/sites/default/files/hdr2019.pdf
United Nations Development Programme. (n.d.). Goal 9: Industry, innovation and infrastructure. Retrieved from www.undp.org/content/undp/en/home/sustainable-development-goals/goal-9-industry-innovation-and-infrastructure.html
United Nations, General Assembly. (2015, July 22). Report of the group of governmental experts on developments in the field of information and telecommunications in the context of international security, A/70/174. Retrieved from https://undocs.org/A/70/174
Walljasper, J. (2011, October 2). Elinor Ostrom's 8 principles for managing a commons. On the Commons. Retrieved from www.onthecommons.org/magazine/elinor-ostroms-8-principles-managing-commmons
World Health Organization. (2020). Novel Coronavirus (2019-nCoV): Situation Report – 13. Retrieved from www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf?sfvrsn=195f4010_6

References

Baksh, M. (2020). NSA piloting secure domain name system service for defense contractors. Nextgov. Retrieved October 28, 2020, from www.nextgov.com/cybersecurity/2020/06/nsa-piloting-secure-domain-name-system-service-defense-contractors/166248/
Buchanan, B. (2020). The hacker and the state: Cyber attacks and the new normal of global politics. Harvard University Press.
Center for Internet Security. (n.d.). Ransomware: Facts, threats, and countermeasures. Retrieved October 2, 2020, from www.cisecurity.org/blog/ransomware-facts-threats-and-countermeasures/
CheckPoint. (2020). Cyber attack trends, mid-year report. CheckPoint. Retrieved October 29, 2020, from www.checkpoint.com/downloads/resources/cyber-attack-trends-report-mid-year-2020.pdf
Columbus, L. (2019). 74% of data breaches start with privileged credential abuse. Forbes. Retrieved April 21, 2020, from www.forbes.com/sites/louiscolumbus/2019/02/26/74-of-data-breaches-start-with-privileged-credential-abuse/#7bd92a33ce45
Field, M. (2018). WannaCry cyber attack cost the NHS £92m as 19,000 appointments cancelled. The Telegraph. Retrieved September 22, 2020, from www.telegraph.co.uk/technology/2018/10/11/wannacry-cyber-attack-cost-nhs-92m-19000-appointments-cancelled/
FireEye. (2018). Email Threat Report. Retrieved October 30, 2020, from www.fireeye.com/offers/rpt-email-threat-report.html
Fruhlinger, J. (2020). The OPM hack explained: Bad security practices meet China's Captain America. CSO. Retrieved April 21, 2020, from www.csoonline.com/article/3318238/the-opm-hack-explained-bad-security-practices-meet-chinas-captain-america.html
Gorey, C. (2020). Nation-state actors may be running phishing scams that exploit the coronavirus. Siliconrepublic. Retrieved May 5, 2020, from www.siliconrepublic.com/enterprise/coronavirus-phishing-scams
Greenberg, A. (2018). The untold story of NotPetya, the most devastating cyberattack in history. Wired. Retrieved September 22, 2020, from www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/
Greenberg, A. (2019). Sandworm: A new era of cyberwar and the hunt for the Kremlin's most dangerous hackers. Doubleday.
Latto, N. (2020). What is WannaCry? Avast Academy. Retrieved April 22, 2020, from www.avast.com/c-wannacry
Lemos, R. (2020). Three years after WannaCry, ransomware accelerating while patching still problematic. DarkReading. Retrieved October 29, 2020, from www.darkreading.com/attacks-breaches/three-years-after-wannacry-ransomware-accelerating-while-patching-still-problematic/d/d-id/1337794
Lipton, E., Sanger, D., & Shane, S. (2016). The perfect weapon: How Russian cyberpower invaded the U.S. The New York Times. Retrieved April 21, 2020, from www.nytimes.com/2016/12/13/us/politics/russia-hack-election-dnc.html
Nyczepir, D. (2020). CISA looks to offer a new DNS resolver to civilian agencies and beyond. FedScoop. Retrieved October 28, 2020, from www.fedscoop.com/cisa-dns-resolver-recursive/
Rein, L. (2015). Reacting to Chinese hack, the government may not have followed its own cybersecurity rules. Washington Post. Retrieved October 30, 2020, from www.washingtonpost.com/news/federal-eye/wp/2015/06/18/reacting-to-chinese-hack-the-government-may-not-have-followed-its-own-cybersecurity-rules/
Shostack, A., Jacobs, J., & Baker, W. (2018). Measuring the impact of DMARC's part in preventing business email compromise. Global Cyber Alliance. Retrieved September 22, 2020, from www.globalcyberalliance.org/wp-content/uploads/GCA-DMARC-Exec-Summary.pdf
Shostack, A., Jacobs, J., & Baker, W. (2019). The economic value of DNS security. Global Cyber Alliance. Retrieved September 22, 2020, from www.globalcyberalliance.org/wp-content/uploads/GCA-DNS-Exec-Summary-Report.pdf
Starks, T. (2016). House report: Massive OPM breaches a 'failure' of leadership. Politico. Retrieved October 29, 2020, from www.politico.com/story/2016/09/opm-cyber-hacks-house-report-227817
Vaughan-Nichols, S. (2015). Phishing e-mail delays OPM hack remediation efforts. ZDNet. Retrieved April 22, 2020, from www.zdnet.com/article/phishing-e-mail-temporarily-stops-opm-hack-remediation-efforts/


Table 11.1 Most salient barriers to addressing different types of empirical cyber research questions
