The kind of automatic production and targeting of memories that we have described in the previous chapters is still relatively new. Yet it is already widespread and deeply embedded in how people relate to their past through social media content. As we have shown, processes of classification and ranking are central to how people encounter past social media content as memories. What this will mean for collective and individual memory will take some time to fully understand. In this chapter, however, we turn to a project recently completed by the first-named author in order to begin thinking through what these changes might mean, examining how people come to respond and react to these packaged and targeted memories. The previous chapters have showcased how the memorable is partitioned and promoted; in this chapter, we reflect more directly on the reception of the classified and ranked memories with which users are presented. Given the scope of the issues, this is not a complete endeavour, but it begins to offer glimpses into the variegated reception of automatically sorted memories that might then be pursued further. It indicates the directions that memory making may be taking in the context of social media and mobile devices. In short, this chapter begins to explore something that is well-established but, as yet, little understood. As discussed in Chapter One, we may know something of what happens when digital memories or mediated memories become integrated, but this particular chapter is about how people react to targeted memories.
Partitioning and promoting the memorable through processes of classification and ranking assumes that the memory categories produced are fixed and distinct (Mackenzie, 2015). Yet, as we shall show, the processes of classification and ranking do not necessarily mean that memories fit neatly into the fixed grids of Facebook's taxonomy; nor are the reactions entirely in keeping with those imagined in the rhetorical ideals of the social media providers and coders. As this chapter shows, the reception of targeted memories in everyday life brings out the various nuances and tensions generated by the dual process of classification and ranking.
Classification processes are inevitably powerful within any type of archive. The way content is classified shapes how documents are interpreted and, crucially, how they are retrieved. If we approach social media as a form of archive, then we can begin to see how the ordering processes of classification and sorting that occur within these media may be powerful in shaping how people engage with their past content and how individual biographies are made accessible. As we will explore, the ordering of the archive is crucial for understanding how it functions and what can be pulled from its vast stores.
The types of archives that are used to document life are powerful in both their presence and their outcomes. For some, the archive sits at the centre of modern power formations. Derrida (1996: 4 n1) famously argued that ‘there is no political power without control of the archive, if not of memory. Effective democratization can always be measured by this essential criterion: the participation in and the access to the archive, its constitution, and its interpretation.’
If we treat social media as a population of people effectively participating within a large archival structure, then social media bring the politics of the archive to the centre of everyday life and social interaction (see Beer, 2013). Derrida's point is that the structures of the archive afford its uses and what can then be said with it or retrieved from it. He argues that ‘the technical structure of the archiving archive also determines the structure of the archivable content even in its very coming into existence and in its relationship to the future’ (Derrida, 1996: 17; original emphases). The form that the archive takes also dictates the types of items or documents that come to be stored within it; the archive imposes its logic upon its content. Derrida adds, crucially, that ‘the archivization produces as much as it records the event’ (Derrida, 1996: 17). The technical structures of the archive need to be understood in order for its politics to be revealed, particularly as they intervene in the relations between the past and the future. This is something that we will keep in focus as we move through this and the following chapter.
Social media profiles inevitably leave traces of a life being lived. These biographical data trails are a tempting resource for ‘platform capitalism’ (Langley & Leyshon, 2017; Srnicek, 2017). As they have integrated themselves deeply into everyday routines and interactions, social media have captured a wealth of biographical information about their users. The production and maintenance of profiles has led to the recording and sharing of detailed documentary impressions. This accumulation of the day-to-day has led to the conditions in which prior content can be readily repurposed to suit the rapid circulations of social media. Moving beyond their initial remit as communication and networking platforms, social media have expanded to become memory devices. As people's lives are captured, social media platforms continue to seek out ways to recirculate these traces and to render them meaningful for the individual user. The archive is vast, and so automated approaches to memory making have been deployed in order to resurface this past content, selecting what should be visible and rendering it manageable. It is here that this book makes an intervention – this is a book about algorithmic memory making within social media. What is particularly important, as we will show, are the ways that social media's automated systems are actively sorting the past on behalf of the user.
In a short fragment composed around 1932, a piece that went unpublished in his lifetime, Walter Benjamin wrote of the ‘excavation’ of memories. Memories, the fragment suggests, are something to be actively mined from the continually accumulating remnants of everyday life. Memories require action, he implies; they are something to be achieved, the product of active labour. As a result, digging metaphors permeate Benjamin's single paragraph of text. He pictures the individual pursuing their memories as a kind of archaeologist combing through the dirt to uncover and reveal the items below. He opens by claiming that, ‘Language has unmistakably made plain that memory is not an instrument for exploring the past, but rather a medium. It is the medium of that which is experienced, just as the earth is the medium in which ancient cities lie buried’ (Benjamin, 1999a: 576).
Governing Privacy in Knowledge Commons explores how privacy impacts knowledge production, community formation, and collaborative governance in diverse contexts, ranging from academia and the IoT to social media and mental health. Using nine new case studies and a meta-analysis of previous knowledge commons literature, the book integrates the Governing Knowledge Commons framework with Helen Nissenbaum's Contextual Integrity framework. The multidisciplinary case studies show that personal information is often a key component of the resources created by knowledge commons. Moreover, even when it is not the focus of the commons, the governance of personal information may require community participation and boundaries. Taken together, the chapters illustrate the importance of exit and voice in constructing and sustaining knowledge commons through appropriate personal information flows. They also shed light on the shortcomings of current notice-and-consent style regulation of social media platforms. This title is also available as Open Access on Cambridge Core.
Diverse and increasingly comprehensive data about our personal lives are collected. When these personal data are linked to health records or to other data collected in our environment, such as those held by state administrations or financial systems, they have huge potential for public health research and for society in general. Precision medicine, including pharmacogenomics, depends particularly on the potential of data linkage. With new capacities to analyze linked data, researchers today can retrieve and assess valuable and clinically relevant information. One way to develop such linked data sets and to make them available for research is through health data cooperatives. An example of such a health data cooperative is MIDATA, recently established in Switzerland and the main focus of this chapter. In response to concerns about the present health data economy, MIDATA was founded to provide a governance structure for data storage that supports individuals’ digital self-determination by allowing MIDATA members to control their own personal data flows and to store such data in a secure environment.
The rise of social media has raised questions about the vitality of privacy values and concerns about threats to privacy. The convergence of politics with social media use amplifies the privacy concerns traditionally associated with political organizing, particularly when marginalized groups and minority politics are involved. Despite the importance of these issues, there has been little empirical exploration of how privacy governs political activism and organizing in online environments. This chapter explores how privacy concerns shape political organizing on Facebook through detailed case studies of how groups associated with March for Science, Day Without Immigrants (“DWI”), and Women’s March govern information flows. These cases address distinct issues while operating in similar contexts and on the same timescales, allowing for the exploration of privacy in the governance of personal information flows within political organizing and Facebook sub-communities. Privacy practices and concerns differed between the cases, depending on factors such as the nature of the group, the political issues it confronts, and its relationships to other organizations or movements.
Privacy has traditionally been conceptualized in an individualistic framing, often as a private good traded off against other goods. This chapter views the process of privacy enforcement through the lens of governance and the situated design of sociotechnical systems. It considers the challenges in formulating and designing privacy as a commons (as per the Governing Knowledge Commons framework) when privacy is ultimately enacted (or not) in complex sociotechnical systems. It identifies six distinct research directions pertinent to the governance and formulation of privacy norms, spanning how design tools could be used to develop strategies and approaches for formulating, designing, and sustaining a privacy commons, and how specific technical formulations of and approaches to privacy can serve the governance of such a commons.
The Internet of Everything takes the notion of the IoT a step further by including not only the physical infrastructure of smart devices but also its impacts on people, business, and society. Our world is getting more connected, if not smarter, but to date governance regimes have struggled to keep pace with this rate of innovation. It remains an open question whether security and privacy protections can or will scale within this dynamic and complex global digital ecosystem, and whether law and policy can keep up with these developments. The natural question, then, is whether our approach to governing the Internet of Everything is, well, smart. This chapter explores what lessons the Institutional Analysis and Development (IAD) and Governing Knowledge Commons (GKC) Frameworks hold for promoting security and privacy in an Internet of Everything, with special consideration of the promise and peril of blockchain technology for building trust in such a massively distributed network. Particular attention is paid to governance gaps in this evolving ecosystem and to the state, federal, and international policies needed to better address security and privacy failings.
The rules and norms that shape the practices of institutional researchers and other data practitioners with regard to student data privacy in higher education could be studied using descriptive methods, which attempt to illustrate what is actually being done in this space. But we argue that it is also important for practitioners to become reflexive about their practice while they are in the midst of using sensitive data, in order to make responsive practical and ethical modulations. To achieve this, we conducted a STIR, or socio-technical integration research. In the data, the STIR of a single institutional researcher, we see some evidence of changes in information flow, reactions to those changes, and ways of thinking and doing that reestablish privacy-protecting rules-in-use.
Personal information is inherently about someone, is often shared unintentionally or involuntarily, flows via commercial communication infrastructure, and can be instrumental and often essential to building trust among members of a community. As a result, privacy commons governance may be ineffective, illegitimate, or both if it does not appropriately account for the interests of information subjects, or if infrastructure is owned and designed by actors whose interests are misaligned or in conflict with those of information subjects. Newly emerging themes include the importance of trust, the contestability of commons governance legitimacy, and the co-emergence of contributor communities and knowledge resources. The contributions in this volume also confirm and deepen insights into recurring themes identified in previous GKC studies, while the distinctive characteristics of personal information add nuance and uncover limitations. The studies in this volume move us significantly forward in our understanding of knowledge commons, while opening up important new directions for future research and policy development, as discussed in this concluding chapter.
This introduction to Governing Privacy in Knowledge Commons discusses how meta-analysis of past case studies has yielded additional questions to supplement the GKC framework, based on the specific governance challenges surrounding personal information. Building on this renewed understanding, a series of new case studies is organized around the different roles that personal information plays in commons arrangements. The knowledge commons perspective highlights the interdependence between knowledge flows aimed at creative production and personal information flows. Madelyn discusses how those who systematically study knowledge commons governance with an eye toward knowledge production routinely encounter privacy concerns and values, along with rules-in-use that govern appropriate personal information flow.
Drawing upon the GKC framework, this chapter presents an ethnographic study of Woebot – a therapy chatbot designed to administer a form of cognitive behavioral therapy (“CBT”). Section 3.1 explains the methodology of this case study. Section 3.2 describes the background contexts that relate to anxiety as a public health problem. These include the nature of anxiety and historical approaches to diagnosing and treating it, the ascendency of e-Mental Health therapy provided through apps, and relevant laws and regulations. Section 3.3 describes how Woebot was developed and what goals its designers pursued. Section 3.4 describes the kinds of information that users share with Woebot. Section 3.5 describes how the designers of the system seek to manage this information in a way that benefits users without disrupting their privacy.