Any new account of privacy, especially one with the unusual combination of elements presented here, will require a conducive institutional environment. The environment proposed here is based on a new notion of regulation, in the broad sense of social control rather than the narrower sense of subsidiary legislation, and despite the successes claimed for that latter form. In this broad sense, present regulation derives from the spread of power out of sovereign institutions: a means by which that sovereign power is transformed but still empowered, visible in such shapes as biopower and the algorithmic determination of human behaviour. The presently dominant form of regulation, responsive regulation, is best seen as mythological, especially through the manner in which it is informed by the republicanism of Pettit. In response, the new sense of regulation as social control focuses on the reimagining of institutions and the promotion of the existential interests of the individual to centre-stage. This is the reverse of current priorities.
We need to understand neuroscience as an emanation of artificial intelligence. By this we mean that a range of methods is being used to understand not only how the brain functions but also how it might be brought to function. Such neural change will increasingly come from connecting the brain to external sources of intelligence, both artificial and human. Yet the algorithms that are driving these developments are not neutral. As the world is itself increasingly being claimed to be algorithmic, we need to see not only that algorithms – and the data they interpret – are designed but that this design carries personal and cultural presumptions. We are re-creating the world through algorithms, and that is both a form of idealism and one which is, because of that cultural frame, mythological in the sense of the dominant social dynamic. That is, because algorithmic designs are not determined by each individual, they are technologies of subjection, willingly embraced or imposed. They are formative not only of the world but also of the individual self. This process is as evident in virtual and augmented realities as it is in clinical neuroscience.
The illegitimacy of present accounts of privacy is revealed by the manner in which normalisation has long taken place through a series of social transitions. Other historical perspectives on societal evolution have been adopted, but the mythological analysis here is distinctive. Following Christian confessionalism and pastoralism, we see the methods of governmentalising discipline that led to the civilising of the sovereign State through the rise of the bourgeoisie; then the liberalism and neoliberalism that ultimately promoted the dominance of the Market over the State, by which the consumer has been constructed; and now the Technological ‘algorisation’ of social and individual perspective and practice. Many of the elements that have accumulated in this long process are thereby being brought to bear in technologies of the self as self-creation. Each of these regimes was founded on the distancing and camouflage of existential reality, inducing subjection to the ideas and practices promoted within these mythological magnitudes, primarily for the benefit of their respective dominant interests.
A valid new sense of privacy would need to be founded on the principles of the existential, respectful self-responsibility of all individuals and the promotion of which would need to be complemented by a reimagined State, Market and technological design principles. This will allow the embrace, not the denial, of the value of technological development, especially in neuroscience. In this context, each individual would have an evolving personal technology strategy with progressive/enhancement and conservative/protection elements. From that, respectful self-responsibility would require both sharing information and acquiring it, all typically under the individual’s control, including through data and algorithms that are designed and applied under their direction. The initiatives undertaken by the IEEE and MyData are moves in the right direction, but they remain prey to mythological interpretation. The principles of this new sense of privacy are then tested by application to standard and well-known privacy dilemmas, including on case law.
In developing a new ethic as a foundation for a non-mythological notion of privacy, we need first to put aside the informational ethics of Floridi, as that is founded on the conception of the individual as, ontologically, information. We demonstrate that this is a mythological position. Capurro has seen the errors of that argument in the dehumanisation of the individual. In moving forward, we examine the value of the full range of the standard ethical qualities on which our relationship with technology is said to be best based and thereby how we should manage its intrusions into privacy. These include dignity, liberty, identity, responsibility, democratic principles, equality, human rights and the common good. However, each of these is shown ultimately to be vulnerable to a range of shortcomings. It is argued that only respectful self-responsibility – that is, responsibility to and for oneself which is respectful of others and which relies on existential values – can act as a solid ethical foundation, although these other principles can be claimed to be of secondary value. We conclude the argument here by pointing out how that principle would not fall prey to bourgeois aspirations.
Law by its very nature tends towards the constraint of the decision-making of individuals, and so has an inherent – but not inevitable – mythological disposition, especially when in combination with both the sovereign and regulatory power of the State. Thereby it reflects shifting forms of – most recently neoliberal – power and truth. A non-mythological law will need to be framed constitutionally but will also require a rethinking of the rule of law, which currently consists mostly of anatomical lists of preferred characteristics. There are alternative approaches in the form of teleological accounts – preferred here – prominent amongst which is that of Krygier. However, he does not go far enough, settling for a critical exploration of social traditions and seeing the arbitrary use of force as the dominant target. This tends to ignore the spread of sovereign power into regulatory forms, which are as intrusive as arbitrary power, albeit in a different manner. An existential rule of law would be founded on purpose-based, fiduciary principles that would commit agencies to promoting the non-mythological interests of self-responsible individuals. Trust would play a valuable but secondary role in such arrangements.
Part I has been concerned with the significance of emerging neuroscience for the idea and practice of privacy and with a range of contextual factors through which that significance needs to be understood.
Part II has been concerned with the reimagining of the social infrastructure in a manner that will support and encourage the individual pursuit of the existential responsibility to and for oneself.
Examining privacy theory begins with what is absent from present accounts, that is, the central importance of our personal concerns about our existential reality. These concerns, and the disposition to seek ways to distance and camouflage them with constructed concerns, are at the heart of the inducements of what emerged as the mythological trajectory of Deity, State, Market and Technology. The impact of the ideas and practices left behind by these failed but persisting magnitudes is what is normalised in us and comes to comprise what we see as our private world. However, these ideas and practices are mythological subjections. There are two dominant present accounts of privacy: the ‘Constitutional’ (which is sourced from the ideas and practices of Deity, State and Market), which is primarily a bourgeois account, and the ‘Selected Flow of Information’ account (which is inspired by the movement of information within a social context, especially in the technological age). Given the mythological content of both accounts, the way forward needs to take an entirely different approach. This will relocate existential reality to the centre of its frame but also emphasise an ethic which rejects subjection, one framed by respectful self-responsibility.
Neuroscience has begun to intrude deeply into what it means to be human, an intrusion that offers profound benefits but will demolish our present understanding of privacy. In Privacy in the Age of Neuroscience, David Grant argues that we need to reconceptualize privacy in a manner that will allow us to reap the rewards of neuroscience while still protecting our privacy and, ultimately, our humanity. Grant delves into our relationship with technology, the latest in what he describes as a historical series of 'magnitudes', following Deity, the State and the Market, proposing the idea that, for this new magnitude (Technology), we must control rather than be subjected to it. In this provocative work, Grant unveils a radical account of privacy and an equally radical proposal to create the social infrastructure we need to support it.
Although we are talking about the automated production of memory in this book, these systems are still anchored by classification systems that open them up to a much older and well-established ‘order of things’, as Foucault (2002) put it. It is also important to note that ‘The Taxonomy of Memory Themes’ discussed in Chapter Two served as the ‘ground truth’ (Amoore, 2020), so to speak, for the development of Facebook Memories. Established prior to its development, the memory classifications generated by Facebook's research studies were fed into the design of Facebook's current throwback feature. This was effectively a moment in which the formalization of a computational problem occurred and where there was an attempt to render the indeterminable and contingent into something calculable (see Fazi, 2018). Once this taxonomy of memories was in place, it provided the ranking algorithms with a clear-cut computational problem to ‘solve’ and optimize: what to surface, to whom and when. In other words, once there was a system in place for classifying memories within the taxonomy, the system had to then decide which memory, from all these many classified memories, should be targeted at the intended recipient and when they should receive it. Once the classificatory system is active within this social media archive, the focus then has to shift to retrieval and to the way in which this retrieval is instantiated in processes of ranking. Bringing memories to the surface requires, in this logic, a system by which they can be ranked – memories ranked at a certain level are the ones that then become visible. It is this ranking of memory that this chapter deals with.
Feedback loops and the surfacing of memories
In a Facebook Research report titled ‘Engineering for nostalgia: building a personalized “On This Day” experience’, Manohar Paluri and Omid Aziz (2016) outline the software engineering side of building the earlier iteration of Facebook Memories, called On This Day. The aim behind this, they explain, is that they ‘wanted to make sure On This Day shows people the memories they most likely want to see and share, especially when it comes to the memories they see in News Feed’ (Paluri & Aziz, 2016).
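The classify-then-rank logic described above can be sketched in miniature. The following is a hypothetical illustration only: the theme names, weights, and scoring rule are invented for the example and bear no relation to Facebook's actual taxonomy or ranking system. It simply shows the two-step structure the chapter analyses: memories are first labelled with a theme from a fixed taxonomy, then scored and ranked so that only the top-ranked items ‘surface’.

```python
from dataclasses import dataclass

# Illustrative taxonomy weights (assumed for this sketch, not the real ones).
TAXONOMY_WEIGHTS = {"friends": 1.0, "travel": 0.8, "milestone": 1.2}

@dataclass
class Memory:
    post_id: int
    theme: str          # classification step: a label drawn from the taxonomy
    engagement: float   # e.g. past likes/comments, normalised to 0..1

def rank_memories(memories, top_k=1):
    """Score each classified memory and return the top_k chosen to surface."""
    def score(m):
        # Ranking step: theme weight from the taxonomy times past engagement.
        return TAXONOMY_WEIGHTS.get(m.theme, 0.5) * m.engagement
    return sorted(memories, key=score, reverse=True)[:top_k]

candidates = [
    Memory(1, "travel", 0.6),      # score 0.48
    Memory(2, "milestone", 0.7),   # score 0.84
    Memory(3, "friends", 0.5),     # score 0.50
]
surfaced = rank_memories(candidates, top_k=1)
print([m.post_id for m in surfaced])  # → [2]
```

Even in this toy form, the point the chapter makes is visible: what ‘surfaces’ is wholly determined by the prior classification and the weights attached to it, so the taxonomy quietly decides which memories become visible.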
Even something as intimate and personal as memory cannot escape the reach of social media and their datafied and circulatory logic. In this book we have explored the underlying processes that enable the selection and targeting of past content in the form of repackaged ‘memories’. Here we have highlighted the way that classification and ranking operate together to enable memories to resurface in social media throwback features. Through the combination of classification and ranking, the automated production and delivery of so-called ‘memories’ means that social media users do not need to dig; they are not excavating, as Walter Benjamin suggested, but instead that excavation is being done on their behalf. Benjamin noted that memories were always a way of mediating the masses of past experiences; this has not changed. These automated systems of social media remediate those memories through the classificatory systems that group them and then prioritize them, making them visible or invisible to us, and shaping how individuals and groups participate in those memories. Because, as Benjamin pointed out, memories have always been a mediation of the past, they can readily be reworked by these automated systems. As we have seen, though, one problem with the automatic production of memory is authenticity. It is the act of producing memories that lends them authenticity; if that work becomes automated, then potential tensions emerge around the legitimacy of that memory.
‘The promise of automation’, writes Mark Andrejevic (2020: 13), ‘is to encode the social so that it can be offloaded onto machines.’ In order to see the consequences this will have for memory and remembering, we suggest that there is a need to better understand the underlying classification and prioritization processes, what they are intended to do, as well as what implications and outcomes they have for people in everyday life. As a result, this book has sought to make a specific intervention into the automatic production of memory. Our contribution here has been to examine the role played by classification and ranking within these processes of automation. Once memories are opened up to classification and ranking, then the memories themselves will change, but so too will our understanding of what memories are. The concept of memory is unlikely to go untouched by these developments – indeed, we have sought to foreground the tensions that these processes of redefinition are already creating through features such as Facebook Memories.