
Part III - Private Influence on Decision-Making

Published online by Cambridge University Press:  18 February 2023

Brett M. Frischmann
Affiliation:
Villanova University, Pennsylvania
Michael J. Madison
Affiliation:
University of Pittsburgh School of Law
Madelyn Rose Sanfilippo
Affiliation:
University of Illinois, Urbana-Champaign

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2023
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

7 Technofuturism in Play: Privacy, Surveillance, and Innovation at Walt Disney World

Madelyn Rose Sanfilippo and Yan Shvartzshnaider
Introduction

In recent years, families traveling to Disney World have embraced technology. They use apps and MagicBands – password-protected smart bracelets employing radio frequency identification (RFID) – to check in and out of hotels, enter their rooms and the parks, manage and redeem FastPass reservations, and pay for meals and souvenirs, often without ever speaking to a cast member. As Disney World-type technologies are introduced into the real world, it is vital to investigate the institutionalization and acceptance of pervasive monitoring via complex networks of unnoticed sensors and integrated systems. We discuss the values and functions of these technologies and how they ultimately shape our privacy preferences and expectations.

We frame Disney as a lab for public technology applications and use it to explore information governance challenges associated with pervasive location monitoring, facial recognition, data integration across contexts, and the seamlessness of smart experiences and interactions, facilitated by MagicBands. We employ Disney as an analytical model for the privacy challenges and potential governance strategies that arise in other public applications.

We examine Disney’s complex balance between adhering to social privacy and security norms and its various covert and explicit violations of privacy and security expectations. For example, Disney’s prioritization of security puts many parents and consumers at ease.Footnote 1 Yet Disney has long identified a false trade-off between privacy and security, making extreme “invasions of privacy” in pursuit of “absolute security” (Smoodin 1994). Disney World, as an immersive and contained environment, holds more surveillance per square inch than the average American prison and is somewhat singular in that the surveillance is designed to promote consumption and not necessarily to protect visitors or the public (Project on Disney 1995). This is analogous to the conflicts of interest around privacy that arise in governing smart cities and public surveillance in other contexts.

We use the contextual integrity (CI) framework to analyze privacy implications and harms (Solove 2005) of information aggregation. CI defines privacy as appropriate information flows in a given context (Nissenbaum 2009). We explore normative violations around seamless integration of information flows in public spaces and extending to other Disney media. We also explore how this seamlessness is facilitated over time through processes of techno-social engineering creep (Frischmann and Selinger 2018), with the MagicBand – designed for use as a ticket and pass to reserved attractions or experiences and extended as a room key, method of payment, and means of location-tracking – at the center of this process. Beyond studying normative agreement and disagreement, through the language used to frame perceptions and discussions of specific systems and technological innovation objectives, we investigate the perceived differences between the space of Walt Disney World (WDW) and other contexts, so as to understand the governance implications for parallel adoptions of technology.

Methodology

In this empirical case, we combine the GKC and CI frameworks to explore institutions and institutionalization around data practices and information flows (Sanfilippo, Frischmann, and Strandburg 2018) amid cross-context data integration as practiced at Disney World. We examine how data collection and processing are changing, the social perceptions of those practices, and what smart cities can learn from Disney, relative either to practice or to governance. We also sought to understand the underlying norms, values, and objectives associated with different stakeholder groups, technologies, and outcomes.

We structured our analysis of the formation of these norms, as well as the governance in effect, through the Governing Knowledge Commons (GKC) framework (Frischmann, Madison, and Strandburg 2014). We frame our privacy analysis in terms of contextual integrity (CI) to compare information flows within contexts, based on characterization in terms of five attributes: (1) information senders, (2) information subjects, (3) information types, and (4) information recipients, as well as (5) transmission principles, which reflect contextual norms of appropriateness (Nissenbaum 2009). We then empirically documented the prevalence and visibility of data collection within Disney Parks, including the various identifiable categories of sensors that interact with apps and with MagicBands. Following the Shvartzshnaider et al. (2019) methodology, we annotated statements that describe information-handling practices in terms of the relevant CI parameters in a prescribed-flows analysis of the Walt Disney Company privacy policy and the My Disney Experience – Frequently Asked Questions (FAQs) page.
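To make the annotation concrete, the following is a minimal sketch, in Python, of how a single policy statement can be represented in terms of the five CI parameters; the class name, field names, and example values are ours for illustration only, not an artifact of the actual coding process, which was performed manually following Shvartzshnaider et al. (2019).

from dataclasses import dataclass
from typing import Optional

@dataclass
class InformationFlow:
    # One information flow described by the five CI parameters.
    sender: str                  # who transmits the information
    subject: str                 # whom the information is about
    information_type: str        # what kind of information flows
    recipient: str               # who receives the information
    transmission_principle: str  # the condition governing the flow
    source: Optional[str] = None # annotated policy or FAQ statement

# Illustrative (paraphrased, hypothetical) annotation of a policy statement.
flow = InformationFlow(
    sender="Walt Disney Parks and Resorts",
    subject="park guest",
    information_type="MagicBand location reads",
    recipient="third-party service providers",
    transmission_principle="to operate the service, per the privacy policy",
    source="Walt Disney Company privacy policy",
)

# Prescribed flows that leave a CI parameter unstated are candidates
# for further scrutiny.
missing = [name for name, value in vars(flow).items()
           if name != "source" and not value]
print(missing or "all five CI parameters specified")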

Furthermore, we evaluated stakeholder groups’ perceptions within privacy and surveillance action arenas at Disney World via sentiment analysis of text discussing each of these issues from public blog posts. We differentiated between official Disney perspectives, endorsed travel blogs, and perspectives of Disney consumers; note that Disney consumers are a distinct, nonrepresentative subset of the general public. We also analyzed news articles and Disney documentation for additional perspective and context. A total of 12,506 posts from 112 blogs were considered. More details on the methodological approach are published by Sanfilippo and Shvartzshnaider (2021).
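As an illustration of the sentiment analysis step, the sketch below computes a compound polarity score per stakeholder group and action arena; it assumes the VADER analyzer from NLTK and invented example posts, since the chapter does not specify the particular sentiment tool or scoring pipeline used.

from collections import defaultdict

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

# Invented example records: (stakeholder group, action arena, post text).
posts = [
    ("official", "tracking", "MagicBands make every visit effortless and fun."),
    ("endorsed", "tracking", "The band is handy, though I wonder what data it keeps."),
    ("visitor", "tracking", "Being tracked everywhere in the park feels excessive."),
]

# Compound polarity ranges from -1 (most negative) to +1 (most positive).
polarity = defaultdict(list)
for group, arena, text in posts:
    polarity[(group, arena)].append(analyzer.polarity_scores(text)["compound"])

for (group, arena), scores in sorted(polarity.items()):
    print(group, arena, round(sum(scores) / len(scores), 3))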

Background

Walt Disney World has adopted many technologies and integrated smart systems in a quasi-public space before comparable applications appeared in other public settings. Before diving into the case study, we articulate the ways in which Disney World is a smart city and the ways in which it compares to other smart cities. Understanding both the conceptualization and the limits of the analogy supports appropriate conclusions and implications from this study.

First, Disney is a quasi-public place, in which a private actor controls a large space open to consumers from the general public. Second, Disney employs numerous digital technologies and multiple networks of sensors to enhance services and experiences, as well as to provide feedback. While nonconventional, the relationships between people, technology, and institutions within Disney spaces constitute a specific type of smart city. Still, there are many contextual differences. In comparison to public spaces and the public–private partnerships that often guide other smart cities, Disney is distinct not only in its extensive private control and decision-making, but also due to normative distinctions, differences in objectives, and the unique history of Disney as a planned space.

Normative differences are rooted in context: Disney exists primarily to provide entertainment, while cities must address the social needs of local populations. Further, objectives differ between actors pursuing commercial interests and those pursuing public safety or services. Intentional and planned spaces are also distinctly different from other public and urban spaces. Disney nonetheless has numerous parallels to other quasi-public spaces and actual intentional communities, such as Irvine, CA, which was thoroughly planned and engineered in pursuit of normative values in ways that have shaped development over time and human interaction with place (e.g., Kling and Lamb 1997), much as at Disney.

Disney is also distinct in serving as a lab for technologies in public places (Martin 2019; Walt Disney 1966). Many key smart city technologies were employed in Disney spaces earlier than in other contexts, yet with many distinct uses and applications. This practice raises caveats about how lessons learned might apply elsewhere.

Tomorrowland

Technological futures have long been present in the “wonderful world of Disney,” as imagineers – the official job title of creative engineers employed by Disney – have fused technical knowledge and objectives with creative imagination (Knight 2014). In addition to the embrace of technology and immersive branding, Disney expertly sanitizes plots and environments to adhere to Disney norms and simultaneously provides “commodity-satisfying entertainment … [to promote] the power by which people self-police themselves” (Hollinshead 1999). Beyond early innovations presented at the 1964/65 World’s Fair (Cotter and Young 2004), which have since been integrated into the parks as attractions, and the pervasive adoption of various technologies to streamline tourism, consumption, and security, Disney has sought to showcase specific technological achievements and possible tomorrows.

Over the years, Disney has innovated and employed cutting-edge technology to realize the “great big beautiful tomorrow, shining at the end of every day.”Footnote 2 Decades before the Experimental Prototype Community of Tomorrow (EPCOT) opened to the public, Walt Disney envisioned EPCOT as the first smart city, though that label did not yet exist (Mosco 2019). “[EPCOT] will take its cues from the new ideas and new technologies that are now emerging from the creative centers of American industry. It will be the community of tomorrow, that will never be completed” (Walt Disney 1966).Footnote 3 Disney hoped that EPCOT would serve as an ever-changing model for what communities could be, incorporating technological solutions to community problems for better resource management, safety and sanitation, and techno-social engineering of a growing population.

While EPCOT never became that prototype community, “Disneyfication” has influenced development worldwide as communities and organizations seek to become more like Disney (Warren 1994), from universities to small towns to competing firms to forthcoming smart cities (Matusitz and Palermo 2014; Wylie 2018). In addition to inspiring a specific type of development, Disney has taken a hands-on approach to building communities in its likeness and under its purview, first in Celebration, FL in the mid-1990s, and more recently in the Golden Oak community, in collaboration with the Four Seasons.

These communities embrace Disney culture and innovation beyond the theme park environment, tightly interconnecting culture and business in a way that parallels Disney’s development and branding around immersive experiences and beloved cartoons. Because it is so successful, Disney exports its managerial philosophies to other places and industries (Matusitz and Palermo 2014; Souther 2007). Specifically, this blend of culture and commerce has been oriented around family-friendly and historically “sanitized” depictions of fantasy and reality, encouraging “traditional gender roles and old-fashioned morality” in their idyllic blend of futurism and nostalgia (Wills 2017, 5).

Mickey after Dark

All iterations of immersive Disney spaces have embraced techno-futurism and data collection, including innumerable information flows in the current context and many historical examples. Disney has notably been the first to deploy particular technologies at scale or in public spaces. For example, RFID luggage tracking was first deployed in the United States by Disney, rather than by the airlines or airports where its later, global adoption is much more widely known. Further, CCTV in Disney stores and parks represents one of the first applications of pervasive technical surveillance and one of the earliest commercial applications of digital multiplexing within CCTV at scale (Coleman and Sim 1998). Human surveillance has always complemented monitoring cameras, with cast members throughout the park in real time, as well as Mickey after Dark teams.Footnote 4 Detailed surveillance data, collected invisibly and intrusively, is analyzed in real time not only to provide security, but also to minimize lines and customer wait times (Goldfarb and Tucker 2012). At Disney, changes to technology, and security technology in particular, happen rapidly.

There are subtle but apparent differences in the way visitors engage with technology in the park over time. There are also exemptions or unique circumstances that lead to different experiences for some guests. For example, guests at Disney resorts and parks are free to refuse to register a fingerprint with their MagicBand through the Ticket Tag system,Footnote 5 with options to use an adult’s fingerprint in place of a child’s;Footnote 6 to provide photo ID to security, in place of a fingerprint; or to allow security to photograph children as identification with their ticket. What happens to fingerprints and photographs after guests’ interactions with security is less straightforward.

Disney assures guests that their system “converts the image into a unique numerical value and immediately discards the image” of a fingerprint, associating only that unique numeric representation with their ticket. While that may sound reassuring to the average consumer, the “unique numerical value” is derived from their fingerprints, not random, and will therefore correspond to representations of their fingerprints in other biometric identification systems or law enforcement databases. Law enforcement professionals assert that Disney policy and practices do not prevent law enforcement access,Footnote 7 via subpoenas or court orders, or intelligence interest in this personal information. The justification for this biometric program, overall, is to prevent ticket fraud through a fingerprint-based biometric authentication system (Jain and Nandakumar 2012).
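The linkability concern can be illustrated with a small, purely conceptual sketch: a value derived deterministically from a biometric template is stable and therefore joinable across any databases that derive values the same way, whereas a salted token is not. The hashing approach below is our simplification; Disney has not disclosed its actual derivation.

import hashlib
import os

def deterministic_value(template: bytes) -> str:
    # The same fingerprint template always yields the same value, so any
    # two systems using the same derivation can join their records.
    return hashlib.sha256(template).hexdigest()

def salted_value(template: bytes) -> str:
    # A per-enrollment random salt breaks linkability across systems.
    return hashlib.sha256(os.urandom(16) + template).hexdigest()

template = b"stand-in for a minutiae-based fingerprint representation"
print(deterministic_value(template) == deterministic_value(template))  # True: linkable
print(salted_value(template) == salted_value(template))                # False: not linkable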

Results
Information Resources

User and behavioral data are the primary information resources. Disney employs a massive network of sensors and cameras in multiple systems to understand, predict, and often influence everything users do and purchase throughout their experience at Disney World. This includes the steps they take, the amount of time they spend browsing shops, the food and souvenirs they purchase, the lines they wait in, and the entertainment and attractions they engage with.

Some of this data collection is visible, yet many of these interactions are designed to seamlessly minimize visibility, as will be discussed in Action Arena 1. Further, the scope of this visible data collection is not obvious in aggregate.

MagicBands

MagicBands and step tracking represent newer streams of data collection about visitors, with many similarities to fitness trackers. MagicBands facilitate location data collection through both active (e.g., point-of-sale) and passive interaction (e.g., sensors, triangulation) (Stone 2017). The MagicBand represented the first deployment of wearable RFID in the tourism industry, outside the established use of RFID identification and security badges in transportation and in sensitive, primarily government and military, research labs (Hasan, Zhou, and Griffin 2011; Roberts 2006; Stone 2017). MagicBand sensors are pervasive throughout the resort and theme park spaces, some unobtrusive, or even invisible, while others are visible as ubiquitous silver balls with a Mickey silhouette encircled by LED lights. This latter type of sensor, along with smart devices employed by cast members, is intentionally and visibly interacted with, as visitors wave their MagicBanded wrists over sensors for a variety of activities and tasks. There are additional points for active use of MagicBands via sensors that unlock individuals’ hotel rooms or check them in for transportation to and from the airport. Yet while the hotel room example represents a narrower flow of information, given the nonpublic context and limited proportion of park visitors who stay onsite, the transportation examples that occur outside Disney Parks include third parties and are extremely broad information flows.
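As a purely conceptual illustration of passive location inference, and not a description of Disney’s implementation, the sketch below estimates a band’s position as a weighted centroid of hypothetical reader locations, weighted by received signal strength; all coordinates and values are invented.

from typing import Dict, Tuple

# Hypothetical long-range reader positions (meters) within one park zone.
READERS: Dict[str, Tuple[float, float]] = {
    "reader_a": (0.0, 0.0),
    "reader_b": (40.0, 0.0),
    "reader_c": (20.0, 30.0),
}

def weighted_centroid(signal: Dict[str, float]) -> Tuple[float, float]:
    # Estimate a band's position by weighting reader locations by
    # normalized received signal strength (0 = not heard, 1 = strongest).
    total = sum(signal.values())
    x = sum(READERS[r][0] * s for r, s in signal.items()) / total
    y = sum(READERS[r][1] * s for r, s in signal.items()) / total
    return (x, y)

# One passive read event for a single MagicBand.
print(weighted_centroid({"reader_a": 0.2, "reader_b": 0.7, "reader_c": 0.4}))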

Apps

Apps provide another major means of data collection about park visitors, as well as the wider population of Disney customers. In addition to individual apps tailored to each Disney Park worldwide, the My Disney Experience app, and a Disney transportation app, there are various consumer-directed apps such as Disney+ and a Shop Disney app. There are also a number of apps directed at children, some of which are educational, while others are purely for entertainment; one, entitled Play Disney Parks, specifically supports interactivity within Disney Parks. Data from all of these apps are integrated in ways that capture not only the popularity of particular films, shows, and products, but also how those preferences correspond with traffic and interaction with various features in and across the parks. The Play Disney Parks app is interesting because it directly integrates location data and behavioral data about children, as a protected population, with data from other sensors and systems within the park. Yet this app does not have a unique privacy policy, despite its target population and its leveraging an extensive number of permissions, including full network access and approximate and precise location tracking.

Cameras

An extensive network of cameras collects data resources within the parks. Specifically, in addition to CCTV surveillance, there are a number of other unobtrusive cameras in quasi-public spaces, including those on rides or within attractions to capture the “action” and visitors at play. There are also visible cameras that document visitors during character interactions or photo opportunities that consumers knowingly choose to interact with; sometimes, this latter category of camera is accompanied by photographers or other humans in the loop, who link photos to individuals via their MagicBands. Additionally, “automatic photographer machines” are distributed throughout the parks. We note that these automated photographers are distinct both from the human photographers with cameras who roam the parks and from surveillance cameras, whose footage and images are not distributed to visitors via any of their photography packages, such as PhotoPass, or accounts.

Community Members

Relevant community members and stakeholders in the Disney World context span very different relationships to and with Disney, as well as interests associated with governance issues. While individuals have unique preferences, understanding stakeholder groups and their consensus on particular values and objectives is important to understanding governance processes and outcomes.

The diversity of actors within the Disney organization – including approximately 70,000 union, salaried, and nonunion hourly employees at the Disney Parks without decision-making roles – spans interns, “imagineers,” cast members, musicians, a business office, and management roles, as well as many Disney subsidiary organizations. A major proportion of those hourly, nonunionized employees – who number approximately 43,000 – includes veterans who fill security roles within the Disney Parks.

Outside of the Disney organization, there are two distinct aggregate groups: (1) visitors and (2) business partners. First, among visitors, different stakeholder groups are represented, including, but not limited to: those associated with conferences and events; families with children; multigenerational families; local visitors versus tourists; adults without children; individuals with disabilities; techno-futurists; and military families.

Second, the businesses and organizations that partner with Disney, to provide services or connect supply chains, have very distinct interests and more significant influence on privacy and surveillance than do individuals. Given the limited transparency about some of these relationships, their outsized influence on and role in information flows is likely to be surprising to many visitors. Third parties, completely distinct from the multifaceted Disney organization, include the Transportation Security Administration (TSA), the Orlando International Airport (MCO), and the City of Orlando, which partner to provide smart transportation solutions and safety throughout transit; various hotel chains in and around Disney parks, specifically including joint properties with Marriott and the Four Seasons; and consumer products and retailers, including Ziploc, Target, and Apple Music.

Disney World serves many stakeholders. To visitors it pitches a unique and unforgettable experience, while to partners it offers access to a lucrative dataset. The recent partnership with Target has potentially fostered information flows of customers’ data between the two companies. This new partnership with the retail sector follows a previous collaboration with retail giant JCPenney,Footnote 8 which was established in 2015. More recently, Disney partnered with Lyft to provide a Minnie Van serviceFootnote 9 as supplementary transportation around the resort.

Goals and Objectives

In order to understand the current state of governance around surveillance and personal data at Disney, it is important to have a sense of how goals and objectives diverge among stakeholders. We analyzed the perceptions of (a) visitors and Disney enthusiasts, (b) bloggers endorsed by Disney, and (c) the Disney organization on key governance action arenas around privacy and surveillance. Figure 7.1 specifically depicts how positively or negatively (as measured through sentiment polarity) each of these groups viewed each action arena.

Figure 7.1. Categorical perceptions of action arenas

While there is divergence among these groups on all action arenas, those individuals whose blogs are endorsed by Disney, not surprisingly, generally represent their views as more similar to the Disney organizational perspective on these issues than to that of the general public. We noted that the greatest consensus among all three groups was in their perceptions of personalization in Disney experiences and planning; specifically, all three groups frame personalization through language that is generally and similarly favorable. The greatest divergence in opinions is evident in discussions of tracking, wherein users and visitors view tracking practices as generally negative, and specifically as excessive and non-transparent, while Disney represents tracking in a slightly positive way. Discussions of safety provide the second most significant gap, though in that arena the public views safety technologies and information flows as slightly positive – emphasizing outcomes and intent, rather than means – while Disney represents safety technologies and information flows in ways more positive than in any other action arena analyzed. More in-depth discussion of each arena is provided in the next section.

Governance

Understanding both current privacy practices and the normative negotiation around each of the identified action arenas and smart systems is a matter of governance. In this section we explore the specific perceptions and arguments underlying the tensions around action arenas, documented in Figure 7.1, in comparison to current rules, as documented in privacy policies and guest or user FAQs. This comparison is helpful to highlight congruence and identify disparities between consumer privacy expectations and the reality of practices.

Action Arena 1: Seamlessness and an Immersive Experience

Disney integrates systems and technologies to provide seamless experiences in ways that are both immersive and invisible to its guests, suggesting that magic enables everything, rather than a combination of marketing and engineering. Everything is aimed toward smoothly getting everyone to the next ride, meal, or event as efficiently and unobtrusively as possible, with all of the technologies and processes that architect choices and experiences obfuscated or made invisible. One touch of a MagicBand, as if by magic, opens doors or pays for souvenirs and food. The Disney app suggests the best rides for you and your family. Your photos are instantly connected to your apps and accounts. While the MagicBand is thus convenient, and arguably makes a Disney experience more enjoyable, questions about potential privacy implications behind the notion of seamlessness are rarely discussed by either Disney stakeholders or visitors.

Disney often describes innovations, particularly those that bring fiction and fantasy to life, in terms of “magic” and “imagination,” which have highly positive connotations. “Invisible technology provides magical experiences for consumers.” Increasingly, these efforts incorporate invisible and seamless technological systems to enhance visitors’ experiences and allow them to immerse themselves, as Disney creates a “realm of magic and enchantment where they are not only dazzled and entertained, but may also have a chance to be part of the show.” Disney’s values around innovation and illusions are evident as they frame and promote particular systems and in direct value statements made by employees, such as “we all work toward the same goal each day – to uphold the legacy set forth of inspiring those around us and creating new experiences.”

This idealization of technological innovation, coupled with invisibility of technology, so as to create “magic” is echoed on blogs that are elevated and endorsed by Disney, including the Disney Mom’s blog, which often emphasizes how “amazing” technology is, alongside the preference that it “seems real” for their children.

In contrast, broader populations of users and visitors describe seamlessness and immersiveness slightly differently. While there is general agreement between stakeholders about how positive it is that systems are integrated – most seem to agree about issues of “convenience,” ease, and minimization of confusion – and the experiential aspects of immersiveness, the disagreement tends to fall around whether the technology is visible or “behind the scenes.” Even those who are enthusiastic supporters of Disney, on the whole, express their desire to make things more transparent, rather than less visible; many blogs seek to uncover how something works.

The dataset did not include any posts expressing concerns about privacy or data governance associated with applications of facial and voice recognition technology in attractions for children, such as “Turtle Talk with Crush,” though there were blog posts analyzing and articulating the technological design. In contrast, some blogs about immersiveness, seamlessness, and recognition technologies outside of attractions express hesitation and worries about whether people understand “concern over privacy,” particularly with MagicBands outside of rides. In this sense, there is also a suggested distinction between rides and more general experiences at Disney World within the minds of the general public and visitors. This also raises additional dimensions to social dilemmas around seamlessness and immersiveness in the form of questions about whether alignment between information controllers and consumers is sufficient to accept the privacy governance status quo as “good.” Seamlessness of data collection also seems to render some of the downstream and long term implications invisible to consumers, though not necessarily inapplicable.

Action Arena 2: Public Safety and Security

A second major action arena, around which both governance and technical interventions have diverged from the expectations of visitors and the general public, is that of public safety and security. Innovation at Disney has historically been coupled with a close relationship to military and law enforcement interests (Knight 2014; Shearing and Stenning 1985, 1987), which have also driven technological innovation in a number of other contexts, such as the relationship between academic computing research and DARPA (e.g., Roland, Shiman, and Aspray 2002). Yet Disney also has a history of supporting privacy, both in alignment with family values and the sanctity of “home” and more directly through investment in privacy research (e.g., supporting past iterations of the annual Privacy Law Scholars Conference). More recently, this dynamic has changed, as Disney shifted to narratives around safety and security preferences, in 2016Footnote 10 and 2019,Footnote 11 and emphasized a false trade-off between privacy and innovation or transparency.

Despite the peace of mind that many may feel due to Disney’s prioritization of security, relative to both their children and their accounts, Disney has long identified a false tradeoff between privacy and security, taking extreme “invasions of privacy” in pursuit of “absolute security” (Smoodin 1994, 2013). Disney World, as an immersive and contained environment, holds more surveillance per square inch than the average American prison and is somewhat singular in that the surveillance is designed not to protect or to promote production, but rather to promote consumption (Project on Disney 1995).

Yet a cultural emphasis on the safety of children and families is supported by this surveillance environment, in ways that are consistent in practice. For example, fingerprintsFootnote 12 and facial images, along with scans of MagicBands, can be used to locate and reunite small children who may wander away from families or otherwise get separated in crowds. The ability to leverage technology to quickly overcome separation minimizes anxiety and uncertainty, as multiple technologies can confirm and triangulate identities and relationships. A five-year-old girl, separated from her parents upon exiting a store on the right, rather than the left, was reconnected with her family by cast members within minutes, using her MagicBand and photo. A cast member noticed her, alone and looking upset, asked to scan her wrist, and promised to help; the iPad the cast member carried connected to a map that identified her parents’ location and the nearest cast member to them. The cast member then took her photo to positively confirm her identity and allow their counterpart to share it with the parents.

Official Disney sources promote a dialogue prioritizing security over privacy. Disney offers compelling and well-packaged arguments about safety and security, while assuring visitors and readers, in vague terms, that data will be “secure,” as opposed to actually discussing privacy or the tradeoff being made. The language that they use around safety and security is highly assertive and positive in tone, communicating why visitors should trust them and need not worry, and providing clear examples in which they succeed, such as the anecdote from the previous paragraph.

Endorsed perspectives are similarly favorable; they communicate their trust in Disney and their positive experiences, without interrogating any tradeoffs between privacy and security. In fact, these perspectives from parents and Disney youth and Disney marathon runners often continue to focus on their positive experiences, even when faced with questions about privacy or skepticism about public safety. For example, a question posed to the Disney Parks Moms Panel by a general visitor focused on biometric privacy: “Are fingerprints stored to your name or just to your MagicBand?” The response was framed in terms of safety and security; the only mention of privacy was in the link to the Walt Disney company’s privacy policy.

While the general public frames these issues as slightly positive, in comparison to the extremely positive narratives directly from or endorsed by Disney, the action arena is more fragmented. There is a definite sense that the overall objectives around safety and security are meaningful and important, given that there are so many children in these spaces, as well as that outcomes are good. People feel very safe and many blogs discuss the same incidents in which security and surveillance identified individuals with handguns and prevented them from entering the parks.

Yet the general public’s norms about safety extend beyond security to include privacy, and Disney either fails to acknowledge this or willfully ignores it. Further, there are differences between noticeable security, perceived as trustworthy, and inconspicuous or invisible surveillance, which many feel is an infringement of privacy. For example: “I have noticed increased security at my last few visits. Not enough to make me feel uncomfortable, but definitely more obvious. I’m sure there’s stuff going on that I can’t see as well.” This blogger went on to explain that feeling safe is important, but a sense of being watched without knowing who is watching is uncomfortable, and asked readers to comment on their own opinions about security and surveillance at Disney.

There are underlying, and sometimes stated, questions about whether there are more privacy-protecting ways to manage these data flows. In addition to the blog posts that discuss or hint at these tensions – and much more often express uncertainty (e.g., “though I don’t know if…” or “I’m not really sure how…” data is stored or protected) – many news articles convey these questions, both from the perspective of journalists and in quotations from travelers. While journalists have raised questions about things like data retention, bloggers suggest similar concerns in discussing long-term tracking around MagicBands, including that they “don’t expect to continue being watched … after [they] go home.” Further, they express greater trust in Disney than in the privacy or security of technology in any context, including Disney’s own technology. Similarly, consumers question the increasing presence of security personnel in these spaces, with third-party “backup” from “more uniformed and plain-clothes police officers, security guards, and dogs patrolling the parks” at particularly busy times of year. In this sense, while Disney appears to be trusted, consumers don’t necessarily trust Disney’s third-party partners or understand the nature of those relationships around privacy and security.

Action Arena 3: Personalization

Disney works very hard to process user and visitor data in order to offer personalized experiences, both within the parks and in planning trips (Stone Reference Stone2017). In addition to technologies supporting personalization, such as recommendation and planning systems, there are also social and sociotechnical systems that extend personalization to face-to-face interactions within Disney Parks. Characters and staff greet people, particularly children, by name and interactive games and rides are tailored personally based on children’s interests and experiences with other attractions or characters in the parks. While all of this is supported by technologies that are invisible to visitors, it extends the sense of magic.

Discussions of personalization around rides and characters are overwhelmingly positive, no matter the stakeholder. One underlying system to facilitate personalization is called MyMagic+, which Disney explains “is using technology to make it easier than ever before for guests to make the most of their Walt Disney World visit.” It connects many technologies and integrates all of the datasets in a way that both supports users’ choices and makes recommendations, during both the planning and visiting stages. Not only are official Disney posts about this very positive, but so too are unendorsed blogs, on which descriptions range from “empower[ing]” to “favorably impressed.” The only complaints made about this system, within the dataset collected, were about system outages, in the first year it was rolled out (2013).

Disney has become more ambitious in the space of personalization, developing an AI virtual assistant, the Disney Genie, which was finally released to the general public in 2021 after two years of extensive testing with limited users. Those bloggers – both endorsed and not – who have had a chance to test the system or to witness demos are nearly as positive as official Disney sources, though more tentative, using language that is hopeful. Other differences in sentiment, if not overall favorability, are actually present on endorsed blogs that are generally positive about personalization at Disney, but with very different emotional charges. For example, various Disney Mom posts that address experiences traveling to Disney with disabled family members emphasize how important personalization, and associated accommodations, are to them, given how excluded they feel in other spaces. They emphasize a willingness to share details about health and disability needs in order to have happy family travel experiences that would otherwise be impossible.

Overall, the general public is much more enthusiastic about personalization efforts by Disney than about any other action arena, which leads to the greatest normative agreement in this space, in addition to consumer satisfaction. Many blogs, both endorsed and representing an array of average visitors, convey their customer loyalty to Disney in experiential terms: it is a one-of-a-kind experience for them or for their children. Many share tips on planning or suggestions, all while framing them with statements such as “how to make sure your dreams really DO come true” and the use of the tag #DisneyMagicMoments. A shared sense of exceptionalism and the belief that the experiences are unique to you align values and preferences around personalization between Disney and visitors. While there are financial costs associated with personalization, and Disney more broadly, most blogs equate the value of the experience with the cost.

Few acknowledge the nonmonetary value associated with all of this personalization – in contrast to the extensive number of blogs about hidden financial costs, including around some personalized services – though one blogger astutely provides a counterpoint to the prevailing positive sentiments around personalization: in terms of “our perception of value for money and worthwhileness” around “enchanting extras” – upcharges for luxury and/or personalized add-ons to attractions or events – you also have to be “willing to splurge” your data and your money.

Action Arena 4: Guest Tracking

Guest tracking is an important priority for Disney Parks, given that it intersects with a number of different objectives and other action arenas – including personalization, safety and security, and optimization of lines and experiences – and promotes economic interests by introducing a number of granular metrics by which to assess and monitor interest in various attractions, experiences, and products. Despite the many interdependencies, it is notably an objective in its own right and presents unique governance dilemmas.

This action arena is by far the most contentious. Not only does Disney shy away from promoting many of the technologies employed to achieve objectives associated with its tracking agenda, but it frames many of these systems using somewhat defensive language in its attempts to convince visitors of the advantages and fend off criticism from media sources. For example, Disney has never explicitly blogged about step tracking – which is done through MagicBandsFootnote 13 – or embedded sensors within floors or walkways, yet various sources, including publications by Disney “imagineers,” assert that Disney engages in these practices and has tested a number of different approaches (e.g., Andrews et al. 2016; Slyper, Poupyrev, and Hodgins 2010). Instead, Disney glosses over them, introducing the features or attractions that these forms of tracking support while mentioning the underlying technologies only in the vaguest terms, yet with highly positive language and framing. Further, as many visitors and bloggers can attest, location-based tracking is pervasive, supporting in-app recommendation systems and estimates of lines and crowds, as well as the monitoring of parking lots.Footnote 14

While most blogs in the dataset communicate in a positive tone about almost everything Disney, there is visceral negativity in explicit framings of tracking on most visitor blogs, along with negativity about cost, biometric scanning, and facial recognition. Notably, many of the tracking technologies and systems are framed more positively when the word tracking is not included, which perhaps explains why Disney attempts not to use it. There are also very lukewarm framings on endorsed blogs, which try to give Disney the benefit of the doubt while personally seeming skeptical that so much tracking is necessary to provide the features they enjoy. Disney has trouble selling tracking positively, yet when it frames the technologies that support the most extensive tracking as anything else, the criticism seems to vanish from other blogs, even as it persists in the press. In this sense, tracking is unpopular with Disney customers, may also be poorly understood by those customers, and may be even more unpopular with the general public.

MagicBands and apps represent the two most significant mechanisms for tracking individuals around the park, as opposed to the use of biometrics, which facilitates tracking day-to-day and authenticates tickets. MagicBands also address other objectives – such as crowd control, children’s safety, seamlessness, and invisibility – yet tracking locations is the least popular. “Through various forms of tracking and data gathering, Disney is able to amass lots of information about their guests.” While some perceive this to enable “cool” things, like locating guests to bring them food in crowded restaurants or helping users to “get directions” in real time, others are more critical: “By tracking your location, Disney gains an added benefit. They receive big data that’s otherwise incalculable” and inaccessible to them. These more critical bloggers see the purpose as more in Disney’s financial interest than in the interest of visitors.

Other criticisms of the tracking arrangements around MagicBands parallel those concerns voiced around seamlessness, centering on whether or not users consent to the system, which is implicit in purchase. However, the photos, as with PhotoPass and Memory Maker, are taken and linked to individuals, regardless of purchase, with those not purchasing being offered additional opportunities for purchase after the fact.

Others are more concerned about the privacy implications of “missing photos.” Three separate blogs discussed the “cons” of PhotoPass and Memory Maker in terms of how “inappropriate” and “unsettling” it is to find someone else’s photos among your own, as well as to express their discomfort with the idea of someone else having their personal family photos. However, this phenomenon is more widely discussed in terms of frustration when photos of a favorite experience are never linked to the account, which bizarrely implies there are people who may value more extensive tracking. There are formal protocols to pursue in order to identify and claim photos mistakenly attributed to someone else or not attributed to anyone at all:

[S]ubmit a missing photo claim form within 30 days of the date of the photo. The PhotoPass cast members were able to locate a missing set of photos from my visit, but there is no guarantee that missing photos will be found. If there is something that you really want to make sure you have a picture of, be certain to have the PhotoPass photographer use your camera, too.

In this case, there are existing governance mechanisms to address misidentification, but not to compensate for privacy harms associated with this aspect of tracking.

Smart transportation provides another system with very extensive tracking and, despite that tracking, real problems, which lead to negative perceptions among those blogs that associate the tracking with those problems. For example, tracking and predictive algorithms distribute buses based on current use and historical data, yet these systems seemingly leave humans out of the loop in making decisions that might better support visitors, such as allocating more buses to hotels following concerts and special events that the system would not otherwise account for. Further, the push to automate transportation systems is long-standing, with the monorail, which has seen recent problems, as an example; in part due to these challenges, Disney turned to more extensive tracking in this sector. Nevertheless, a majority of blogs are highly positive about the transportation systems, particularly around their responsive and dynamic nature and the opportunity to seamlessly move from hotels to parks without difficulty or additional payments.
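A toy sketch can illustrate the blind spot bloggers describe: if buses are allocated proportionally to historical demand, a one-off spike such as a post-concert crowd is missed unless a human, or fresher data, updates the inputs. The figures and allocation rule below are invented for illustration and are not Disney’s dispatch logic.

# Invented historical demand (riders per hour) for three resort stops.
HISTORICAL_DEMAND = {"hotel_a": 120, "hotel_b": 80, "hotel_c": 60}

def allocate_buses(demand: dict, fleet: int) -> dict:
    # Proportional allocation with at least one bus per stop.
    total = sum(demand.values())
    return {stop: max(1, round(fleet * riders / total))
            for stop, riders in demand.items()}

# Allocation based on history alone...
print(allocate_buses(HISTORICAL_DEMAND, fleet=6))
# ...misses the night a concert lets out at hotel_b unless the inputs update.
actual_demand = dict(HISTORICAL_DEMAND, hotel_b=400)
print(allocate_buses(actual_demand, fleet=6))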

Action Arena 5: Optimization

Disney’s “operating system” manages resources and people; the objective is to guarantee an efficient, enjoyable, and smooth experience to all visitors, which helps maintain their high rate of return: “70% return rate of first time Disney visitors.” By “optimizing the mundane,” Disney makes “magic” and visitor satisfaction increases dramatically. As emerging technologies have been introduced to streamline guest experiences and minimize waiting times, as well as digitize and automate planning, Disney has framed these actions in terms of optimization, as a value in and of itself. They invest heavily in technological and human services to collect feedback from visitors about their priorities and their experiences, in order to better tailor future experiences and redirect customer service for the optimal Disney vacation. “Optimize” and “optimization” are often invoked to explain specific aspirations and innovations, associated with guest experiences, business practices, and park operations.

For example, posts across all three stakeholder groups – whether they are parents or adults without children – express satisfaction with efforts to entertain children who are waiting in line, through augmented reality games supported via app, play structures along the way, and physical games and characters throughout lines. However, the discussion of wait times, as framed among the different populations, illustrates the differences in current perceptions of existing systems, despite clear alignment of preferences and satisfaction with the new efforts made, which have not yet resolved the issue. Many independent sources, including popular consumer blogs, crowd-sourced datasets to monitor average and current wait times for specific attractions before Disney introduced this feature, and continue to do so even though the Disney Parks app does now include its own estimates. Disney estimates are often overly optimistic or update more slowly than the crowd-sourced sets, replicating the same perceived problem, predating apps, with the physical clocks showing approximate wait times at the start of lines: “I don’t know if they’re just slow to update or if they want you to get in line, even if it will take longer.” To paraphrase a question asked on multiple blogs: if they have so much user data, why can’t they estimate more accurately? Some of these posts expressed the belief that optimization priorities around crowd control were adversely impacting visitors’ experiences by raising unreasonable expectations about how many attractions they would be able to fit into a day. In contrast, another blogger simply felt that optimization would improve visitors’ experiences in the long run, but that in the short term, Disney hadn’t yet figured out how to achieve its aims or process all of the data generated.
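The gap visitors describe can be illustrated with a simple queueing estimate: an estimate computed from the queue length at the last update necessarily misses guests who arrive immediately afterward, so posted waits lag reality. This sketch is our illustration of that point, not Disney’s estimation method.

def estimated_wait_minutes(queue_length: int, riders_per_dispatch: int,
                           dispatches_per_hour: int) -> float:
    # Queue length divided by throughput: valid only for guests already
    # in line at the moment the estimate is computed.
    throughput_per_minute = riders_per_dispatch * dispatches_per_hour / 60
    return queue_length / throughput_per_minute

# Estimate posted at the last update:
print(estimated_wait_minutes(300, riders_per_dispatch=20, dispatches_per_hour=30))  # 30.0
# 200 additional guests arriving just after the update push the real wait to:
print(estimated_wait_minutes(500, riders_per_dispatch=20, dispatches_per_hour=30))  # 50.0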

Yet there is also evidence that many visitors obfuscate and create workarounds to circumvent both the perceived faults within technical systems for FastPass and other data governance systems. Various blog posts soon after MagicBands were introduced, and before paper FastPasses were completely phased out, offered advice on how to game the system: book both sets of FastPasses before Disney integrated them. Other posts about gaming the system are more privacy-aware; they provide advice on how to minimize observation, utilizing loopholes that emerge as Disney seeks to rapidly innovate and optimize particular systems, experiences, or features. As Disney pushes for optimization through technology, it almost always provides low-tech mechanisms to support opting out, in order to meet all possible accessibility concerns. Taking accessible alternatives to biometric scanning or metal detectors, or to the use of MagicBands in restaurants, almost always involves screening by a human and minimizes documentation via data. While Disney does not make data available about how many people opt out, it has engineered opt-out procedures in such a way as to discourage them. Few consumers discuss or seem aware of them, many employees also lack information about them, and observation of entrances to WDW parks indicates that few navigate these low-tech, alternate processes.

Various customers and visitors opt out for other reasons, as Disney seeks to optimize (and automate) recommendation and planning systems. While overall there is satisfaction with customer services, documented via both our sentiment analysis and the extensive customer survey apparatus deployed to WDW guests, satisfaction is lower for these automated systems than for human-supported recommendation and planning. There is confusion about how to actually use many of the planning features without the support of humans in customer service. It is also notable that satisfaction with automated recommendation and planning systems has not changed significantly over time, likely indicating that frustrations and confusion are not only associated with the newly introduced systems.

Recommendation systems embedded in planning, which have been utilized by Disney for decades – albeit earlier iterations were less automated – do not receive as much attention or garner as much enthusiasm from visitor blogs, nor do the various options to pick and choose so as to personalize travel plans. This hints at some tension with efforts to support personalization. Similarly, many bloggers express bemusement about or negative perceptions of suggested purchases made through Disney Parks apps, and some bloggers have speculated that the Disney Parks shopping app is no longer supported because visitors found the recommendations too aggressive. In contrast, recommendations about rides or attractions that are similar to rides that visitors have experienced are viewed much more positively.

As with other action arenas, there are disagreements between blogs representing the general population of visitors and endorsed blogs, some of which may correspond with the conflicts of interest represented by those endorsements. Endorsed blogs are much less likely to discuss financial concerns and this was most obvious in posts addressing optimization and automation. In contrast, endorsed blogs did diverge from official Disney posts overall, using language that was more hopeful and tentative, rather than confident.

Patterns and Outcomes

There are obvious lessons to be learned by smart cities from the case of privacy and surveillance at Walt Disney World. For example, social perceptions of appropriateness around wearable devices – people do not like to be tracked outside the purpose or boundaries of the device – and around children’s interactions with voice recognition and smart systems carry over to other contexts: interactive character features, which respond to children, have many parallels to step tracking, to triangulation between consumer and public surveillance, and to personal assistants like Alexa and Siri.

The majority of this discussion section focuses, however, on practical implications for smart cities and conceptual implications regarding cross-context data integration and aggregation. WDW is an interesting case in which to examine contextual integrity and the formation and change of norms over time because what is deemed to be appropriate is heavily shaped by the company. While it responds to the norms and expectations of its customers, there are also spillover effects beyond the domain of Disney, as the company’s practices serve as “industry-leading” and model information flows and governance for other public and quasi-public spaces. Much of this discussion will focus on the significant limitations around implications for smart cities, given the contextual differences between Disney World and smart cities: distinctions in motivation and values, between consumers and citizens as stakeholders, and in the weight of consequences.

Disney World is carefully orchestrated to convey a sense of magic: a special place, where reality may be eluded. Disney World is a real-world, though privately owned, smart city, in which those who choose to be patrons, rather than a heterogeneous public, are the relevant community members and information subjects. As guests in Disney spaces, we leave our norms about the public sphere at the gate. This case, and previous research on Disney, documents the distinctions between norms at WDW and norms in other contexts. From the moment we scan our fingerprints on the Ticket Tag, we agree to be tracked, monitored, and analyzed at every step we take, through apps, biometrics, facial recognition, MagicBands (as wearable devices), recommendation systems, smart assistants for planning, smart locks, transportation, and voice recognition. All to fulfill the illusion of magic. As our analysis shows, through a clever fusion of fantasy, false trade-offs, and misleading dichotomies, Disney has avoided broad scrutiny over many of its practices, yet has worked to coordinate governance of some information flows with the social expectations of its visitors. The use of smart locks, intelligent transportation, and biometric tracking are relevant examples.

Smart locks provide one example in which other contexts may learn from Disney’s early adoption. While Disney hotels are not unique in using a variety of technologies to unlock hotel doors, including both RFID and app-based entry, more hotels use magnetic cards to open doors.Footnote 15 However, apartment buildings are increasingly moving to smart locks, which raises concerns among many residents about both privacy and inequality,Footnote 16 given lower smartphone adoption rates among elderly and low-income populations. Interestingly, the reservations and criticisms that most Disney visitors express pertain to frustrations when their MagicBands or apps won’t work, which align with some news articles that discuss challenges associated with deploying the technology before it is perfected. Yet the lack of discussion of privacy around this system may illustrate that Disney is more appropriately a model for other hotels, not permanent housing.

Tracking visitors in Disney Parks also has many parallels to the tracking of consumers, either in automated storesFootnote 17 or busy retail spaces, and of pedestrians, as with smart sidewalk projects, including the recently abandoned Sidewalk Labs project in Toronto.Footnote 18 While the specific systems may not translate well, the opportunities for users to provide feedback and actually receive a response are a valuable model for cities to consider. The pairing of the Shop Disney Parks app with location tracking was considered by many visitors to be creepy, and Disney acknowledged that obvious normative gap by pulling the app from the market. Responsiveness to the public is important, which is why visitors’ relative comfort with MagicBand tracking, due to the convenience and safety, as well as their “trust” in Disney, is unlikely to correspond with expectations and preferences in cities.

A major distinction between the Disney and smart city contexts, with respect to how these systems function and are perceived, relates to perceptions of and trust in decision-makers by other stakeholders. Many people trust Disney implicitly, yet Disney often takes advantage of our confirmation bias. It promotes itself as a place of trust, kindness, and comfort, away from reality. When people come to Disney World they see things differently, without fear or (too many) critical questions. Facial recognition keeps your children safe. Voice recognition enables "remarkable" interactions between visitors and their favorite animated characters. Location tracking makes sure we don't stand in line. Yet some visitors do oppose particular systems or changes, and Disney tries to be responsive; it is important to note, however, that visitors may not be sufficiently aware of how the technologies work to question everything they might normatively oppose, particularly in other contexts, namely those outside the borders of Disney's empire.

Seamlessness in design dissipates for renegades. You can refuse to provide fingerprints, provided you supply other proof of identity, or use a card for a ticket rather than a MagicBand. These alternatives, however, diminish the overall experience; as so often happens in techno-socially engineered systems, analog alternatives are presented as the less attractive option. For example, many employees are unaware of opt-out procedures, and many services are unavailable to the few who do opt out.

In this sense, Disney follows the worrisome trend of presenting false trade-offs between information collection and quality of service. Given the relationship Disney maintains with its guests, opt-in would likely work just as well. This should serve as a warning to smart city advocates, who face a much more challenging task with respect to "opt-in" options. Further, nudges to encourage opting in, when made by cheerful Disney characters, are likely to be perceived as much less sinister than those from police officers. Based on our analysis of blog posts, people seem to be more comfortable with nudges from Disney than with other commercial nudges. In addition, the information flows to Disney and to law enforcement within smart cities are more similar in effect than individuals realize, given the relationships between law enforcement and Disney. The implications of these information flows are thus similarly problematic, particularly in an age when mistrust of law enforcement is increasingly pervasive.

While Disney’s information flows, overall, often align with consumer expectations – whether organically or due to extensive marketing – many of their practices would not be appropriate for smart cities. The differences noted in social perceptions and acceptance reflect the fact that while Disney functions as a smart city, it is a distinct and private context, in contrast with public sector and public sphere contexts, reflecting very distinct norms. Even as cities partner with private sector firms, they should not assume this makes them more similar to Disney. Instead, they should question if those partnerships are appropriate and consider what types of governance are necessary to engender trust in decision-makers, other actors with access to data, and practices. A set of key overriding differences between these contexts is in the set of values being optimized, how they are selected, and who must accept them. Cities are not being optimized for profit, convenience, or a sense of wonder, while Disney is an escape. Cities must resist the temptation to buy vendors’ hype that it is possible or desirable to engineer the level of convenience or happiness present at WDW. To pursue this agenda would mean significant harms to the public with respect to privacy, transparency, fairness, and inclusion. Disney only works because it is an exception and bounded in time and space for its visitors.

However, a significant lesson to learn from this case, for smart cities and other public contexts, is the need for dialogue among stakeholders even if they are not all involved in decision-making. There are real similarities between Disney and its customers negotiating information flows, or other action arenas, and examples of privacy localism, just as there are parallels between Disney-specific norms and urban privacy norms. Privacy and surveillance practices and outcomes within Disney might not be normatively good from everyone's perspective, just as local privacy governance in Seattle and Oakland differs from surveillance efforts in Atlanta and New Orleans, which are perceived to positively promote safety. In this sense, while many cities understand how context-dependent privacy is, smart cities can learn from this case procedurally. Two keys to Disney's approach to information governance are the use of detailed social surveys and follow-up protocols with its guests to understand which expectations are not being met, and a commitment to iterative reevaluation.

Conclusions

Significant datasets collected by these, and other, systems feed Disney as it constructs a demi-reality. Walt Disney World provides an immersive environment for parents and children to experience the magical world of animation and imagination. There is no mistake in the name: it is indeed a "world" where our beloved characters come to life. To facilitate this real-life illusion, Disney needs to be, and always has been, in step with technological advances. Overall, Disney World attempts to make all visitors feel special and experience "magic." Disney wants to protect us. Entertain us. Show us a good time. Visitors briefly relinquish the normative expectations of privacy that they have at home and in their cities. What may seem like an infringement of privacy in the middle of a busy public street, as a camera captures everyone and every move they make, may seem necessary or even desirable to guests at Disney World, in order to evoke an idyll and enable an absolute sense of safety. Disney World provides a great illustration of how context matters, but it can also be misleading to transfer the same norms to a real-life situation. When we step into this constructed world, we forget reality, but our concerns remain. Disney has to deal with the same concerns and trade-offs as the real world. However, it often constructs a new normative reality that we would usually reject but, in the case of Disney, accept as a necessary trade-off.

Behind the veil of false trade-offs, marketing slogans, and grand promises of a better world, there is an industry being built on aggregating our information. In the case of Disney, and Disney resorts in particular, tourists and consumers are the products: we supply Disney with our viewing habits (through Disney+), our favorite characters (through purchases in Disney stores), our preferred rides, and our daily routines when wearing MagicBands. All of this information is shared under pretenses associated with personalization, safety, immersiveness, and "magic."

In drawing conclusions from this study, it is fair to criticize Disney's lack of transparency and false trade-offs (e.g., Bowers 2019; Mosco 2019), but we should acknowledge that the company actively works to meet expectations in some action arenas. Where Disney and its customers are not diametrically opposed, they often meet in the middle or revise continually until they reach a mutually satisfactory outcome. Disney wants to keep its customers, who may not reflect the wider population. There are, however, significant tensions between these processes and actual outcomes. Despite the company's status as an early adopter, it is difficult to use Disney as a model for smart cities, given the contextually specific preferences of visitors, in comparison to the general public, beyond replicating its responsiveness.

In our day-to-day reality we now see rapid adoption of technologies similar to those used by Disney, such as smart locks in apartments controlled by landlords, datafication of transportation systems,Footnote 19 and biometric authorization.Footnote 20 The public–private partnerships behind these efforts are often explicitly inspired by Disney (e.g., Mosco 2019; Souther 2007). In a now familiar sleight of hand, they promote seamless, frictionless interactions, tailored service, and efficiency in their products, bringing the "magic" of Disney to our day-to-day lives – creating a smarter city. This is not always bad; technological progress is part of life. However, we argue that while some of these technologies can be beneficial, neither the consumer, nor the Disney resort visitor, nor the resident of a "smart" city should be manipulated into abandoning their values, norms, and expectations when they immerse themselves in the new world. On the contrary, new technologies should build on users' expectations to provide a safe and trustworthy environment, one that nourishes healthy sociotechnical systems that respect societal values and governing information norms.

8 Can a Smart City Exist as Commons? The Case of Automated Governance in Sidewalk Toronto

Anna Artyushina
Introduction

In October 2017, Alphabet and the Government of Canada announced a joint effort: the first smart city powered by Alphabet’s technology. The smart city was proposed to be built in Toronto, Canada, where Alphabet’s subsidiary Sidewalk Labs had partnered with public corporation Waterfront Toronto. The press release envisioned Sidewalk Toronto/QuaysideFootnote 1 as an exemplary community employing digital technology to tackle the issues of urban growth:

Sidewalk Labs and Waterfront Toronto announced today “Sidewalk Toronto,” their joint effort to design a new kind of mixed-use, complete community on Toronto’s Eastern Waterfront. Sidewalk Toronto will combine forward-thinking urban design and new digital technology to create people-centered neighborhoods that achieve precedent-setting levels of sustainability, affordability, mobility, and economic opportunity.

(Waterfront Toronto 2017)

In May 2020, Sidewalk Labs withdrew from the deal, citing financial uncertainty brought on by the Covid-19 pandemic. The project was canceled after two and a half years of heated public controversy over the company's plans for data collected in the project and the secret financial commitments of the parties (Goodman and Powles 2019; Artyushina 2020a; Valverde and Flynn 2020).

In Canada, the failure of the ambitious public–private partnership prompted a series of legal and administrative reforms. Major amendments to Canada's data protection legislation have been suggested to account for the privacy challenges associated with the use of personal data in data analytics and with data collection in public and semi-private spaces (Government of Canada 2020). In July 2020, the Government of Ontario launched its first data trust, which provided researchers with access to medical data related to Covid-19.Footnote 2 Unlike the data trusts proposed by Sidewalk Labs, which would have made possible the commercial reuse of personal data (Artyushina 2020b; Scassa 2020), the Ontario Health Data Platform (OHDP) secured the provincial government's rights to the digital information and intellectual property developed in the project; the trust granted access to the data for research purposes only. After a series of public consultations on the procurement of smart cities, Toronto launched the Digital Infrastructure Plan (City of Toronto 2021). The DIP requires all city departments commissioning products from smart city vendors to prioritize open-source products and technologies that can be maintained by city staff.

While Sidewalk Toronto never became an exemplary smart city, its global political relevance is undeniable: some European municipalities consider it a cautionary tale, while others seek to implement the partnership's innovations, namely data trusts and automated planning (Artyushina 2020b; Wolpow 2021; AI4Cities 2021).

It is useful to consider the smart city proposal as an innovation testbed in the areas of urban technology and administration. The sole-source developer of Sidewalk Toronto envisioned the automation of many city services and the governance of smart infrastructure as commons, with city assets run collectively through several trusts. While the proposal did not provide detailed layouts for the organization and representation mechanisms in these trusts, Sidewalk Labs did identify some possible models of collective action in smart cities.

An emerging city resource, smart infrastructure constitutes a serious governance challenge to policymakers and private vendors (Alizadeh, Helderop, and Grubesic 2020; Barns et al. 2017). Data is a nonrivalrous resource, but smart systems also require algorithms, pipes, and cables, as well as access to public spaces and facilities. Contrary to the popular economic argument that private governance is the most efficient way to run city infrastructure, Brett Frischmann (2012) makes the case for the governance of infrastructure as commons. When public or private systems are employed by multiple communities, they produce massive social outcomes. The Internet and telecommunications are the perfect examples.

In this chapter, I draw on the Governing Knowledge Commons (GKC) framework to examine the partnership's vision for automatedFootnote 3 and commons governance in the smart city. Balancing public, private, and collective interests in smart cities is a challenging task, which is why Sidewalk Toronto proposed some innovative instruments of governance and management for its city infrastructure. However, as I show in this chapter, the failure of the partnership was inevitable because the leaders of the project prioritized maximizing Alphabet/Sidewalk Labs' profits over other objectives. Moreover, I argue, data-driven planning would likely stifle every possibility of collective action, eliminating both the public space and the public in the smart city.

Methodology

This chapter derives from my ongoing dissertation research, which began in 2017 when the partnership between Waterfront Toronto and Sidewalk Labs was announced. Many recent studies point to the post-political nature of the smart city (Gabrys 2014; Cardullo and Kitchin 2019; Carr and Hesse 2020), and some scholars (Kitchin 2021; Cardullo, Di Feliciantonio, and Kitchin 2021) urge an opening up of decision-making in and around smart cities to democratic deliberation. Yet it is not an easy task to "politicize" smart cities. Complex technological systems, with their data streams, controls, and beneficiaries, are often hidden from view and protected from any intervention. This is where the Governing Knowledge Commons framework proves immensely useful, as it sheds light on the informal organizational and institutional aspects of smart city governance.

The GKC framework adapts Elinor Ostrom's Institutional Analysis and Development (IAD) framework for natural resource commons (Ostrom 1990) to study commons-based knowledge production (Frischmann, Madison, and Strandburg 2014), biomedical research (Strandburg, Frischmann, and Madison 2017), data governance (Madison 2020), privacy (Sanfilippo, Frischmann, and Strandburg 2021), misinformation (Chapter 1 in this volume), and now smart cities. The attributes of shared resources, governance strategies, the values of actors and communities, action arenas, rules-in-use, and the legal institutions that affect or uphold the commons – these are the key considerations of the GKC framework. It is the new forms of governance in the smart city that I explore in this chapter.

Information about the data collection:

  • In March–November 2019, the partnership conducted a series of public engagement events, where different iterations of the proposal were discussed. I conducted participant observation at four events. The notes taken at the meetings were manually coded using an inductive coding methodology (Saldana 2015). The recordings of the meetings released by the partnership are available on YouTube under the titles "Sidewalk Toronto: First Public Roundtable," "Second Public Roundtable," "Third Roundtable," and "Quayside Public Consultation."

  • The second set of participant observation data comes from the meetings of the Digital Strategy Advisory Panel (DSAP), an advisory panel appointed by Waterfront Toronto to help evaluate the project. I attended three meetings that took place during 2019 and were open to the public. Similarly, the notes taken at the meetings were manually coded. I was also provided access to the draft and public reports of the panel.

  • Beginning in November 2017, I conducted qualitative analysis of the documentation released by Sidewalk Labs, Waterfront Toronto, the Government of Ontario, and the City of Toronto, as well as the media coverage of the project in and outside of Canada. Specifically, the analysis covers such items as the "Project Vision," the "Master Innovation and Development Plan" Vols. 2, 3, and 5, the "Plan Development Agreement," the "Framework Agreement," the "Ontario Auditor General Report," and investigations published by The Globe and Mail, the Toronto Star, The New York Times, etc.

  • Between January 2021 and July 2021, I conducted seventeen interviews with employees of Sidewalk Labs, government officials appointed to evaluate the project, privacy lawyers, and citizens who had organized against the smart city. The interviews were transcribed and coded using the software tools Descript and NVivo.

The Background

In the Sidewalk Toronto partnership, Waterfront Toronto outsourced many functions to Sidewalk Labs (e.g., public engagement, drafting of the smart city and data governance policies for the project, communication with the provincial and city governments, etc.). Therefore, it is not always possible to separate the two parties in this project. Over the project's short life span, Waterfront Toronto changed its attitude toward Sidewalk Toronto from widely publicized support to conditional approval, and then to public criticism of the company's attempts to monopolize governance in the project. In this section, I briefly tell the story of the smart city project and address the factors that ultimately led to its collapse.

Waterfront Toronto is a nonprofit corporation established in 2001 by the municipal, provincial, and federal governments to oversee the development of Toronto's waterfront. Waterfront Toronto has an unusual legal structure. Despite being a corporation, it has no shareholders; decisions are made by the board of directors. According to some of the corporation's employees, this might have led to Sidewalk Labs effectively dominating the partnership:

Examine the Waterfront Toronto legislation and it will very explicitly say that Waterfront Toronto is not an agent of any level of government. Waterfront also indemnifies the three levels of government. No level of government was required to sign off on the Framework Agreement or the PDA. The only role three levels of government would have had was to offer regulatory approvals such as code compliance, public works compliance, but even then, it appears in the Master Innovation Development Plan that the proponent wanted their own “regulatory regime” as in fourteen months.

(Respondent, February 2021)

In 2024, Waterfront Toronto reaches the end of its funding cycle, and it has been actively looking for a private funding partner. Several of my respondents noted that Waterfront Toronto was deeply committed to the partnership with Alphabet/Sidewalk Labs:

When Google came to town, everyone was excited. So maybe, maybe they handed over too much to it, but it might’ve been, you know, in the excitement of the moment. It’s hard to push back against these, these behemoths.

(Respondent, May 2021)

The negotiations over the plot of land along Toronto's eastern waterfront unofficially began in 2016, when an employee of Waterfront Toronto reached out to Sidewalk Labs. A year later, Waterfront Toronto released the request for proposals to develop a smart city on the twelve-acre site along Lake Ontario. To make sure that Sidewalk Labs would be awarded the project, several employees of Waterfront Toronto aided Sidewalk Labs in the preparation of its proposal (Government of Ontario 2018).

Sidewalk Toronto received backing at the federal level. Prime Minister of Canada Justin Trudeau publicly endorsed Sidewalk Toronto and mentioned that he and Eric Schmidt, then executive chairman of Alphabet, had been discussing the project for several years (Hook 2017). The support of the federal government was instrumental in the way Sidewalk Labs was positioned in the project. In October 2017, the board of directors of Waterfront Toronto had planned to vote on the proposal. At the request of the Office of Prime Minister Justin Trudeau, the trustees were given only three days to review and accept the proposal. The board members who voiced their concerns over the lack of due diligence were asked to resign from their positions (O'Kane 2018).

Backed by the highest office in the country, the Sidewalk Toronto partnership showed surprising disregard for the local government (Valverde and Flynn 2020). For the first eight months of the project, Sidewalk Toronto did not share any project documentation with the Government of Ontario or with the City of Toronto (Goodman and Powles 2019). An investigation conducted by the Auditor General of Ontario, Bonnie Lysyk (2018), revealed that Sidewalk Labs' parent company "has purportedly told other candidate communities that they want to control all data in this demonstration project area"; her investigation also revealed that employees of Waterfront Toronto who had assisted in the preparation of Sidewalk Labs' proposal acted in direct violation of the rules of open competition for public assets. Once the investigation was released, Premier of Ontario Doug Ford fired the CEO and Chair of Waterfront Toronto over the deal with Alphabet/Sidewalk Labs.

Members of City Council Joe Cressy and Paula Fletcher repeatedly raised issues regarding the lack of transparency in the procurement of the project and the use of surveillance technologies in the proposed smart city. The conflict between the partnership and the local officials reached its peak in 2020, when the City of Toronto began looking for a legal loophole to shut down Sidewalk Toronto.

Over the two and a half years of the project's existence, the partnership produced several hundred lengthy documents discussing its vision of urban planning, covering topics such as affordable housing, the commons governance of smart infrastructure, and timber buildings. However, these documents rarely mentioned the specific technologies the partnership planned to implement, its data governance strategies, or the financial aspects of the deal.

The secrecy around the project prompted a heated public controversy in Canada, where activists, journalists, and academics pieced together available information about the proposed technologies and the resources at stake. Canadian open government advocate and activist Bianca Wylie was the leader of the pan-Canadian anti-Sidewalk Labs movement called #BlockSidewalk (Zarum 2019). Critics of the project expressed concern about the privatization of public spaces and services, as well as ubiquitous digital surveillance in the proposed smart city (Wylie 2017, 2018, 2020; Balsillie 2018; O'Kane 2018).

It was not easy, however, to publicly oppose the project. The company invested heavily in positive media coverage and hired several internationally renowned academics as paid consultants. Several local experts were promised research funding and lucrative positions once the smart city had been developed. As one of my respondents remembered, when the Canadian Civil Liberties Association (CCLA) sued the three levels of government over the deal with Alphabet/Sidewalk Labs, it had to engage several international experts as witnesses. All but one of the Canadian experts it contacted refused to testify against the partnership.

Reportedly, Sidewalk Labs committed US$50 million for the preparation of the proposal and the citizen engagement campaign (Bozikovic 2017). As a funding partner in the future smart city, the company planned to attract $3.9 billion in financing and a line of credit, including $900 million required to build the proposed real estate and smart infrastructure and an additional $400 million to expand the subway to the eastern waterfront (Sidewalk Toronto 2019, Vol. 3, 31). In exchange, Sidewalk Labs requested the intellectual property and licensing rights in the data collected in the smart city, the 190-acre plot of land to extend the smart city into the Port Lands (with the land being sold at a discounted price), performance payments for advisory and engineering services, and compensation for infrastructure at a market price (Muzaffar 2018; Sidewalk Labs 2018; Vincent 2019). Additionally, the contracts between Sidewalk Labs and Waterfront Toronto precluded Waterfront Toronto from considering other partners for the project before the Master Innovation and Development Plan was submitted.

In October 2019, the board of directors of Waterfront Toronto moved to accept the deal with Sidewalk Labs, although with major cuts (O'Kane 2019). Specifically, the newly appointed leadership of Waterfront Toronto rejected the company's proposal for the new governing entities in the project and requested that all data collected in the smart city be subject to Canada's privacy and data protection legislation. In May 2020, Sidewalk Labs withdrew from the deal.

Outcome-Based Planning

The data-driven planning tool that would allow the developers to remake the smart city on the go was called the "outcome-based code."Footnote 4 The software was expected to replace, within the confines of the smart city, Ontario's "outdated" zoning and building codes (Bowden 2018; Sidewalk Labs 2017; Sidewalk Toronto 2019, Vol. 2, 21). In a city planned by data, both the city infrastructure and the community should be versatile by design and receptive to market signals. The developer should be able to swiftly repurpose lands, city spaces, and buildings to maximize profits. For instance, a park may be redeveloped into a mall or a parking lot if data suggests the land will soon see an increase in value. The new simplified building codes would blur the line between residential and nonresidential spaces, so that any building could be put to different uses.

The first mention of the outcome-based code can be found in Sidewalk Labs' winning bid, the Project Vision (Sidewalk Labs 2017). In the 200-page document, Sidewalk Labs correctly identified the high cost of living in the city and traffic management as Toronto's key problems. To address these issues, the company's software would replace traditional zoning and building requirements and provide Sidewalk Toronto with the flexibility to achieve the market value of property and land in the city:

This new system will reward good performance, while enabling buildings to adapt to market demand for mixed-use environments. It is Sidewalk’s belief that outcome-based codes, coupled with sensor technology, can help to realize more sustainable, flexible, high-performing buildings at lower costs.

(Sidewalk Labs 2017, 120)

In the Province of Ontario, cities are divided into single-use zones to avoid the negative externalities associated with the mixed use of space. For instance, it is illegal in Ontario to build a chemical plant in a residential neighborhood, or a safe-injection clinic in the vicinity of a public school. Ontario's building code is a piece of legislation that governs the construction, renovation, and change of use of buildings in the province (Government of Ontario 2019). The code stipulates specific safety and convenience requirements deemed necessary for residential buildings. For example, developers must protect residents' rights to daylight, privacy, security, and silence. Since the 1990s, the walls and ceilings of residential buildings have not been permitted to contain asbestos, a toxic material.

In the Vision, Sidewalk Labs (2017) proposed to replace "restrictive" state regulation with data-driven planning. Rather than limiting multi-use spaces, the company offered to set minimum standards for comfort and environmental harms. In Sidewalk Toronto, industrial uses could be placed in residential areas but would be fined if the data showed a lack of compliance on the part of the developer or business owner. The sensors embedded in the fabric of the city would monitor energy use, light conditions, and pollution, as well as collect real-time information about the users of city spaces. To take an example provided by the company, instead of respecting residents' rights to light, developers would be allowed to come up with "creative solutions" such as automated canopies. Sidewalk Labs promised that the buildings in the smart city would be monitored throughout their lifecycle, and the use of certain solutions could be restricted based on user reports and algorithmic assessments.
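As a rough illustration of the logic described above, the sketch below shows how an outcome-based rule might evaluate sensor readings against minimum standards and flag a noncompliant use for a penalty. The metrics, thresholds, and fine amounts are invented solely for illustration; Sidewalk Labs never published an implementation of the outcome-based code.

```python
from dataclasses import dataclass

@dataclass
class OutcomeRule:
    """A single outcome-based standard: a monitored metric, the worst value
    allowed, and the penalty applied when sensors report a violation."""
    metric: str
    limit: float
    higher_is_worse: bool
    penalty: float  # fine per monitoring period, in dollars (illustrative)

# Hypothetical minimum standards replacing single-use zoning rules.
RULES = [
    OutcomeRule("noise_db", 55.0, higher_is_worse=True, penalty=500.0),
    OutcomeRule("pm25_ug_m3", 12.0, higher_is_worse=True, penalty=750.0),
    OutcomeRule("daylight_hours", 3.0, higher_is_worse=False, penalty=300.0),
]

def assess(readings: dict[str, float]) -> tuple[list[str], float]:
    """Compare one period of sensor readings against every rule and return
    the violated metrics plus the total fine owed by the occupant."""
    violations, total_fine = [], 0.0
    for rule in RULES:
        value = readings.get(rule.metric)
        if value is None:
            continue  # no sensor data: the rule cannot be enforced this period
        out_of_bounds = value > rule.limit if rule.higher_is_worse else value < rule.limit
        if out_of_bounds:
            violations.append(rule.metric)
            total_fine += rule.penalty
    return violations, total_fine

# A light-industrial tenant operating in a residential block (hypothetical data).
print(assess({"noise_db": 61.2, "pm25_ug_m3": 9.8, "daylight_hours": 2.5}))
# (['noise_db', 'daylight_hours'], 800.0)
```

The sketch makes the governance shift visible: instead of prohibiting a use in advance, the rule simply measures outcomes after the fact and prices noncompliance, which is precisely why the approach depends on pervasive sensing.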

In his damning critique of the outcome-based code, digital media theorist Evgeny Morozov argues that the long-term goal of Sidewalk Labs was to extinguish any form of social organization in the city:

Even neoliberal luminaries such as Friedrich Hayek and Wilhelm Röpke allowed for some non-market forms of social organisation in the urban domain. They saw planning – as opposed to market signals – as a practical necessity imposed by the physical limitations of urban spaces: there was no other cheap way of operating infrastructure, building streets, avoiding congestion. For Alphabet, these constraints are no more: ubiquitous and continuous data flows can finally replace government rules with market signals.

As a form of governance, commons can operate in commercialized, private, and semi-private environments (Frischmann 2012, 67). In Sidewalk Toronto's Master Innovation and Development Plan (MIDP) (Sidewalk Toronto 2019, Vol. 3), private, automated city governance and the commons are not mutually exclusive. Communities are encouraged to provide inputs for the outcome-based code and to take advantage of data analytics to generate monetary and nonmonetary value from city spaces. Many use cases in the MIDP describe an array of business opportunities for residents as a form of community-building in the smart city (Sidewalk Toronto 2019, Vol. 2). Through the new governance entity, called the Open Space Alliance (OSA), citizens would offer their own visions for the common areas and gauge their business potential. As a new governing agency, the OSA was expected to have an operating budget to procure the necessary services from Sidewalk Labs (Sidewalk Toronto 2019, Vol. 2, 179). Stoa, another planning innovation by Sidewalk Labs, was an open-concept market that encouraged community engagement around local businesses (Sidewalk Toronto 2019, Vol. 2, 155). Marketed by Sidewalk Labs as a commons, Stoa promised to reinvent the ground floors of buildings as business opportunities and recreation spaces for residents.

Sidewalk Labs envisioned that data-driven planning would require "four strategies for meaningful reform" (Sidewalk Labs 2017, 121): simplification (residential and nonresidential buildings are subject to the same building requirements); flexibility (municipal codes will be updated based on market performance); interoperability (data-driven rules apply to both public and private spaces); and automated permitting review.

Yet data-driven planning does more than simply clear the planning process of state bureaucracy; its true objective is to put the data controller in the position of a regulator. In a community planned and run by data, the outcome-based code becomes a form of social ordering (Katzenbach and Ulbricht 2019).Footnote 5 In Sidewalk Toronto, the algorithms would be sensitive to misbehavior and noncompliance. Throughout the project's voluminous documentation, the partnership suggests multiple ways the software would reward compliant users and punish misbehavior:

As an alternative to traditional regulation, Sidewalk envisions a future in which cities use outcome-based code to govern the built environment. This represents a new set of simplified, highly responsive rules that focus more on monitoring outputs than broadly regulating inputs. With embedded sensing for real-time monitoring and automated regulation, this new code will reward positive behaviors and penalize negative ones, all while recognizing the value residents and visitors increasingly place on having a variety of uses within one neighborhood.

(Sidewalk Labs 2017, 139)

Sidewalk Labs envisioned technology companies partnering with behavioral scientists to create a “feedback loop,” where human behavior is understood, predicted, and proactively shaped by the data controller, (Sidewalk Labs Reference Kitchin2017, 31). In the MIDP (Sidewalk Toronto 2019, Vol. 2, 351), Sidewalk Labs introduced a use case: the “Pay-as-you-throw” smart disposal systems, where the data about a household is used to set differential pricing.
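The "pay-as-you-throw" use case can be read as a simple differential-pricing function over household disposal data. The minimal sketch below makes that mechanism concrete; the rates and rebate are invented assumptions, not figures from the MIDP.

```python
def monthly_waste_charge(kg_landfill: float, kg_recycling: float,
                         base_fee: float = 10.0) -> float:
    """Hypothetical pay-as-you-throw bill: a flat base fee, a per-kilogram
    landfill rate, and a small rebate for correctly sorted recycling."""
    LANDFILL_RATE = 0.40     # $ per kg sent to landfill (illustrative)
    RECYCLING_REBATE = 0.05  # $ per kg diverted (illustrative)
    charge = base_fee + LANDFILL_RATE * kg_landfill - RECYCLING_REBATE * kg_recycling
    return round(max(charge, 0.0), 2)

print(monthly_waste_charge(kg_landfill=32.0, kg_recycling=20.0))  # 21.8
```

Even in this toy form, the household's bill becomes a function of continuously measured behavior, which is the behavioral "feedback loop" the partnership described.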

It is important to remember that Sidewalk Labs entertained the idea of a social credit system in the smart city. The leaked internal document, the Yellow Book, was prepared by Sidewalk Labs for Alphabet to probe ways to commercialize Sidewalk Toronto. Published by The Globe and Mail (Cardoso and O'Kane 2019), the Yellow Book details how the ubiquitous sensors embedded into the fabric of the city would collect both real-time and historical data about residents. Residents would be rewarded for voluntarily sharing data with the company, and one's digital reputation would become a "new currency for community co-operation." Residents who chose to opt out of data collection would be cut off from certain services. Moreover, Sidewalk Toronto wanted to have private police forces and to use data to prevent crimes and misdemeanors. In general, the document suggests that Sidewalk Labs saw private control of city infrastructure as having "enormous potential for value generation in multiple ways" (Cardoso and O'Kane 2019).

Smart city technologies show enormous potential for shaping and molding human behavior (Vanolo 2014; Hollands 2015; Sanfilippo and Shvartzshnaider 2021). Reflecting the concept of techno-social engineering (Frischmann and Selinger 2018), automated governance seeks to create a compliant, easily controlled subject. It is safe to assume that, in a few years, Sidewalk Toronto residents would be comfortable sharing their banking information with the company in exchange for a reduced electricity bill. Even if a person decided to stand up against the data-driven decisions, in a city that is constantly reshuffled by algorithms there may not be a community to act with. Moreover, in a closely monitored physical environment where a resident's credit score is their currency, any collective activity would likely be stifled.

A Commons Approach to the Smart City

In this section, I analyze the partnership's proposal for the new governance bodies in the smart city. The smart city as digital commons can certainly exist and, when designed properly, the instruments of collective governance may help balance the interests of citizens, government, and businesses. Yet the instruments of collective governance offered by Sidewalk Toronto effectively disenfranchised the public, denying it any rights to the smart city.

Sidewalk Labs CEO Dan Doctoroff's testimony before the Ethics Committee of the House of Commons of Canada discussed at length the inadequacy of Canada's data protection legislation in meeting the challenges brought on by the smart city partnership (House of Commons of Canada 2019). In this meeting, Mr. Doctoroff called for new legal instruments that "would not stifle innovation." In the MIDP, the partnership suggested establishing new governance entities that would mediate between the technology vendor and Canadian authorities and help members of the community collectively govern and manage the smart city infrastructure. These five new bodies of collective governance were the Urban Data Trust, the Waterfront Housing Trust (WHT), the Open Space Alliance (OSA), the Waterfront Transportation Management Association (WTMA), and the Public Administrator (PA).

Although the Sidewalk Toronto partnership never specified the ways residents could gain representation through the trusts, the company did set out the goals of the proposed entities. My analysis shows that the trusts would have benefited some members of the community more than others. Specifically, Sidewalk Labs openly declared its goal of supporting developers in the project, as well as the businesses coming to operate in the smart city.

My previous research on the Urban Data Trust (Artyushina 2020a) draws on rentiership theory (Birch and Muniesa 2020; Birch et al. 2020) to explore the data governance policies in Sidewalk Toronto. My analysis demonstrates that, as an instrument of collective governance, the Urban Data Trust was imbued with the conflicting goals of making a profit from data collected in the smart city and protecting citizens' privacy. As part of its proposed digital innovation plan, Sidewalk Labs was seeking ways to reconceptualize personal data collected in the smart city as a private asset. Canadian privacy lawyer Teresa Scassa (2020) raised an issue with the concept of "urban data" as devised by the partnership. By introducing new, quasilegal concepts, the proposal would make the data collected in the smart city exempt from Canada's privacy legislation. Neither Sidewalk Labs nor Waterfront Toronto specified the legal framework for the trust. Earlier versions of the proposal (Waterfront Toronto 2018) mentioned that the trust would have fiduciary obligations toward the residents; however, subsequent documentation clarified that the Urban Data Trust was never meant to be a trust "in a legal sense" (Sidewalk Toronto 2019, Vol. 2, Ch. 5, 423).

The real estate in Sidewalk Toronto would be governed by the Waterfront Housing Trust (Sidewalk Toronto 2019, Vol. 2, 284). Sidewalk Labs released an ambitious affordable housing plan in which 40 percentFootnote 6 of the residential units in the smart city would be sold or rented below market price. The trust would assemble funding from a variety of private and public sources and direct this funding toward the below-market housing in Sidewalk Toronto. According to the partnership, this would increase the returns and predictability for developers.

The proposed affordability plan received mixed reactions. Toronto Mayor John Tory called the company's affordability plan "encouraging," stating: "I am determined to build more housing in Toronto to help address affordability issues" (Vincent 2018). Local developers and urban planners were less optimistic. Toronto developer Julie Di Lorenzo, who resigned from the board of directors of Waterfront Toronto over the deal with Sidewalk Labs, said in an interview with the Toronto Star that the proposal was not in the interest of the citizens: "How will that be subsidized? Are there subsidies by our government, or are they using the land value of Quayside to subsidize the housing? If the land value of Quayside is being used to subsidize the land value, it is the choice and contribution of our governments – not Sidewalk" (Vincent 2018). Former Toronto Chief Planner Jennifer Keesmaat criticized the company for overpromising:

Sidewalk breaks down their plan like this: five per cent of units will be deeply affordable, or at least 60 per cent below market rate. Another 15 per cent will be affordable as defined by the city – at or below the average market rent – and the final 20 per cent will be affordable for middle-income households. But people on a minimum-wage salary can’t even afford an average-priced one-bedroom apartment. Even if Sidewalk’s second tier of affordability is on par with the market, these homes are wildly overpriced compared to Canadian wages. It’s even more ridiculous to suggest that the homes designed for middle-class households are affordable by any stretch of the imagination. In a real estate economy as hot as ours, affordability cannot depend on the market. The city of Toronto needs to make the definition match the reality.

The Open Space Alliance was another instrument of collective governance proposed by Sidewalk Toronto (Sidewalk Toronto 2019, Vol. 2). The OSA was set to help Sidewalk Labs and the community govern and manage the streets, parks, and recreation zones. In this blueprint for a future smart city, data would help the developers identify new “open space assets” and the OSA would create opportunities for more retail and recreation (Sidewalk Toronto 2019, Vol. 2, 184). The partnership challenged the concept of public space by envisioning “flexible outdoor spaces,” which would be governed and co-financed by a range of sources through the OSA (Sidewalk Toronto 2019, Vol. 2, 123). In this model, local businesses would help maintain the outdoor spaces in Sidewalk Toronto in exchange for potential opportunities.

Redefining public spaces as “flexible outdoor spaces” would also help reduce the need for municipal services in the smart city (Sidewalk Toronto 2019, Vol. 2, 186). The OSA would oversee the gardens and maintenance in Sidewalk Toronto through an interactive digital map, algorithms, and the help of volunteers. Sidewalk Labs partnered with two Canadian nonprofit organizations to develop a prototype of the CommonSpace app, through which residents could report problems and submit maintenance requests to the company (Sidewalk Toronto 2019, Vol. 2, 184).

The partnership argued that the OSA would fix the problem of intersecting responsibilities, which results in public spaces not being properly cared for. Some municipal services, such as the parks and recreation departments, could be eliminated altogether. Data modeling and residents reporting problems through the app would help Sidewalk Labs plan for when additional help is needed and hire temporary workers. Another app would contain the information needed to tend to the trees and plants in the city’s green zones, which could be done by people without specialized knowledge:

This app could use image recognition to help identify plants as well as pest and disease issues, making it easier for people to keep the garden in a state of good repair without specialized landscaping knowledge. The OSA could agree to instruct their maintenance workers to use the app as part of a pilot.

(Sidewalk Toronto 2019, Vol. 2, 191)

Traffic management was another important part of the partnership’s plan to modernize the city (Sidewalk Toronto 2019, Vol. 2, Ch. 1, 5). As a general principle, Sidewalk Toronto aimed to limit the use of private cars in the smart city. Each residential area would offer the necessary infrastructure at a walkable distance; for all other purposes, residents would be encouraged to use self-driving shuttles set to replace public transit. Just like ride-hailing services, the shuttles would be available on demand and transport residents directly to where they need to be. The partnership envisioned the digital mobility system as a core component of the smart city, on top of which all other services and products could be developed. Additionally, the company requested $1.2 billion in public funding to build a light rail transit line in Sidewalk Toronto.

In the proposal, the City of Toronto's approach to transportation management was called "piecemeal," a reference to the fact that different departments oversee parking, traffic lights, and transit fees. The proposed Waterfront Transportation Management Association (WTMA) would coordinate all transportation systems in the smart city and employ data about residents' movements to use the roads and highways more efficiently: "a new public entity tasked with coordinating the entire mobility network – can manage traffic congestion at the curb by using real-time space allocation and pricing to encourage people to choose alternative modes at busy times" (Sidewalk Toronto 2019, Vol. 2, 367).
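The quoted mobility vision amounts to demand-responsive pricing of curb and road space. A minimal sketch of such a pricing rule follows; the occupancy thresholds and multipliers are assumptions chosen for illustration, not figures from the MIDP.

```python
def curb_price(base_rate: float, occupancy: float) -> float:
    """Hypothetical real-time curb pricing: raise the hourly rate as observed
    occupancy climbs, nudging drivers toward transit or off-peak times.

    occupancy is the share of curb spaces currently in use, between 0 and 1.
    """
    if not 0.0 <= occupancy <= 1.0:
        raise ValueError("occupancy must be between 0 and 1")
    if occupancy < 0.60:        # plenty of space: keep the base rate
        multiplier = 1.0
    elif occupancy < 0.85:      # getting busy: modest surcharge
        multiplier = 1.5
    else:                        # near capacity: steep surcharge
        multiplier = 3.0
    return round(base_rate * multiplier, 2)

print(curb_price(base_rate=2.50, occupancy=0.9))  # 7.5
```

Whoever controls the sensors and sets these multipliers effectively sets transportation policy, which is the concern local officials raised about the WTMA.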

Reacting to the idea of a new transportation authority, City Councilor Gord Perks pointed to the fact that the new governance entity would erode citizens’ rights by relieving elected officials of decisions regarding the transportation needs of the city. He stated: “Over my dead body. Accountability to the public is greatly harmed […] This would further cement that distance between people that elect governments and the decisions that they make” (Spurr 2019). The proposed governance entity faced a lukewarm reception in the local press for the lack of public oversight:

The proposed WTMA illustrates the point: within the Sidewalk development zones, this body would take over management of traffic, signals, curbsides, price-setting for rides and parking, mobility subscriptions, technology procurement, the operation of Sidewalk’s “dynamic pavement” and flexible streets, and coordination with companies providing navigation apps. Financed by fees generated by these activities on a cost-recovery basis, the WTMA would report to the proposed public administrator, which, Sidewalk officials say, may or may not be Waterfront Toronto. Where the public connects to this formidable entity is anybody’s guess.

The Public Administrator (PA) was the fifth governance entity proposed by Sidewalk Toronto. The PA was expected to become a key intermediary between the company and state regulators. When announcing the deal, Alphabet chairman Eric Schmidt mentioned that a project of the scale of Sidewalk Toronto may need "substantial forbearances from existing laws and regulations" (Hook 2017). For instance, legislation might need to be revised to allow private companies access to the public facilities required to build the physical infrastructure of the smart city; with the outcome-based code in place, Sidewalk Labs would need to communicate regularly with Canadian environmental agencies. The PA was designed to help update Canada's legislation in response to the demands of the project (Sidewalk Toronto 2019, Vol. 3, 70).

The PA idea was met with skepticism from local public officials. In his open letter to the trustees of Waterfront Toronto, Brian Beamish, the Information and Privacy Commissioner of Ontario, expressed concerns about the PA delivering key public services that fall within the mandate of the City of Toronto while not being subject to the same access to information and privacy legislation (Information and Privacy Commissioner of Ontario 2019).

Members of the public were vocal in their criticism of the Public Administrator concept. Bianca Wylie said in an interview with the Toronto Star that there was no reason to make Sidewalk Labs a broker between the citizens and elected officials: "At what cost and for what reason is a corporation becoming a broker between people and their governments in terms of designing how we live?" (Rider 2019). Similarly, Pamela Robinson, a Canadian urban planning scholar and advisor to Waterfront Toronto, argued that, in Sidewalk Toronto, more robust government oversight was needed to make sure that companies' financial interests did not override the public interest (Robinson 2019).

It would be fair to say that the Sidewalk Toronto partnership failed to engage meaningfully with citizens and local authorities in Canada. Many critics of the project pointed to the lack of clarity about the mandate of the new governing entities and argued that the company aimed to privatize city governance entirely. Several of my respondents argued that the three levels of government simply did not have the in-house experts to properly evaluate the project. In 2019, the Canadian Civil Liberties Association (CCLA) sued the three levels of government over the deal with Sidewalk Labs.

Conclusion

In this chapter, I have drawn on the GKC framework to analyze Sidewalk Labs' proposal for the automated planning and collective governance of the smart city assets. My key argument in this chapter is that, without both the public space and the public, no collective action is possible in the smart city. Attractive as a concept, in practice the outcome-based code would erode the city fabric and destroy horizontal ties between citizens. The partnership's vision of the commons was similarly deficient. As the Sidewalk Toronto case vividly demonstrates, trusts become useless when imbued with the mutually exclusive goals of profiting from the city's resources and protecting the public interest.

Because of the complex socio-material and legal nature of smart cities, their infrastructure is often controlled by multiple stakeholders. The idea of governing these assets collectively has enormous potential, yet the way Sidewalk's five new governing bodies were designed would have precluded the citizens from having any meaningful representation. Both the outcome-based planning and the collective governance in this project were intended to serve the commercial interests of Alphabet/Sidewalk Labs.

Canada has a recent history of failed or contested public–private partnerships, where critics pointed to the lack of transparency regarding the financial interests of the parties – e.g., the Superclusters Initiative and several projects that failed to make high-speed internet available in remote and rural areas (Valverde and Flynn 2020). Citizens who organized against Sidewalk Toronto continuously invoked Canada's historic commitment to the welfare state and its strong government oversight of business operations. The #BlockSidewalk citizen group repeatedly questioned the mandate of Waterfront Toronto to represent the Canadian government in the project. Bianca Wylie argued that the three levels of Canadian government were tragically unprepared to deal with a technology company that had designed smart city policies for its own financial gain (Wylie 2020). In 2021, the federal government started appointing Chief Data Officers across its departments. Statistics Canada now runs workshops on data stewardship for Canadian public officials.

After the smart city project in Toronto was canceled in 2020, Sidewalk Labs partnered with Kansas City and the City of Portland to trial Replica, the data-driven tool formerly known as the outcome-based code (Bowden 2018). One year later, both municipalities rejected Replica, reportedly over privacy and transparency issues (Coulter 2021). However, the startup Replica has raised $41 million in Silicon Valley and allegedly plans to sell its product in Europe and Asia (Wolpow 2021). The idea of data stewardship through trusts is being implemented as part of the European Union's AI governance framework (Artyushina 2021).

9 From Thurii to Quayside: Creating Inclusive Blended Spaces in Digital Communities

Richard Whitt

We can be controlled from the outside not simply by having our choice bypassed but by someone controlling the world we perceive.

Introduction

The modern-day digital community provides an opportunity to follow the unifying threads of governance, physical spaces, and technologies, as played out through the deployment of local software-based sensors and gateways. As we will see, the now-defunct Sidewalk Labs project in Toronto highlights the challenges, and limitations, of developing such a comprehensive system of interfaces, in the absence of sufficiently inclusive and holistic mechanisms to govern their use.

This chapter presents a brief thought piece that frames several of the key governance challenges that cities face when approaching the Internet of Things (IoT) and other "smart" technologies. Those challenges in particular fall within two areas: human governance and technical interfaces. In the first section, we will look briefly at two planned cities – the ancient Greek city of Thurii and the modern cityscape of Quayside in Toronto, Canada – as exemplifying the different layers of inclusivity that can and should work well together in communities of trust. One proposed takeaway raised in the second section is the desirability of planning digital communities that invite active human participation in the blended spaces between the self and the world, the private and the public, and the physical and the virtual. As it turns out, this takeaway is entirely consistent with the notions of participatory community governance at the heart of the Governing Knowledge Commons (GKC) framework (Frischmann, Madison, and Strandburg 2014), summarized in Chapter 1, and elsewhere in this book.

For the Quayside cityscape, the third section of this chapter focuses on two particular layers. First, it introduces inclusive governance, which in Quayside spanned proposals from civic data trusts to urban trusts. That exploration includes consideration of the knowledge commons, and by extension the ancient Greek agora, as useful framing references. The digital fiduciary is offered up as another governance model worth exploring. In particular, such entities could employ a virtual trust layer, an encapsulation of fiduciary-based obligations within the entity’s data flows and algorithmic decision points.

It then digs into inclusive interfaces, using as an exemplar the evolving and still-incomplete work of Sidewalk Labs' design engineering team, the Digital Transparency in the Public Realm (DTPR) project. More human-agential versions of these interfaces exemplify what are introduced here as edgetech systems, as opposed to the cloudtech systems that dominate the Web today. The chapter establishes edgetech capabilities as incorporating three key elements: (1) the edge-to-any/all (e2a) design principle, (2) multiple end-user-facing modalities of data, computation, and interfaces, and (3) a mix of edge-pushing and edge-pulling functionalities that empower the end user (Whitt 2021a, 191–207).

The fourth section observes how the digital fiduciary, paired with a personal AI, could help the individual successfully navigate the inclusive new blends of physical and virtual spaces in their digital communities (Whitt 2021a, 193–97). Finally, in the fifth section, certain government policies are identified that would enable these more participatory governance and edgetech opportunities (Whitt 2021a, 209–15).

Two Cities, Two Governance Challenges
Considering the Open Streets of Thurii

In 444 BC, Pericles of Athens directed that a small group of Athenian citizens converge on the remains of the small settlement of Sybaris, on the coast of the Italian peninsula. There, according to the historian Diodorus Siculus, was founded a Pan-Hellenic colony called Thurii (in modern-day Calabria), presided over by representatives from ten tribes from all over Greece. Aspirationally at least, Thurii was the first planned city to be truly owned by the world.

Author David Fleming has developed an interesting twist on the story of Thurii. His concern is "not so much the facts surrounding the town as the idea behind it, the vision of a good society that seems to have motivated it" (Fleming 2002, 5). In Fleming's telling, the town was planned as a model city incorporating three core design principles:

  • a democratic constitution (governance);

  • an “open,” orthogonal street layout (architecture); and

  • a rhetorically designed educational system (information flows).

Fleming argues that Pericles the political leader, Hippodamus the city architect, and Protagoras the lawmaker shared a common image for Thurii: “an autonomous community of free and equal citizens who would govern themselves through their own practical human capabilities – that is, through speaking, writing, and debating with one another” (Fleming 2002, 6). This image would play out in crafting the new city’s constitution, forming its educational system, and designing its built space. To Fleming, Thurii stands for the proposition that “a free, open, and well-functioning democracy depend[s] on those interconnections” (Fleming 2002, 27).

The goal of the Thurian enterprise was simple yet profound: to establish an inclusive global city, based on the best political, architectural, and educational precepts of that time. Of course, we should keep in mind that democracy in those days ran both relatively narrow (limited to free adult males) and deep (direct civic participation).

Per Fleming’s suggestion, however, we should focus more on what Thurii can represent for modern sensibilities. To Pericles, for example, Fleming ascribes a rare understanding of how political community can “imply a particular configuration of civic space as well,” an image of “political and spatial equality.” Periclean oration paints a picture of Athens as a polis “where people can come and go as they please without surveillance from an inaccessible and mysterious hilltop.” A place “where the gaze of the many is directed to only a few” (Fleming 2002, 12).

A “valid urban plan” was designed by Hippodamus of Miletus, said to be the originator of planned cities. In Thurii, he organized a carefully laid-out network of main roads (plateiai) and secondary orthogonal roads (stenopoi). Two separate, open-air agoras for collective gathering presumably anchored the layout (Brioschi and Marino 2018). Viewed recently as “more of a philosopher than architect” (Kirkpatrick 2015), Hippodamus theorized about the ideal community and its political, social, and judicial organization (Burns 1976). The “emphasis of his innovations was directed towards the over-all functional plan of the city rather than the details of street lay-out.” Among the other elements attributed to him as expressive of a democracy were land allocation criteria of “absolute equality among residential blocks.” As Fleming puts it, Hippodamus’ design “demonstrates remarkable faith in ordinary people, their practices and capabilities” (Fleming 2002, 18).

Finally, the sophist philosopher Protagoras of Abdera was asked to establish the laws of a sophisticated and inclusive participatory democracy. The noted Greek historian Guthrie speaks of Protagoras’ “invincible respect for the democratic virtues of justice, respect for other men’s opinion and the processes of peaceful persuasion as the basis of communal life” (in Rutter 1973, 165 n. 61). To which Rutter (1973) adds, “Thurii was a tough assignment.” Nonetheless, the city became known for its well-ordered system of laws.

Thus, Thurii in its idealized form can be held up as a type of model community, one that sought to merge considerations of inclusive physical spaces with “virtual” environments of open political governance and communal public discourse. In other words, a holistic blending of inviting spaces, participatory public life, and an equality of gaze.

Considering the Smart Streets of Quayside
Digital Communities on the Rise

Planned communities became prevalent in the United States beginning in the 1950s. It is with the so-called smart city, however, that the technology of the Internet of Things is expected to bring the planned community to an entirely new level.

By one definition, smart cities use a mix of connected technology and data to “(1) improve the efficiency of city service delivery (2) enhance quality of life for all, and (3) increase equity and prosperity for residents and businesses” (Digi.city 2021). Prominent smart cities such as Barcelona, Amsterdam, and Helsinki are premised on harnessing connected technologies to help manage common areas, particularly in larger municipalities. Examples of popular use cases include automotive traffic control, air quality sensing, streetlight controls, waste management, and noise detection (Marr 2020).

For many, the smart city has a particular connotation: it is presumed to be organized and run by a local municipal government, limited to public land, and dedicated to expressly civic purposes. While that may well be the case for a number of these projects, that description does not nearly exhaust the possibilities. In fact, the governance structures, objectives, and functions of these sensor-equipped physical spaces run along a more expansive continuum.

  • First, where connected infrastructure is brought into an outdoors space, the governing entity can be purely public (a government body), purely private (a corporation), some mix of the two (a public–private partnership), or something else altogether.

  • Second, the physical area need not be publicly owned land, but also extends to private lands and spaces. Indeed, the local shopping mall, the popular restaurant, the neighbor’s front door, the airspace by one’s bedroom window – each of these is an example of a physical space hosting IoT devices and interfaces.

  • Third, the primary purpose can be to enhance existing government roles – traffic control, energy and waste management, policing, and so on – or it can accommodate many other “smart” intentions, including deriving pecuniary value for the surveilling entity.

  • Fourth, the data collected can be purely “environmental” – the air quality – or purely “personal” – recognizable human faces. Some have suggested a new category of nonpersonal data (NPD), such as the movements and flows of pseudonymous human bodies (Gopalakrishnan 2020).

  • Fifth, as recent speculations about the “metaverse” demonstrate, the physical environment can be further enhanced by a blend of virtual technologies – such as augmented reality (AR), mixed reality (MR), and extended reality (XR). These still-developing digital overlays only present additional legal, policy, and ethical complications to an already challenging mix of data governance-related scenarios (Norton Rose Fulbright 2021).

Given this broad range of users and use cases, we will refer here to digital communities. These will be defined as those physical spaces and their accompanying public/private institutions that employ digital technologies to surveil, extract, and analyze personal and environmental data, and to use those data for various forms of behavioral manipulation. The smart city is but one particular use case within that broader category.

Sidewalk Labs in Toronto

An early pioneer of the smart city concept was Alphabet’s Sidewalk Labs project in the Quayside neighborhood of Toronto, Canada. As first announced publicly in October 2017, the project carried the potential to provide benefits to citizens and visitors that included enhanced security, environmental monitoring, and more efficient deployment of government resources (Lu 2019).

As the project unfolded over two and a half years, questions arose about its intentions and impact. Two considerations attracted considerable attention: the project’s ever-shifting governance structure, and its use of IoT technologies to gather and analyze what was termed “urban data.”

In May 2020, citing economic conditions arising from the Covid-19 pandemic, project director Daniel Doctoroff announced that Sidewalk Labs was shutting down its Toronto project (Doctoroff 2020). The City of Toronto continues with its own plans for the space, but without Sidewalk Labs as a partner.

The Sidewalk Labs project in Quayside leaves both some open issues to explore and some useful insights to be gleaned. In the next section, we will first review the unique challenges for human agency in a digital community environment. We will then focus on the untapped potential from the Sidewalk Labs experience in Quayside, in terms of both human governance models and virtual interfaces.

Virtual Gateways: Lack of Inclusion, Lack of Balance

As natural beings in the world, humans inhabit an environment of mediation. Many modern scientists and philosophers agree that the human mind is not a mere mirror reflecting its surroundings. Instead, our somatic, sensory, emotional, and mental systems interact constantly, helping us to define reality and act accordingly (Whitt 2021a, 160–62).

Technology too mediates between human beings and our experiences, often via software-based interfaces (Whitt 2021a, 162–64). These amount to different kinds of points of presence – physical, virtual, or conceptual – at boundaries where information signals flow between systems. As but one example, Luciano Floridi has recently observed how marketing entities use people as interfaces, to be exploited by commercial and political players for our data, our money, and our votes (Floridi 2019).

In Web-based technologies, an interface is “the way in which one glob of code can interact with another” (Galloway 2012, 31). Over time, Web interfaces have been developed to provide a user experience (UX), typically by pushing that experience in the user’s direction. Representative examples of these “cloud-push” gateways include graphical user interfaces (GUIs), voice-controlled interfaces, gesture-based interfaces, and public forms of application programming interfaces (APIs). These choices typically are made on the user’s behalf, without their participation, feedback, or consent. In other words, these interfaces are not particularly inclusive (Whitt 2021a, 195).

Cloudtech Systems in Our Lives: “Screens, Scenes, and Unseens”

Every day we interact with computational systems via three kinds of interface, envisioned here as digital “screens, scenes, and unseens” (Whitt 2021a, 144–45). These cloud-based interfaces can be considered the sensory subsystems of those computational systems – the means by which they watch, listen, and speak.

  • Online screens on our various devices lead us to the search engines and social media platforms, and countless other Web portals in our lives. Institutional AIs render recommendation engines that guide us to places to shop, or videos to watch, or news content to read.

  • Environmental scenes (sensors) are the “smart” devices – cameras, speakers, microphones, sensors, beacons, actuators – scattered throughout our homes, offices, streets, and neighborhoods. These computational systems gather from these gateways a mix of personal (human) and environmental (rest of world) data. They are the “eyes and ears” of increasingly complex monitoring and analysis systems. The Ring doorbell placed by your neighbor across the street is but one example.

  • Bureaucratic “unseens” are computational systems hidden behind the walls of governments and companies. These systems can render hugely life-altering judgments about our basic necessities, and personal interests – including who gets a job or gets fired, who is granted or denied a loan, and who receives what form of healthcare.

In the digital community context, all three types of systems and their interfaces come into play: our mobile device screens, the environmental scenes all around us, and the unseen actors that actually set the rules of engagement (Whitt 2020b, 2020c). Figure 9.1 shows these three types of systems.

Figure 9.1. Screens, scenes, and unseens, GLIA Foundation

In all three instances, numerous decisions are being made and policies are being carried out by algorithms – but is the decision-making process equitable, and is the handling of the personal data accountable?

Bringing Cloudy SEAMs

To software designers, robust feedback between people is supposed to be “the keystone of the user-friendly world” (Kuang and Fabricant 2019, 32). Problems emerge, however, when one or both sides of the equation lack feedback, so they are “not feeling the stakes” (Kuang and Fabricant 2019, 34). Unfortunately, these issues of imbalanced information flows are pervasive on the Web, and in particular among those who employ so-called cloudtech software applications.

A term I have employed to describe these interrelated activities is the “SEAMs cycle” (Whitt 2021a, 148–53). Cloudtech computational systems require fuel – steady streams of data that in turn render compensation to players in the Web platforms ecosystem. At the direction of platform companies and others, the SEAMs cycle has become the “action verb” of these computational systems.

The SEAMs paradigm is instantiated in exploitative feedback cycles, which harness four interlocking control points of the computational action verb. S is for “surveilling,” via devices in the end user’s physical environment. E is for “extracting” the personal and environmental data encased as digital flows. A is for “analyzing,” using advanced algorithmic systems to turn bits of data into inference and information. And M is for “manipulating,” influencing outward physical behaviors by users and others (Whitt 2021a, 148–51).
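To make the cycle’s mechanics concrete, here is a minimal, purely illustrative sketch of the four SEAMs stages modeled as a data pipeline. It is not drawn from any actual platform’s code; every name and data field in it is hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class SeamsCycle:
    """Toy model of the four SEAMs control points (all fields hypothetical)."""
    observations: list = field(default_factory=list)   # S: surveilling via environmental devices
    raw_data: list = field(default_factory=list)       # E: extracting data as digital flows
    inferences: dict = field(default_factory=dict)     # A: analyzing data into inferences
    interventions: list = field(default_factory=list)  # M: manipulating outward behavior

    def surveil(self, reading: dict) -> None:
        self.observations.append(reading)

    def extract(self) -> None:
        # Value flows one way: from the end user's environment into the platform.
        self.raw_data.extend(self.observations)

    def analyze(self) -> None:
        # Turn bits of data into inference (here, a trivial count of sightings per subject).
        for reading in self.raw_data:
            subject = reading["subject"]
            self.inferences[subject] = self.inferences.get(subject, 0) + 1

    def manipulate(self) -> None:
        # Influence flows back out, e.g. targeted nudges aimed at the most-observed subjects.
        self.interventions = [
            {"subject": s, "nudge": "targeted offer"}
            for s, count in self.inferences.items() if count > 1
        ]


cycle = SeamsCycle()
cycle.surveil({"subject": "pedestrian-42", "location": "plaza"})
cycle.surveil({"subject": "pedestrian-42", "location": "storefront"})
cycle.extract()
cycle.analyze()
cycle.manipulate()
print(cycle.interventions)  # [{'subject': 'pedestrian-42', 'nudge': 'targeted offer'}]
```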

A core concept is that cloudtech computational systems deploying SEAMs cycles seek to maximize extraction of data and user engagement – but on their own terms. Thus, through institutional control over these data gateways, most derived value from data and content typically flows in one direction – the “SEA” of the SEAMs cycle. Figure 9.2 shows these flows.

Figure 9.2. “SEA” cycle flows, GLIA Foundation

In the other direction flow the shaping influences – the “M” of manipulation. The placement of intelligence and control technologies within infrastructure systems allows companies and governments alike to wield significant power (Frischmann and Selinger 2018, 134–42). Figure 9.3 adds that crucial element of manipulation.

Figure 9.3. “SEAMs” cycle flows, GLIA Foundation

This pronounced interfacial one-sidedness makes many of the computational systems that we use every day unbalanced. As we interact with the Web through our device interfaces, we stand on many entities’ virtual borders without even realizing it. One key way to achieve greater balance is to design more inclusive computational systems.

Receding Interfaces, Hidden Power

The issue, of course, is that those with the power can use it to establish interfaces as “control regimes” (Galloway 2012, 90–94). Interfaces are not merely technical portals; “in the user-friendly world, interfaces make empires” (Kuang and Fabricant 2019, 145). They also provide, or withhold, the digital affordances with which humans can exercise their full autonomous powers (Whitt 2021a, 198–99).

As it turns out, over time interface technologies tend to evolve from more visible to less visible (or even hidden) forms. What once was an obvious part of the user’s interactions with a system gradually becomes embedded in local environments, and may even vanish altogether. As computer scientist Mark Weiser put it some thirty years ago: “the most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it” (Weiser 1991, 94).

With those “cloudtech” interfaces, the trade-off for humans is straightforward: exchanging control for more simplicity and ease. In these contexts, technology moves from being a tool to acting as an agent of the underlying system. While interfaces can remove friction, at the same time they can foreclose thoughtful engagement (Frischmann and Selinger 2018, 142–46). When you reduce participation, you reduce involvement in decision-making. While this progression may well bring many benefits, it also muddies the motivations of the system operating silently from a distance.

Human engagement with these receding interfaces also becomes less substantive. From typing on keyboards, to swiping on screens, to voicing word commands, the interface context shapes the mode and manner of the interaction. At the same time, these systems can conjure the illusion that they still support human agency (Frischmann and Selinger Reference Frischmann and Selinger2018, 124–36). From the perspective of the average person, interfaces to these systems can seem deceptively controllable – local, physical, and interactive – even as the mediating processes themselves are far removed, virtual, and unidirectional.

Getting Lost in the Digital Scenes

As we move from the “screens” of our personal devices to the “scenes” of digital communities, the lack of symmetry and inclusion in our cloudtech interfaces becomes all the more acute. As has been pointed out elsewhere, concepts like self and world, the inner and the outer, inhabit more a continuum than a duality (Whitt Reference Whitt2021a, 161–63). Relational boundaries have been called “the space of the self … the open-ended space in which we continually monitor and transform ourselves over time” (Couldry and Mejias Reference Couldry and Mejias2019, 161). This circle of inner and outer spaces never-endingly turns in on itself, as “a materially grounded domain of possibility that the self has as its horizon of action and imagination” (Couldry and Mejias Reference Couldry and Mejias2019, 156). As Brincker (Reference Brincker, Timan, Newell and Koops2017, 85) puts it:

As perspectivally situated agents, we are able to fluidly shift our framework of action judgment and act with constantly changing outlooks depending on the needs and opportunities we perceive in ourselves and our near surroundings in the broader world … We continuously co-construct and shape our environments and ourselves as agents.

If we follow the “4e” school of cognition, the role of natural and technological mediation processes becomes even more important. It turns out that the scope of human cognition is extracranial, constituted by bodily processes (embodied), and dependent on environmental affordances (embedded and extended) (Newen, De Bruin, and Gallagher 2014; Spivey and Huette 2014, 306). If the self and its environment essentially create each other, whether and how other people and entities seek to control those processes becomes paramount (Frischmann and Selinger 2018, 81–101).

The implications are significant for those now living their lives in the “scenes” of our digital communities environment. These include:

  • The individual’s persona is already composed of a blend of private, collective, shared, and public data – which includes what some are now calling “non-personal” data (NPD) (Whitt 2021a, 162–63).

  • The individual’s environment is a constantly shifting panorama of the public (the city courthouse), the private (the grocery store), and the in-between (the connecting sidewalks). Architect Stavros Stavrides challenges us to look behind the cityscapes, to perceive the space-as-commons, expressing and exemplifying novel forms of social relations (Stavrides 2020).

  • The systems are owned and controlled by one or more entities, each with different incentives for employing SEAMs cycles.

Further, the advanced technology systems being deployed in these environments can register and collect a vast range of biometric information about the self – from one’s geolocation, facial expressions, and voice patterns to one’s walking gait (Thales Group 2021). And yet merely by traversing a sensor-laden physical space, an individual is assumed to accept the sensors’ presence and operation – with no realistic opt-out.

In the shift to a “scenes”-dominated environment, it seems we are expected to remain the largely passive users of the Web’s “screens” environment. As one European report has amply detailed, the user’s loss of control in digital public spaces is manifold – including the inability to consent, or object, to data surveillance, collection, and processing (Christofi and Verdoodt 2019). In systems parlance, the feedback loops of these physical spaces become even more attenuated, and can disappear altogether. Traditional accountability concepts, such as notice and choice, can become meaningless in these environments.

Nor is there an actual living entity with which to engage. In the typical digital community, drivers, pedestrians, and others at most may receive some transparency in how systems make use of data, and some accountability in how systems safeguard such data. And yet the individual has no place in that decision tree. There is no obvious opportunity to engage, to question, to negotiate, to challenge, to object, to seek recourse – in other words, to exercise one’s personal agency.

Without such a mediating process in place with the underlying “rules of the road,” and with interfaces unable to accept and act upon such mediations, there is no viable way to opt out of the system’s prevailing SEAMs control cycles. Stavrides for one argues that the “governing elites” intentionally seek to embed in cityscapes ways of continuing to define individuals as “economic subjects … whose behavior and motives can be analyzed, channeled, predicted upon, and, ultimately controlled by the use of economic parameters and measures only” (Stavrides 2020, 160). Such population “governance” also seeks to “ensure that people continue to act and to dream without any form of connectedness and coordination with others” (Stavrides 2020, 160). Fleming (2009) puts it well:

New technologies have not made place irrelevant in our lives or fundamentally altered our embeddedness in the physical world. If anything, they have made place more important. Despite our fractured subjectivity, our insistently networked existence, and our hybrid culture, the ground under our feet remains surprisingly important to us and desperately in need of our care.

Sidewalk Labs: Untapped Potential

As we consider the potential of modern smart city experiments such as Sidewalk Labs, we can look to ancient precedents for insight. As we have seen in ancient Thurii, one scholar observes, “the act of writing a constitution and tracing a grid … are symmetrical concepts,” because they invite broad participation by citizens (Kirkpatrick 2015). That same kind of balance from the old world of architecture and city planning can also be achieved in digital connections and the exchange of data.

What useful takeaways can be derived from the Sidewalk Labs project in Toronto? At least two interrelated conversations are worth continuing: first, opening up the back end of the project governance and, second, opening up the front end of the software interfaces. Optimally, balanced processes of interaction within these two forms of human-to-system interfaces would be devised and implemented in concert.

Designing Inclusive Governance

The Quayside project’s ultimate demise was unfortunate in at least one respect: it precluded a more open conversation about the precise mechanisms and processes that could constitute a successful governance model. As we will explore below, while community data trusts are one potential form of inclusive governance, the knowledge commons and the digital fiduciary offer other models worth exploring.

Community Data Trusts

The data trust offers one particular governance model to engender greater trust and accountability. Here, the data trust concept can be adapted to apply to the technology overlays in a civic or community setting.

An early proponent of the “civic trust,” Sean McDonald (Reference McDonald2015), has explained how the model uses trust law to build public participation spaces. Specifically, the civic trust embeds network considerations into the way that technology products evolve. The public is the trust, the technology company is the licensee, and stakeholders can include users, investors, and the public at large. An independent organization would own the code and data resources, which third parties in turn could use and adapt. The civic trustee would ensure that the public has a meaningful voice, as well as foster the integrity of decision-making processes (McDonald Reference McDonald2015).

There are few and limited examples of civic trusts globally (MaRS 2021). As McDonald (2018) observes, the Toronto project was to have been the world’s largest-scale proposed civic data trust. The project’s publicly stated goal matched that lofty scale: its “proposed approach to digital governance aims to serve as a model for cities around the world” (Sidewalk Labs 2018).

Sidewalk Labs began exploring the creation of what it first labeled a “data trust,” and then a “civic data trust,” before settling on the nomenclature of an “urban data trust” (UDT). Sidewalk Labs itself made clear that the UDT model would not be a trust in the legal sense – meaning, among other things, no adoption of express fiduciary duties to trustors (Sidewalk Labs 2018). Not surprisingly, the shifting approaches attracted public resistance, including from some associated with Waterfront Toronto itself (Vincent 2019).

While Sidewalk Labs garnered praise in some quarters for making its proposals public, others deemed the proposals themselves “riddled with contradictions,” including conflicting theories of control over data (McDonald Reference McDonald2018). In an early critique, Ellen Goodman and Julia Powles (Reference Goodman and Powles2019) pointed out the project’s lack of meaningful transparency and public accountability for proposed data practices, while raising questions about the notion of private uses of public spaces. Others decried the “neoliberal” and “post-political” governance model (Carr and Hesse Reference Carr and Hesse2020). The novel concept of “urban data” – data that is collected in public spaces and treated as a type of “public asset” for sharing – also drew criticism.

Element AI (2019) further critiqued the proffered top-down governance model, centered on a private company acting as a data trust settlor. Element AI saw this approach as introducing power imbalances that otherwise would be avoided in the more bottom-up model, championed by Sylvie Delacroix and Neil Lawrence (Reference Delacroix and Lawrence2018), of users collectively pooling their own data. As a matter of process, Sidewalk Labs’ seemingly reactive alterations to the trusts-based governance models likely were also unhelpful to the public deliberations.

Anna Artyushina (Reference Artyushina2020) has conducted an in-depth analysis of the governance angles of the Sidewalk Labs project in Toronto. Her particular theoretical lens is a technoscientific/platform capitalism framework. Her argument is that the civic data trust model (later the urban data trust) proposed by Sidewalk Labs “appeals to and sustains a political-economic regime governed by the logic of rent-seeking, which aims to entrench the economic dominance of technological monopolies” (Artyushina Reference Artyushina2020, 1). She points to key factors such as the drive to collect and control user data, and resulting information asymmetries between platform companies and users (Artyushina Reference Artyushina2020, 19).

Artyushina’s conclusion is that Sidewalk Labs sought to treat its new governance model for the smart city environment as tantamount to the Web model already entrenched by platform companies, with a single entity controlling the market, the data, and the technology. As she puts it succinctly: “The real purpose of the Urban Data Trust was to assetize residents’ information and make it sufficiently easier for technology companies to access, reuse, and profit from the data” (Artyushina Reference Artyushina2020, 23).

Artyushina’s analysis exposes the real dangers of the approach that Sidewalk Labs chose to adopt. Whatever the company’s motivations, its top-down approach to devising and implementing the governance structure was bound to invite suspicion, perhaps well-placed. While the civic data trust concept was created as a way to avoid that kind of outcome, how such a governance model is actually established makes a significant difference. In brief, inclusive process matters. This perspective is consistent with the knowledge commons model, described further below.

Another Form of Trust: The Knowledge Commons

An especially intriguing governance option to consider is the knowledge commons. First championed by Elinor Ostrom, the commons was proposed as a means of governing shared natural resources (Frischmann, Madison, and Strandburg 2014). As other chapters in this book attest, some are already exploring viewing the city as a commons. The smart city environment could provide a test case for collecting and sharing personal and environmental data as a form of knowledge production.

Traditional economics offers some support for viewing data as a commons resource. While market mechanisms generally are the most efficient means for allocating rivalrous goods, scholars have found that traditional property rights could unnecessarily constrain beneficial sharing arrangements (Stavrides 2020). The nonrivalrous nature of most forms of data suggests it could be governed instead as a “commons” (New America 2019).

Importantly, a commons management strategy can be implemented in a variety of institutional forms (Frischmann Reference Frischmann2012, 8). Part of Elinor Ostrom’s genius was perceiving the commons as occupying a space between the two governance poles of government and market – what she labeled the “monocentric hierarchies” (Whitt Reference Whitt2013, 747–48). Her conception of “polycentric governance” by a like-minded community was intended to address the collective-action challenges stemming from a need to manage common pool resources (Whitt Reference Whitt2013, 747).

Data can be likened to other intangibles, such as ideas. New-growth economist Paul Romer found ideas to be both nonrivalrous (readily shared for reuse) and at least partially excludable (sharing can be limited) (Whitt and Schultze Reference Whitt and Schultze2007, 264–67). Data also can be said to constitute part of an “intellectual infrastructure” (Frischmann Reference Frischmann2012, 253–314). Frischmann notes the difficulty of applying infrastructure concepts to “the fluid, continuous, and dynamic nature of cultural intellectual systems” (Frischmann Reference Frischmann2012, 253). The related concept of a “knowledge commons” would govern the management and production of intellectual and cultural resources (Frischmann Reference Frischmann2012, 253).

Here, the institutional sharing of resources could occur among the members of a particular community (Hess 2012). The resources would be intellectual and cultural – including information, science, creative works, and even ideas. Many types of data management arrangements could also qualify as “knowledge” for these purposes.

The history of the commons, and of subsequent enclosures by political and commercial interests, may provide a suitable framing for ongoing debates about treating data as private property (or “enclosing” it). Scholars have explored various forms of opposition throughout history to market enclosures of shared resources, which sap the generative power of the commons (Bollier and Helfrich 2012). To some, data may represent the ultimate global enclosure opportunity, beyond the land and labor resources of the past.

Application of Theory: The GKC Framework

Earlier chapters in this book lay out the GKC framework in the specific context of smart cities. Sanfilippo and Frischmann, for example, render a proposal for what they term “intelligent governance” (Chapter 10 in this volume). They posit that such a proposal requires “comprehensive public knowledge,” derived in part from a series of provisional questions to ask throughout the smart city development, procurement, implementation, and management processes. The authors also challenge the supposed downside trade-offs of infeasibility and reduced innovation as part of instituting a GKC framework (Chapter 10 in this volume).

Using the prism of the GKC framework, Teresa Scassa (Reference Scassa2020) provided a thoughtful analysis of the Sidewalk Labs project. In analyzing the rationale for adopting the civic data trust in Quayside, Scassa posits that the combination of shared resources and collective governance is a good fit. Her article utilizes the four key elements of the knowledge commons framework: (1) the background environment and context, (2) the resources to be pooled, (3) the governance mechanism itself, and (4) the costs and benefits of the approach.

Among other findings, Scassa notes that the final chosen governance model of the urban data trust was developed in a top-down and reactive manner, “by a single stakeholder in a complex environment with multiple participants and diverse interests in the data” (Scassa 2020, 56). Further, the novel category of “urban data,” created by Sidewalk Labs to denote the pooled resource, was both unwieldy and uncertain. Scassa believes urban data was defined unhelpfully as a “combination of physical geography and uncertain notions of public and private space” (Scassa 2020, 56). She concludes that the knowledge commons concept is a useful and instructive one to consider in devising data governance models.

The Digital Fiduciary, Employing a Virtual Trust Layer

A final model to consider for smart cities governance is the digital fiduciary. Similar to a civic data trust, this entity would manage data flows within a community in ways that best represent the interests of its citizens. The primary difference is that a digital fiduciary need not be limited to a legal trust arrangement, but instead could govern itself through other types of accountability measures. These could include bilateral or multilateral contracts (including smart contracts on a blockchain), government procurement requirements, self-certifications, professional accreditation bodies, codes of conduct, and/or best practices (Whitt 2021a, 211–12). While a promising governance model, a number of open questions remain about the viability and scalability of such an approach (Whitt 2021c).

One way to facilitate the digital fiduciary within a smart cities context is to devise a virtual trust layer. As envisioned by the author’s company, Deeper Edge LLC, this “Trust as a Service” approach would incorporate separate but interrelated conceptual modules that collectively form its reference architecture (Deeper Edge 2021). In essence, this open source model entails mapping end user data streams to and from the cityscape environment, and assigning express duties to each mediating juncture point.

The interoperable modules of a virtual trust layer would include the following (a brief illustrative sketch follows the list):

  • Network stacks: where data packets travel through layers of information systems;

  • Data lifecycles: where data resides in servers, routers, algorithms, and applications;

  • Algorithmic tussle zones: recognizing external interfaces (screens and scenes) and intra-network mediation points (unseens) where competing interests “tussle for control” over data access; and

  • Duties: operationalizing applicable obligations, based on extant fiduciary/trusts/bailment laws (Deeper Edge 2021).
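To illustrate how such modules might be expressed in software, the sketch below models mediation points and their assigned duties as simple data structures. It is a hypothetical rendering only, not Deeper Edge’s actual reference architecture; all module names and duty mappings are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class Duty(Enum):
    # Hypothetical duties drawn loosely from fiduciary, trust, and bailment concepts.
    CARE = "duty of care"
    LOYALTY = "duty of loyalty"
    CONFIDENTIALITY = "duty of confidentiality"


@dataclass
class MediationPoint:
    """A juncture in the cityscape's data flows (a screen, scene, or unseen)."""
    name: str
    layer: str           # e.g. network stack layer or data lifecycle stage
    tussle_zone: bool    # True where competing interests contend for data access
    duties: list         # obligations the trust layer assigns at this point


# A toy mapping of end-user data streams to duties at each mediating juncture.
trust_layer = [
    MediationPoint("street-sensor-gateway", layer="network", tussle_zone=True,
                   duties=[Duty.CARE, Duty.CONFIDENTIALITY]),
    MediationPoint("analytics-cluster", layer="data-lifecycle", tussle_zone=True,
                   duties=[Duty.LOYALTY, Duty.CARE]),
    MediationPoint("city-dashboard-api", layer="application", tussle_zone=False,
                   duties=[Duty.CARE]),
]


def duties_at(point_name: str) -> list:
    """Look up the operationalized obligations at a given mediation point."""
    for point in trust_layer:
        if point.name == point_name:
            return [d.value for d in point.duties]
    return []


print(duties_at("street-sensor-gateway"))  # ['duty of care', 'duty of confidentiality']
```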

Importantly, as laid out in the fourth section, the digital fiduciary could operate on either side of the smart city platform – representing the digital community itself, and/or serving as an individualized agent of the local citizen (Whitt 2021c). In turn, the virtual trust layer could inform each entity’s internal decisions, as well as provide an agential form of connectivity linking the two sides (Deeper Edge 2021).

Early adoption of a form of virtual trust layer in a smart city context appears in the World Economic Forum’s ongoing partnership with Helsinki, Finland (WEF 2021a). The resulting WEF white paper adopts a similar conceptual blending of stacks, lifecycles, tussle zones, and duties, applying it as a “human-centric approach to data relationships” to benefit the citizens of Helsinki (WEF 2021b).

Bringing us full circle, Helsinki’s holistic, multilayered approach echoes the model of the ancient Greek agora, as exemplified in the founding of Thurii. For many Greek cities, the agora was far more than a marketplace – it was the center of civic life. People freely mingled and participated in all forms of social interaction, including commercial dealings, political and legal activities, and philosophic discourse. This blending of human engagement led to tremendous creativity, and a number of ideas and institutions that have stood the test of time (Whipps Reference Whipps2008). Perhaps the agora as human trust layer can provide another useful way to conceive of governing the blended public spaces of the digital community.

Designing Inclusive Interfaces
The DTPR Project and Personal Software Agents

In 2019, Sidewalk Labs publicly launched the DTPR project – “Digital Transparency in the Public Realm.” The DTPR team was tasked with creating icons and signage that would allow pedestrians to understand what kind of function was being employed by a particular environmental device (Sidewalk Labs 2018).

As the project heads acknowledged, cities like Boston and London “have already taken important first steps by posting clear signage whenever they employ digital technologies in the public realm” (Lu 2019). One early component proposed by the DTPR team, the “consent through signage” principle, used a comprehensive system of colorful symbols to inform citizens about data collection practices. Citizens then faced a decision: remain on the scene, which indicates consent, or withdraw consent by departing the scene (Artyushina 2020, 29). Needless to say, such a faux choice grants ordinary citizens little recourse: how can one gain the benefits of belonging to a digital community without giving up control over one’s personal data?

DTPR’s initial focus on transparency – informing pedestrians about the “what” of a sensor’s activity – shifted quickly to a second phase, devoted to engendering greater accountability for the underlying system’s actions (Sidewalk Labs 2020a). As part of this phase, the DTPR project team made a concerted outreach to designers and others to “advance digital transparency and enable agency.”

In the last few months before the Quayside project was terminated, the DTPR team went further still. Using co-design sessions, charrettes, small-group discussions, and prototyping, the team sought to investigate opportunities for actual human agency – in particular, direct human-to-interface interactions within the local sensor system (Sidewalk Labs 2020b). Intriguingly, prototypes for conversational chatbots and personal AIs were introduced, discussed, and tested for feasibility (Sidewalk Labs 2020c). As the team summarized:

The chatbot supports visual, auditory, and tactile modalities, makes it easy to find different kinds of information, provides links, schematics, or documentation, and can adapt to the user’s level of interest in detail … We asked charrette participants to imagine that five years in the future, they have a personal digital assistant provided by an organization they trust (such as a bank), that provides automated data/privacy information tailored to an individual’s preferences. We also shared the results from our GRIT [GRIT Toronto, a civic testing service] user tests on how research participants responded to that concept. We explored how that digital personal assistant, in the form of a chatbot, could provide answers about systems and places in a standardized manner, using the DTPR taxonomy. We wanted to see how this concept could encourage users to develop expectations around transparency and accountability of spaces, provide a flexible way for users to interact with a physical space and the digital technology within it, and adapt and learn as users asked new questions.

(Sidewalk Labs 2020b)

The DTPR team also shared the insights they gleaned from their research on the feasibility of personal digital assistants:

  • “Concept feedback sessions showed the desirability of a trusted digital assistant to help with daily tasks.”

  • “People want to ask questions at a time and context that is convenient to them, not be interrupted mid-flow.”

  • “Trust varies person by person, case by case; there is no ‘one size fits all’ approach.” (Sidewalk Labs 2020d)

The “agency” phase of the aborted DTPR project offers some fascinating prospects. If successfully pursued, creating these kinds of interactive, two-way IoT systems could open up real opportunities for humans to engage on their own terms as they go about their lives in digital communities.
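As a thought experiment, the sketch below shows how such a two-way interface might answer a pedestrian’s questions about a nearby device in a standardized manner. The taxonomy fields and chatbot logic are invented for illustration and are not taken from the DTPR prototypes themselves.

```python
# Hypothetical, simplified DTPR-style records describing devices in a public space.
SENSORS = {
    "crosswalk-cam-07": {
        "purpose": "pedestrian counting",
        "data_collected": "video, processed on the device into counts",
        "identifiability": "de-identified",
        "retention": "counts kept 30 days; no video stored",
        "accountable_party": "City Transportation Department",
    }
}


def chatbot_answer(sensor_id: str, question: str) -> str:
    """Answer a pedestrian's question about a device in a standardized manner."""
    record = SENSORS.get(sensor_id)
    if record is None:
        return "I have no record for that device."
    q = question.lower()
    if "who" in q:
        return f"{record['accountable_party']} is accountable for this device."
    if "how long" in q or "retention" in q:
        return f"Retention: {record['retention']}."
    if "what" in q:
        return (f"This device performs {record['purpose']}; it collects "
                f"{record['data_collected']} ({record['identifiability']}).")
    return "You can ask what it collects, who is accountable, or how long data is kept."


print(chatbot_answer("crosswalk-cam-07", "What does this sensor collect?"))
print(chatbot_answer("crosswalk-cam-07", "Who is accountable for it?"))
```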

New Edge-Outward Interfaces: Edgetech

Along with the governance institutions, we can in parallel put in place agential technologies, such as DTPR, that invite our participation rather than shunt it aside. IoT network gateways and applications refashioned to reflect more control by humans at the edge of the Web can be thought of as “edgetech” capabilities. These new interfaces essentially reverse the unilateral nature of the cloudtech interfaces that facilitate SEAMs control flows for government agencies and platform providers (Whitt 2021a). In essence, while cloudtech-based entities import personal data and export content and influence, edgetech-based entities export personal intentionality and import sought information.

The edgetech concept incorporates three elements: (1) a new edge-to-any/all (e2a) design principle, (2) end-user-facing modalities of data, computation, and interfaces that instantiate the new principle, and (3) one or both of “edge-pushing” and “edge-pulling” functionalities that empower end users (Whitt 2021a, 199–201). Each element is briefly described below.

The “Edge-to-Any” Design Principle

The initial step is to recognize the opportunity to conceptually reset the Web’s current power asymmetries through an entirely new design principle. This approach has the makings of an edge-based online environment.

Through the revolutionary design attributes of the end-to-end (e2e) principle, functional modularity, global interoperability, and IP as an agnostic bearer protocol, the Internet over several decades became a “network of networks.” Its unique decentralized, peer-to-peer configuration enabled participants to interact from “the edge” – symmetrically, equipotentially, with little need for intermediaries. The end-to-end principle in particular originally promised to put end users in charge of their online activities (Whitt 2013, 717–29).

As it turns out, however, first the client–server arrangement of Web 1.0 and then the multisided platforms ecosystems of Web 2.0 ended up reducing the ability of end users at the edge to control their digital selves. Instead, the cloud-based “end” of platforms operating on the other side of the connection exerted increasing control over Web-based interactions. Ordinary people had fewer means to limit unwanted access to data. Beyond even the notion of basic control, individuals gradually lost the ability to engage in mutual value exchange with peers and partners, and otherwise assert their full human rights in digital form.

Where the original Internet architecture included the then-revolutionary concept of the e2e principle, the notion here is to deploy as a Web overlay a new edge-to-any/all principle. Much as the e2e principle first established at least the possibility of connecting true peers, an e2a principle would be instantiated in technologies that deliberately shift power from the Web and its platform overlays to ordinary “end users” at the network’s edge. As a result, the computational core of clouds and algorithms would give way to more distributed networking and decentralized applications. One can think of adopting this principle as a means of reversing the current cloud-centric SEAMs data flows on the Web (Whitt Reference Whitt2021a, 199–201).

Multiple Edge-Based Modalities

Systems designers utilizing an edge-outward design principle like e2a can change the current one-sided dynamic of the Web. The opportunity is twofold: (1) modifying existing interfaces so that the human being has a viable means of engaging directly with computational systems and (2) designing new interfaces to maximize the human’s ability to shape their own “user” experiences. The emphasis should be on interfaces that promote autonomy (freedom of thought) and agency (freedom of action) (Whitt 2021a, 153–63).

The e2a design principle can be instantiated in any type of digital technology that grants the “end user” more control over their engagement with Web-based systems. These can include the algorithmic element, decentralized Web platforms, interfaces, and of course the data itself (Whitt 2021a, 199). For example (a brief sketch follows this list):

  • A personal AI acts on behalf of the individual in interactions with institutional AIs.

  • A personal data pod effectively stores the individual’s data and information in a localized (non-cloud) environment, complete with end-to-end encryption.

  • An identity layer provides a one-way screen to shape what information about an individual is provided online, and to curtail the unwanted incursions of third-party agents.

  • A blockchain-based non-fungible token (NFT) encapsulates data in ways that make it far easier for individuals to create, manage, and share access on their own terms.
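The sketch below illustrates one of these modalities, a personal data pod with consent-gated access, in simplified form. It is a minimal illustration only; real pods would add end-to-end encryption, authentication, and richer policy handling, all of which are elided here.

```python
from typing import Optional


class PersonalDataPod:
    """Toy local store: the individual grants or denies access per requester and field."""

    def __init__(self) -> None:
        self._records = {}   # locally held data (assume encrypted at rest)
        self._grants = {}    # (requester, field) -> allowed?

    def store(self, field: str, value: str) -> None:
        self._records[field] = value

    def grant(self, requester: str, field: str, allowed: bool) -> None:
        self._grants[(requester, field)] = allowed

    def read(self, requester: str, field: str) -> Optional[str]:
        # Data flows outward only where the individual has said yes; default is deny.
        if self._grants.get((requester, field)):
            return self._records.get(field)
        return None


pod = PersonalDataPod()
pod.store("home_city", "Toronto")
pod.grant("transit-planner", "home_city", allowed=True)
print(pod.read("transit-planner", "home_city"))  # Toronto
print(pod.read("ad-network", "home_city"))       # None: no grant, no data
```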

Edge-Push and Edge-Pull Functions

Various edgetech modalities, operating under the e2a design principle, can empower the individual. Broadly, two types of functions are enabled (Whitt 2021a, 200):

  • “Edge-pull” configurations allow the individual to bring the Web’s computational systems and other resources directly to them. One example is creating one’s own news feeds from disparate sources; another is directing credit-scoring companies to access (but not acquire) personal data that resides locally.

  • “Edge-push” configurations allow the individual to send their own commands and requests to designated sites on the Web. Examples include broadcasting one’s own terms of service, and operating one’s own virtual shopping cart.

Each of these two functions has its notable champion (Whitt 2021a, 200–01). The OPAL (open algorithms) project launched by Sandy Pentland and others at MIT enables edge-pull functionality, by “pulling” the Web’s computation to the personal data – rather than the other way around (OPAL Project 2021). One early company moving forward with a business model premised on OPAL’s edge-pull functionality is FortifID. The company’s website indicates that its platform is “designed to reduce the raw data footprint across a company’s ecosystem,” because the “algorithms travel to the data and produce insights that are shipped back for use instead of raw data” (FortifID 2021).
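The edge-pull pattern can be sketched in a few lines: the question travels to the locally held data, and only the resulting insight travels back. The code below is a simplified illustration of that shape, not OPAL’s or FortifID’s actual implementation; the data and query names are hypothetical.

```python
# Personal data stays local; only a vetted query (the "algorithm") travels to it.
LOCAL_TRANSACTIONS = [120.50, 48.00, 9.99, 310.25, 75.10]  # resides on the user's device

APPROVED_QUERIES = {
    # A requester may learn an aggregate insight, never the raw records.
    "average_spend": lambda records: sum(records) / len(records),
    "count_over_100": lambda records: sum(1 for r in records if r > 100),
}


def run_edge_pull(query_name: str) -> float:
    """Execute an approved query against local data and return only the insight."""
    query = APPROVED_QUERIES.get(query_name)
    if query is None:
        raise PermissionError("Query not approved by the data holder.")
    return query(LOCAL_TRANSACTIONS)


# A credit scorer receives an insight, not the underlying transaction history.
print(run_edge_pull("count_over_100"))  # 2
```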

The VRM (Vendor Relationship Management) project launched by Doc Searls at Harvard University is a well-known leader in edge-push thinking (Project VRM 2021). As one example, Searls has explained how each of us should want to be the first party in a relationship with the operators of websites and apps (the primary and active instigator), rather than the second party (the passive recipient) (Searls 2018). In the digital community context, the entity operating on the other side of the interface could be required to accept our terms of service, abide by our privacy policy, and consent to our preferred ways of interacting. Searls uses the term “intentcasting” to describe this new dynamic. In proffering edge-push requests, an active first-party role allows us to engage in a true conversation – to question, object, negotiate, and ideally reach a mutual agreement.
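A rough sketch of edge-push “intentcasting” follows: the individual’s agent proffers its own terms to a site and proceeds only if they are accepted. It is illustrative only, not drawn from Project VRM’s work, and the policy fields are invented.

```python
# The individual's own terms of service, broadcast as the first-party offer.
MY_TERMS = {
    "tracking": "none",
    "data_retention_days": 0,
    "purpose": "complete this purchase only",
}


def intentcast(site_policy: dict, my_terms: dict) -> bool:
    """Proceed only if the site accepts every one of the individual's terms."""
    return all(site_policy.get(key) == value for key, value in my_terms.items())


site_a = {"tracking": "none", "data_retention_days": 0, "purpose": "complete this purchase only"}
site_b = {"tracking": "cross-site", "data_retention_days": 365, "purpose": "advertising"}

print(intentcast(site_a, MY_TERMS))  # True: mutual agreement reached
print(intentcast(site_b, MY_TERMS))  # False: decline and look elsewhere
```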

With both edge-push and edge-pull functionality, the current Web client–server paradigm is flipped on its head. Among other benefits, by utilizing the appropriate online interfaces, an individual can set their own identity screen to establish protective virtual boundaries.

In the digital community context, e2a design principles would enable the individual to project themself into the physical platform, opening up new points of bilateral interaction and negotiation. A healthy mix of edge-pull and edge-push interfaces then would create “mini” positive feedback loops between the individual and the platform. System designers know that such positive feedback loops have a highly agential impact: “to perturb systems to change” (Lidwell, Holden, and Butler Reference Lidwell, Holden and Butler2003, 92–93). And in the process, the person on the physical scene can define and operate their own two-way “UX.”

Complementary Roles for Digital Fiduciaries and Personal AIs

A digital community could be devised so that a citizen’s digital agent would be able to interact directly, on their behalf, with the community’s computational systems. In the case of Sidewalk Labs Toronto, these interactions could have been facilitated through the very chatbots and personal AIs that were being explored in parallel via the project’s DTPR process. The back-end of trust governance could have benefited from more fruitful connections with the front-end of sensor interface technologies.

In essence, each smart city or other digital community is its own website, social media platform, or mobile application. As with these better-known digital experiences, the digital community is in a position to adopt and apply its own terms of service, its own privacy policy, its own data protection practices, and its own authorized use policy. As with the World Wide Web, this panoply of overlapping and likely inconsistent policies would overwhelm most typical participants. In effect, the cognitive overload of the Web would become extended and embedded in the physical spaces all around us. At present, there is no obvious recourse for dealing with this pervasive problem.

Having one’s own personal agent, such as a digital fiduciary, could help the average person to cope with, and even manage, this brave new world of digital communities (Whitt Reference Whitt2020c). As an individual goes about their daily activities in their local city, a personal digital fiduciary can provide the means of interacting in real time with the digital community – including the civic data trust, and other entities that the individual may encounter in their travels. These interactions would be enabled via the software interfaces embedded all around (Figure 9.4).

Figure 9.4. Digital fiduciary and data trusts, GLIA Foundation

As the DTPR project team recognized, a personal AI or other virtual agent could be an important complementary tool for utilizing one’s edgetech applications (Sidewalk Labs 2020b, 2020c, 2020d). The personal AI could provide forms of “digital pushback” to challenge a digital community’s existing SEAMs cycles (a brief sketch follows this list) by:

  • blocking the automatic “surveillance” and “extraction” modes;

  • disrupting consent-less operation of the community’s “analysis” mode; and

  • thwarting attempts to “manipulate” the individual’s autonomy in their physical environment.
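The sketch below suggests how such pushback might be expressed as a simple policy held by the personal AI. The request types and responses are wholly hypothetical, and a real deployment would depend on the community’s systems being obligated to honor them.

```python
# Hypothetical requests a digital community's systems might send to a personal AI.
POLICY = {
    "surveillance": "block",        # refuse automatic observation
    "extraction": "block",          # refuse bulk export of personal data
    "analysis": "require_consent",  # allow only with explicit, current consent
    "manipulation": "block",        # refuse behavioral nudging outright
}


def respond(request_type: str, consent_given: bool = False) -> str:
    """Return the personal AI's answer to a SEAMs-style request."""
    rule = POLICY.get(request_type, "block")  # default deny for unknown requests
    if rule == "block":
        return "denied"
    if rule == "require_consent":
        return "allowed" if consent_given else "denied: consent not given"
    return "allowed"


print(respond("analysis"))                      # denied: consent not given
print(respond("analysis", consent_given=True))  # allowed
print(respond("manipulation"))                  # denied
```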

A Likely Role for Government

A more inclusive and symmetrical interface is only as good as the interoperability behind it – the two-way means of interacting with other underlying networks. Interop constitutes the somewhat unfashionable network plumbing of software standards and protocols. As one example, for a personal AI to “talk” directly with an institutional AI, there must be an accepted means of communication, and an agreement to act upon it.

The basic interop fabric is already there to support robust two-way interfaces. After all, the Internet is a splendid example of an interconnected “network of networks.” Symmetrical interfaces using the e2a design principle can mirror that same peer-to-peer architecture: my system speaking on equal terms with your system, in a reciprocal manner. What would change is the current overlay of unidirectional interfaces leading to tightly controlled platforms.

Voluntary agreement on the operative protocols and standards would be optimal. However, there may well be a role for governments to play in smoothing the path toward such agreement. Regulators could introduce a mix of tailored market inputs and incentives that would open up portions of underlying platform resources. These might include system-to-system interconnection, robust interoperability (at the different layers of data, computation, identity, and mixed/augmented reality), and data delegability and mobility (from platforms to selected mediators) (Whitt 2018, 45–65). Many of these “functional openness” concepts – such as network interconnection, services interoperability, and resource portability – are rooted in regulatory policies developed in the 1980s and 1990s by the US Federal Communications Commission (FCC) and other national regulators as a way to facilitate more competitive communications services markets (Whitt 2018, 45–65).

Some in the US Congress have not overlooked this particular option. As a salient example, the proposed “ACCESS Act” incorporates key functional openness measures aimed at large platform companies (Warner Reference Warner2019). Introduced in the US Senate in October 2019, the bill encompasses two agency-bolstering elements. First, the bill would require the platforms to provide interoperability and data portability, via transparent and accessible interfaces suited for both users and third parties. Second, the bill would allow users to delegate their digital rights to “third party custodians,” operating under a duty of care (Warner Reference Warner2019).

That right of delegation could well prove crucial to enabling individuals and communities to fully exercise whatever statutory rights are granted to them (Whitt 2021b). Indeed, Nobel Prize-winning economist Paul Romer observed in his statement supporting the original 2019 version of the bill: “By giving consumers the ability to delegate decisions to organizations working on their behalf, the ACCESS Act gives consumers some hope that they can understand what they are giving up and getting in the opaque world that the tech firms have created” (Warner 2019). As noted, such asymmetrical opacity is even more pervasive and insidious in the smart cities context.

Conclusion: Adapting Ancient Lessons for Modern Communities

“Places matter!”

(Fleming 2009, 32)

Against a backdrop of widespread governance failures worldwide in our economic, political, and social systems, the near-term opportunity is apparent (Whitt Reference Whitt2020a). As the city of Thurii attempted some 2,500 years ago, today we can craft governance structures and spatial processes that work together to ensure inclusive and supportive physical environments for real people.

Our digital communities should embrace the active participation of citizens and visitors alike in the increasingly blended spaces that constitute the self and the world, the private and the public, and the physical and the virtual. Insights gleaned from the GKC framework, fiduciary-based governance models, and technologies utilizing edge-to-any/all design principles can form a powerfully inclusive combination. And the digital fiduciary and personal AI together could provide a complementary means of helping ensure that ordinary people can readily explore and participate in the brave new world of their digitally equipped communities.

Footnotes

7 Technofuturism in Play Privacy, Surveillance, and Innovation at Walt Disney World

1 Richtel, Matt. 2006. "Selling Surveillance to Anxious Parents." The New York Times, May 3. www.charterequityresearch.com/Portals/0/Press/2006-05-03_surveillance_nyt_richtel.pdf.

2 Walt Disney World. 2019. Carousel of Progress. Walt Disney World. https://disneyworld.disney.go.com/attractions/magic-kingdom/walt-disney-carousel-of-progress/&lang=en/.

3 Vitarelli, W., dir. 1966. E.P.C.O.T Film. Walt Disney Productions.

4 Kerr, M. 2018. “Behind Security at Disney World: What You Don’t Know about the Happiest Place on Earth.” Showbiz Cheat Sheet, January 16. www.cheatsheet.com/culture/behind-security-at-disney-world-what-you-dont-know-about-the-happiest-place-on-earth.html/.

Niles, R. 2010. “Theme Park Cast Member Stories: The One with the Security Camera.” Theme Park Insider, September 13. www.themeparkinsider.com/flume/201009/2099/.

5 Walt Disney World Resort. 2018. My Disney Experience – Frequently Asked Questions: Privacy Policy FAQ. https://disneyworld.disney.go.com/faq/my-disney-experience/my-magic-plus-privacy/.

6 Disis, J. 2016. “Disney World Scanning Toddlers’ Fingers to Stop Ticket Fraud.” CNNMoney, September 7. https://money.cnn.com/2016/09/07/news/companies/disney-world-finger-scan/index.html.

7 KYW Newsradio. 2017. “Disney World Collects Your Fingerprint Data: Who Has Access?” Philadelphia Breaking News, Today’s News. https://kywnewsradio.radio.com/articles/news/disney-world-collects-your-fingerprint-data-who-has-access.

8 JC Penney. 2015. “JC Penney Announces Collaboration with Disney.” February 11. https://ir.jcpenney.com/news-events/press-releases/detail/446/jcpenney-announces-its-collaboration-in-promoting-disneys.

9 Walt Disney World. 2017. Minnie Van™ Service Connected by Lyft. https://disneyworld.disney.go.com/guest-services/minnie-van-service/.

10 Walt Disney World. 2016. “Safety and Security Policy.”

11 Walt Disney World. 2019. “Security Measures at Walt Disney World Resort.” https://disneyworld.disney.go.com/faq/parks/security-procedures/?int_cmp=INS-intWDWtoWDW-Security.

12 While Disney does not store the fingerprints themselves for any period of time, it does create a unique hash of them, following a protocol from the FBI, which is stored for the duration of the ticket or pass (1 to 365 days) and for up to 30 days afterward. This allows the data to be produced if law enforcement subpoenas it or obtains another court order compelling Disney to share that information. Further discussions are available at https://news.ycombinator.com/item?id=16581019 and https://allears.net/walt-disney-world/wdw-planning/finger-scans-for-park-passes/.

13 Clark Estes, A. 2017. “How I Let Disney Track My Every Move.” Gizmodo, March 28. https://gizmodo.com/how-i-let-disney-track-my-every-move-1792875386.

14 DISboards.com. n.d. “Sensor Discussion.” The DIS Disney Discussion Forums. www.disboards.com/threads/just-how-much-are-the-magic-bands-tracking.3247761/#post-50926900.

15 Hertzfeld, E. 2018. “Why Mobile Key Is Taking Over in Hotels.” Hotel Management, December 13. www.hotelmanagement.net/tech/why-mobile-key-taking-over-hotels.

16 Knoll, C. 2019. “When a Phone App Opens Your Apartment Door, but You Just Want a Key.” The New York Times, March 23. www.nytimes.com/2019/03/23/nyregion/keyless-apartment-entry-nyc.html.

Porter, J. 2019. “NYC Tenants Successfully Argue for Right to a Physical Key over a Smart Lock.” The Verge, May 10. www.theverge.com/2019/5/10/18564322/new-york-city-apartment-smart-lock-physical-key-judge-lawsuit-latch.

17 Golden, H. 2020. “‘Just Walk Out’: Amazon Debuts Its First Supermarket with No Checkout Lines.” The Guardian, February 27. www.theguardian.com/us-news/2020/feb/25/amazon-go-grocery-supermarket-seattle-technology.

18 Warburton, M. 2019. “Alphabet Features Self-Driving Garbage Cans, Apartment Noise Monitors in Toronto Smart City Project.” Reuters, November 15. www.reuters.com/article/us-canada-sidewalk/alphabet-features-self-driving-garbage-cans-apartment-noise-monitors-in-toronto-smart-city-project-idUSKBN1XP1UX?il=0.

19 Budds, D. 2019. “A New Report Outlines Privacy Risks for the MTA’s Contactless Payment System.” Curbed NY, October 3. https://ny.curbed.com/2019/10/3/20895736/mta-omny-privacy-surveillance-report.

20 Vora, S. 2017. “Airlines Try Biometric Identification for Boarding and Bags.” The New York Times, July 7. www.nytimes.com/2017/07/07/travel/airline-passenger-trials-biometric-identification.html?searchResultPosition=6.

8 Can a Smart City Exist as Commons? The Case of Automated Governance in Sidewalk Toronto

1 Sidewalk Toronto was redubbed Quayside, with the former being a project name for the partnership between Sidewalk Labs and Waterfront Toronto and the latter being a local toponym for the 12-acre swath of land at the foot of Parliament Street in Toronto, where the smart city was planned to be built.

2 As of 2022, the mandate of the Ontario Health Data Platform, which operated under the Ontario Emergency Act, is over.

3 Here and throughout the text, the term “automated governance” refers to governance by algorithms. To avoid confusion, the term “data governance policies” refers to the governance of algorithms.

4 The data-driven tool is currently sold by Replica, Sidewalk Labs’ spin-off.

5 See further discussion about the concept of “algorithmic governance” in O’Reilly (2013) and Gorwa, Binns, and Katzenbach (2020).

6 As Sidewalk Labs has removed the affordable housing proposal from the company’s website, please refer to the media coverage of the document (e.g., Vincent 2018; O’Kane and Bozikovic 2018).

9 From Thurii to Quayside: Creating Inclusive Blended Spaces in Digital Communities

References

Andrews, Sheldon, Huerta, Ivan, Komura, Taku, Sigal, Leonid, and Mitchell, Kenny. 2016. “Real-Time Physics-Based Motion Capture with Sparse Sensors.” In Proceedings of the 13th European Conference on Visual Media Production (CVMP 2016), 1–10. London: ACM.
Bowers, Alan. 2019. “FUTURE WORLD(S): A Critique of Disney’s EPCOT and Creating a Futuristic Curriculum.” PhD dissertation. Georgia Southern University.
Coleman, Roy, and Sim, Joe. 1998. “From the Dockyards to the Disney Store: Surveillance, Risk and Security in Liverpool City Centre.” International Review of Law, Computers & Technology 12 (1): 27–45.
Cotter, Bill, and Young, Bill. 2004. The 1964–1965 New York World’s Fair. Charleston, SC: Arcadia Publishing.
Disney, Walt. 1966. “The EPCOT Film.” Walt Disney Pictures.
Frischmann, Brett, and Selinger, Evan. 2018. Re-engineering Humanity. Cambridge: Cambridge University Press.
Frischmann, Brett M., Madison, Michael J., and Strandburg, Katherine J., eds. 2014. Governing Knowledge Commons. Oxford: Oxford University Press.
Goldfarb, Avi, and Tucker, Catherine. 2012. “Privacy and Innovation.” Innovation Policy and the Economy 12 (1): 65–90.
Hasan, Azhar, Zhou, Chenming, and Griffin, Joshua D. 2011. “Experimental Demonstration of Transmit Diversity for Passive Backscatter RFID Systems.” In 2011 IEEE International Conference on RFID-Technologies and Applications, 544–48. New York: IEEE. https://doi.org/10.1109/RFID-TA.2011.6068598.
Hollinshead, Keith. 1999. “Surveillance of the Worlds of Tourism: Foucault and the Eye-of-Power.” Tourism Management 20 (1): 7–23.
Jain, Anil K., and Nandakumar, Karthik. 2012. “Biometric Authentication: System Security and User Privacy.” Computer 45 (11): 87–92.
Kling, Rob, and Lamb, Roberta. 1997. “Bits of Cities: Utopian Visions and Social Power in Placed-Based and Electronic Communities.” In Urban Powers and Utopias in the World, edited by Eveno, Emmanuel, 96–102. Toulouse: Presses Universitaires du Mirail, in the series “Villes et territoires” (Towns & Territories).
Knight, Cher Krause. 2014. Power and Paradise in Walt Disney’s World. Gainesville: University Press of Florida.
Martin, Daniela. 2019. “EPCOT Theme Park as a Science Communication Space: The Test Track Case.” Journal of Science Communication 18 (4): A09.
Matusitz, Jonathan, and Palermo, Lauren. 2014. “The Disneyfication of the World: A Globalisation Perspective.” Journal of Organisational Transformation & Social Change 11 (2): 91–107.
Mosco, Vincent. 2019. “City of Technology: Where the Streets Are Paved with Data.” In The Smart City in a Digital World. Bingley: Emerald Publishing.
Nissenbaum, Helen. 2009. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford, CA: Stanford University Press.
Project on Disney. 1995. Inside the Mouse: Work and Play at Disney World. Durham, NC: Duke University Press.
Roberts, Chris M. 2006. “Radio Frequency Identification (RFID).” Computers & Security 25 (1): 18–26.
Roland, Alex, Shiman, Philip, and Aspray, William. 2002. Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993. Cambridge, MA: MIT Press.
Sanfilippo, Madelyn, Frischmann, Brett, and Strandburg, Katherine. 2018. “Privacy as Commons: Case Evaluation through the Governing Knowledge Commons Framework.” Journal of Information Policy 8: 116–66.
Sanfilippo, Madelyn Rose, and Shvartzshnaider, Yan. 2021. “Data and Privacy in a Quasi-Public Space: Disney World as a Smart City.” In International Conference on Information, 235–50. Cham: Springer.
Shearing, Clifford, and Stenning, Philip. 1985. “From the Panopticon to Disney World: The Development of Discipline.” In Perspectives in Criminal Law: Essays in Honour of John Ll. J. Edwards, 335–49. Toronto: Canada Law Book.
Shearing, Clifford D., and Stenning, Philip C. 1987. “Say ‘Cheese!’: The Disney Order That Is Not so Mickey Mouse.” Private Policing 23: 317–23.
Shvartzshnaider, Yan, Apthorpe, Noah, Feamster, Nick, and Nissenbaum, Helen. 2019. “Going against the (Appropriate) Flow: A Contextual Integrity Approach to Privacy Policy Analysis.” Proceedings of the AAAI Conference on Human Computation and Crowdsourcing 7 (1): 162–70. https://ojs.aaai.org/index.php/HCOMP/article/view/5266.
Slyper, Ronit, Poupyrev, Ivan, and Hodgins, Jessica. 2010. “Sensing through Structure: Designing Soft Silicone Sensors.” In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction, 213–20. New York: Association for Computing Machinery (ACM).
Smoodin, Eric. 1994. “The Glamour Factory.” Film Quarterly 47 (4): 42.
Smoodin, Eric, ed. 2013. Disney Discourse: Producing the Magic Kingdom. London: Routledge.
Solove, Daniel J. 2005. “A Taxonomy of Privacy.” University of Pennsylvania Law Review 154: 477–564.
Souther, J. Mark. 2007. “The Disneyfication of New Orleans: The French Quarter as Facade in a Divided City.” Journal of American History 94 (3): 804–11.
Stone, Kaitlyn. 2017. “Enter the World of Yesterday, Tomorrow and Fantasy: Walt Disney World’s Creation and Its Implications on Privacy Rights under the MagicBand System.” Journal of High Technology Law 18: 198–238.
Warren, Stacy. 1994. “Disneyfication of the Metropolis: Popular Resistance in Seattle.” Journal of Urban Affairs 16 (2): 89–107.
Wills, John. 2017. Disney Culture. New Brunswick, NJ: Rutgers University Press.
Wylie, Bianca. 2018. “Searching for the Smart City’s Democratic Future.” Centre for International Governance Innovation, August 13. www.cigionline.org/articles/searching-smart-citys-democratic-future/.

References

Alizadeh, Tooran, Helderop, Edward, and Grubesic, Tony. 2020. “There Is No Such Thing as Free Infrastructure: Google Fiber.” In How to Run a City Like Amazon and Other Fables, edited by Graham, Mark, Kitchin, Rob, Mattern, Shannon, and Shaw, Joe. Manchester: Meatspace Press.
Artyushina, Anna. 2020a. “Is Civic Data Governance the Key to Democratic Smart Cities? The Role of the Urban Data Trust in Sidewalk Toronto.” Telematics and Informatics 55 (2): 101456. https://doi.org/10.1016/j.tele.2020.101456.
Artyushina, Anna. 2020b. “The EU Is Launching a Market for Personal Data. Here’s What That Means for Privacy.” MIT Technology Review, August 11. www.technologyreview.com/2020/08/11/1006555/eu-data-trust-trusts-project-privacy-policy-opinion/.
Artyushina, Anna. 2021. “The Future of Data Trusts and the Global Race to Dominate AI.” Bennett Institute for Public Policy at the University of Cambridge, June 10. www.bennettinstitute.cam.ac.uk/blog/data-trusts1/.
Balsillie, Jim. 2018. “Sidewalk Toronto Has Only One Beneficiary, and It Is Not Toronto.” The Globe and Mail, October 5.
Barns, Sarah, Cosgrave, Ellie, Acuto, Michele, and McNeill, Donald. 2017. “Digital Infrastructures and Urban Governance.” Urban Policy and Research 35 (1): 20–31.
Birch, Kean, and Muniesa, Fabian, eds. 2020. Assetization: Turning Things into Assets in Technoscientific Capitalism. Cambridge, MA: MIT Press.
Birch, Kean, Chiappetta, Margaret, and Artyushina, Anna. 2020. “The Problem of Innovation in Technoscientific Capitalism: Data Rentiership and the Policy Implications of Turning Personal Digital Data into a Private Asset.” Policy Studies 41 (5): 468–87.
Bozikovic, Alex. 2017. “Google’s Sidewalk Labs Signs Deal for ‘Smart City’ Makeover of Toronto’s Waterfront.” The Globe and Mail, October 17. www.theglobeandmail.com/news/toronto/google-sidewalk-toronto-waterfront/article36612387/.
Bowden, Nick. 2018. “Introducing Replica, a Next-Generation Urban Planning Tool.” Sidewalk Labs, April 6. www.sidewalklabs.com/blog/introducing-replica-a-next-generation-urban-planning-tool/.
Cardoso, Tom, and O’Kane, Josh. 2019. “Sidewalk Labs Document Reveals Company’s Early Vision for Data Collection, Tax Powers, Criminal Justice.” The Globe and Mail, October 30. www.theglobeandmail.com/business/article-sidewalk-labs-document-reveals-companys-early-plans-for-data/.
Cardullo, Paolo, and Kitchin, Rob. 2019. “Being a ‘Citizen’ in the Smart City: Up and Down the Scaffold of Smart Citizen Participation in Dublin, Ireland.” GeoJournal 84: 1–13. https://mural.maynoothuniversity.ie/14648/1/RK-Citizen-2019.pdf.
Cardullo, Paolo, Di Feliciantonio, Cesare, and Kitchin, Rob. 2021. The Right to the Smart City. Bingley: Emerald Publishing.
Carr, Constance, and Hesse, Markus. 2020. “When Alphabet Inc. Plans Toronto’s Waterfront: New Post-Political Modes of Urban Governance.” Urban Planning 5 (1): 69–83.
Coulter, Martin. 2021. “Alphabet’s Sidewalk Labs Has Abandoned Another US Smart City Project after Reported Fights about Transparency.” Insider, February 24. www.businessinsider.com/second-sidewalk-labs-smart-city-project-shutters-portland-oregon-2021-2.
Crawford, Sue E. S., and Ostrom, Elinor. 1995. “A Grammar of Institutions.” American Political Science Review 89 (3): 582–600.
Crombie, David. 2020. “Province Refuses to Kill Controversial Legislation in Wake of Greenbelt Council Resignations.” CBC, December 7. www.cbc.ca/news/canada/toronto/ontario-greenbelt-latest-1.5830891.
Fewer, David. “The Price of Trust? An Analysis of Emerging Digital Stewardship Models.” The Samuelson–Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC). www.priv.gc.ca/en/opc-actions-and-decisions/research/funding-for-privacy-research-and-knowledge-translation/completed-contributions-program-projects/2019-2020/p_2019-20_10/.
Frischmann, Brett M. 2012. Infrastructure: The Social Value of Shared Resources. Oxford: Oxford University Press.
Frischmann, Brett M., and Selinger, Evan. 2018. Re-engineering Humanity. Cambridge: Cambridge University Press.
Frischmann, Brett M., Madison, Michael J., and Strandburg, Katherine J., eds. 2014. Governing Knowledge Commons. Oxford: Oxford University Press.
Gabrys, Jennifer. 2014. “Programming Environments: Environmentality and Citizen Sensing in the Smart City.” Environment and Planning D: Society and Space 32 (1): 30–48.
Goodman, Ellen, and Powles, Julia. 2019. “Urbanism under Google: Lessons from Sidewalk Toronto.” Fordham Law Review 88 (2): 457–98.
Gorwa, Robert, Binns, Reuben, and Katzenbach, Christian. 2020. “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance.” Big Data & Society 7 (1): 2053951719897945.
Government of Canada. 2020. “Proposals to Modernize the Personal Information Protection and Electronic Documents Act.” PIPEDA. www.ic.gc.ca/eic/site/062.nsf/eng/h_00107.html.
Government of Ontario. 2018. Annual Report. Office of the Auditor General of Ontario. www.auditor.on.ca/en/content/annualreports/arreports/en18/v1_315en18.pdf.
Government of Ontario. 2019. Ontario’s Building Code. https://www.ontario.ca/page/ontarios-building-code.
Government of Ontario. 2020. Ontario Health Data Platform (OHDP). https://ohdp.ca/overview/.
Hollands, Robert G. 2015. “Critical Interventions into the Corporate Smart City.” Cambridge Journal of Regions, Economy and Society 8 (1): 61–77.
Hook, Leslie. 2017. “Alphabet to Build Futuristic City in Toronto.” Financial Times, October 17.
House of Commons of Canada. 2019. “Standing Committee on Access to Information, Privacy and Ethics.” Evidence of meeting, April 2. https://openparliament.ca/committees/ethics/42-1/141/dan-doctoroff-1/.
Information and Privacy Commissioner of Ontario. 2019. “Re: Sidewalk Labs’ Proposal.” www.ipc.on.ca/wp-content/uploads/2019/09/2019-09-24-ltr-stephen-diamond-waterfront_toronto-residewalk-proposal.pdf.
Katzenbach, Christian, and Ulbricht, Lena. 2019. “Algorithmic Governance.” Internet Policy Review 8 (4): 1–18.
Keesmaat, Jennifer. 2019. “Sidewalk’s Affordable Housing Isn’t Really Affordable.” Toronto Life, September 4.
Kitchin, Rob. 2021. “Decentering the Smart City.” The Programmable City Working Paper 45, January 26. https://progcity.maynoothuniversity.ie/wp-content/uploads/2021/01/PC-45-Decentring-the-smart-city.pdf.
Lorinc, John. 2019. “Sidewalk Labs and the Problem of Smart City Governance.” Spacing Toronto, June 25. http://spacing.ca/toronto/2019/06/25/lorinc-sidewalk-labs-and-the-problem-of-smart-city-governance/.
Lysyk, B. 2018. Auditor General Annual Report, Province of Ontario. www.auditor.on.ca/en/content/annualreports/arreports/en18/v1_315en18.pdf.
Madison, Michael. 2020. “Tools for Data Governance.” Technology and Regulation 2020: 29–43.
Morozov, Evgeny. 2017. “Google’s Plan to Revolutionize Cities Is a Takeover in All but Name.” The Guardian, October 22.
Muzaffar, Saadia. 2018. “My Full Resignation Letter from Waterfront Toronto’s Digital Strategy Advisory Panel.” Medium, October 8.
O’Kane, Josh. 2018. “Inside the Mysteries and Missteps of Toronto’s Smart-City Dream.” The Globe and Mail, May 17.
O’Kane, Josh. 2019. “Waterfront Toronto Moving Forward on Sidewalk Labs’s Smart City, but with Limits on Scale, Data Collection.” The Globe and Mail, October 31.
O’Kane, Josh, and Bozikovic, Alex. 2018. “Sidewalk Labs Taking Steps to Control Intellectual Property on Toronto’s ‘Smart City,’ Document Shows.” The Globe and Mail, August 31.
O’Reilly, Tim. 2013. “Open Data and Algorithmic Regulation.” Beyond Transparency: Open Data and the Future of Civic Innovation 21: 289–300.
O’Shea, Sean. 2018. “Ann Cavoukian, Former Ontario Privacy Commissioner, Resigns from Sidewalk Labs.” Global News, October 21.
Ostrom, Elinor. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge: Cambridge University Press.
Rider, David. 2019. “Critics Are Calling Sidewalk Labs’ Proposal to Create New Oversight Agencies a Power Grab.” Toronto Star, June 30. www.thestar.com/news/city_hall/2019/06/30/critics-are-calling-sidewalk-labs-proposal-to-create-new-oversight-agencies-a-power-grab.html.
Robinson, Pamela. 2019. “How Does Government Fit into Smart City Plans?” Spacing Toronto, June 26. http://spacing.ca/toronto/2019/06/26/robinson-how-does-government-fit-into-smart-city-plans/.
Saldaña, Johnny. 2015. The Coding Manual for Qualitative Researchers. London: Sage Publications.
Sanfilippo, Madelyn Rose, Frischmann, Brett M., and Strandburg, Katherine J., eds. 2021. Governing Privacy in Knowledge Commons. Cambridge: Cambridge University Press.
Sanfilippo, Madelyn Rose, and Shvartzshnaider, Yan. 2021. “Data and Privacy in a Quasi-Public Space: Disney World as a Smart City.” In International Conference on Information, 235–50. Cham: Springer.
Scassa, Teresa. 2020. “Designing Data Governance for Data Sharing: Lessons from Sidewalk Toronto.” Technology and Regulation 2020: 44–56.
Sidewalk Labs. 2018. Plan Development Agreement (PDA), July 31.
Sidewalk Toronto. 2019. Master Innovation and Development Plan. www.sidewalktoronto.ca/midp/.
Spurr, Ben. 2019. “Sidewalk Labs Wants to Create a New Transportation Authority.” Toronto Star, June 25. www.thestar.com/news/gta/2019/06/25/sidewalk-labs-wants-to-create-its-own-transportation-authority-in-quayside.html.
Strandburg, Katherine J., Frischmann, Brett M., and Madison, Michael J., eds. 2017. Governing Medical Knowledge Commons. Cambridge: Cambridge University Press.
Valverde, Mariana, and Flynn, Alexandra. 2020. Smart Cities in Canada. Toronto: James Lorimer.
Vanolo, Alberto. 2014. “Smartmentality: The Smart City as Disciplinary Strategy.” Urban Studies 51 (5): 883–98.
Vincent, Donovan. 2018. “Sidewalk Labs Reveals Plans for Housing in Quayside District on City’s Waterfront.” Toronto Star, December 3. www.thestar.com/news/gta/2018/11/29/sidewalk-touts-unprecedented-level-of-affordable-housing-at-quayside.html.
Vincent, Donovan. 2019. “Sidewalk Labs Vows Its 190-Acre Waterfront Plan Will Be ‘Economic Windfall’ for Toronto.” Toronto Star, June 24. www.thestar.com/news/gta/2019/06/21/google-sister-company-pitches-190-acre-toronto-plan-vows-quayside-will-be-economic-windfall-for-city.html.
Waterfront Toronto. 2017. “New District in Toronto Will Tackle the Challenges of Urban Growth.” Waterfront Toronto, October 17. https://waterfrontoronto.ca/nbe/portal/waterfront/Home/waterfronthome/newsroom/newsarchive/news/2017/october/new+district+in+toronto+will+tackle+the+challenges+of+urban+growth.
Wolpow, Nina. 2021. “Sidewalk Labs Spinout Replica Raises $41 Million Series B.” Forbes, April 21.
Wylie, Bianca. 2017. “Civic Tech: A List of Questions We’d Like Sidewalk Labs to Answer.” TORONTOIST, October 30. https://torontoist.com/2017/10/civic-tech-list-questions-wed-like-sidewalk-labs-answer/.
Wylie, Bianca. 2018. “Searching for the Smart City’s Democratic Future.” Centre for International Governance Innovation, August 13. www.cigionline.org/articles/searching-smart-citys-democratic-future.
Wylie, Bianca. 2020. “In Toronto, Google’s Attempt to Privatize Government Fails – For Now.” Boston Review, May 13.
Zarum, Lara. 2019. “#BlockSidewalk’s War against Google in Canada.” Nation, April 24. www.thenation.com/article/archive/google-toronto-sidewalk-gentrification/.

References

Artyushina, Anna. 2020. “Is Civic Data Governance the Key to Democratic Smart Cities? The Role of the Urban Data Trust in Sidewalk Toronto.” Telematics and Informatics 55: 101456. www.sciencedirect.com/science/article/abs/pii/S0736585320301155.
Bollier, David, and Helfrich, Silke, eds. 2012. The Wealth of the Commons: A World beyond Market and State. Amherst, MA: Levellers Press.
Brincker, Maria. 2017. “Privacy in Public and the Contextual Conditions of Agency.” In Privacy in Public Space, edited by Timan, Tjerk, Newell, Bryce Clayton, and Koops, Bert-Jaap, 64–90. Northampton, MA: Edward Elgar.
Brioschi, S. A., and Marino, S. D. 2018. “Hypothesis of Reconstruction of Ancient Cities through 3D Printing: The Case-Study of Thurii.” In Putting Tradition into Practice: Heritage, Place and Design, edited by Amoruso, G. INTBAU 2017. Lecture Notes in Civil Engineering, vol. 3, 654–61. Cham: Springer. https://doi.org/10.1007/978-3-319-57937-5_68.
Carr, Constance, and Hesse, Markus. 2020. “When Alphabet Inc. Plans Toronto’s Waterfront: New Post-Political Modes of Urban Governance.” Cogitatio, Urban Planning 5 (1): 69–83.
Christofi, Athena, and Verdoodt, Valerie. 2019. “Exploring the Essence of the Right to Data Protection and Smart Cities.” August 20. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3483616.
Couldry, Nick, and Mejias, Ulises A. 2019. The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Oxford: Oxford University Press.
Deeper Edge LLC. 2021. “About.” www.deeperedge.net/about.
Delacroix, Sylvie, and Lawrence, Neil. 2018. “Bottom-Up Data Trusts: Disturbing the ‘One Size Fits All’ Approach to Data Governance.” October 12. https://ssrn.com/abstract=3265315 or http://dx.doi.org/10.2139/ssrn.3265315.
Digi.city. 2021. “What Is a Smart City?” www.digi.city/smart-city-definitions.
Doctoroff, Daniel. 2020. “Why We’re No Longer Pursuing the Quayside Project and What’s Next for Sidewalk Labs.” May 7. https://medium.com/sidewalk-talk/why-were-no-longer-pursuing-the-quayside-project-and-what-s-next-for-sidewalk-labs-9a61de3fee3a.
Element AI. 2019. “Data Trusts: A New Tool for Data Governance.” https://hello.elementai.com/rs/024-OAQ-547/images/Data_Trusts_EN_201914.pdf.
Fleming, David. 2002. “The Streets of Thurii: Discourse, Democracy, and Design in the Classical Polis.” Rhetoric Society Quarterly 32 (3): 5–32. www.jstor.org/stable/3886007.
Fleming, David. 2009. City of Rhetoric: Revitalizing the Public Sphere in Metropolitan America. New York: State University of New York Press.
Floridi, Luciano. 2019. “Marketing as Control of Human Interfaces and Its Political Exploitation.” Philosophy and Technology 32 (3): 379–88. https://philpapers.org/rec/FLOMAC-2.
FortifID. 2021. “A Privacy-First Customer Onboarding and Validation Solution.” https://fortifid.com/.
Frischmann, Brett M. 2012. Infrastructure: The Social Value of Shared Resources. Oxford: Oxford University Press.
Frischmann, Brett, and Selinger, Evan. 2018. Re-engineering Humanity. Cambridge: Cambridge University Press.
Frischmann, Brett M., Madison, Michael J., and Strandburg, Katherine J. 2014. “Governing Knowledge Commons.” In Governing Knowledge Commons, edited by Brett M. Frischmann, Michael J. Madison, and Katherine J. Strandburg, 1–41. Oxford: Oxford University Press.
Galloway, Alexander. 2012. The Interface Effect. Chichester: Wiley.
Goodman, Ellen P., and Powles, Julia. 2019. “Urbanism under Google: Lessons from Sidewalk Toronto.” Fordham Law Review 88: 457–98.
Hess, Charlotte. 2012. “The Unfolding of the Knowledge Commons.” St. Anthony’s International Review 8 (1): 13–24. https://surface.syr.edu/sul/111/.
Kirkpatrick, A. 2015. “The Image of the City in Antiquity: Tracing the Origins of Urban Planning, Hippodamian Theory, and the Orthogonal Grid in Classical Greece.” Master’s thesis, University of Victoria Department of Greek and Roman Studies. www.semanticscholar.org/paper/The-image-of-the-city-in-antiquity%3A-tracing-the-of-Kirkpatrick/dec886275ce0b70330447f26a03e0b04f2a53916?p2df.
Kuang, Cliff, and Fabricant, Robert. 2019. User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work and Play. New York: Random House.
Lidwell, William, Holden, Kritina, and Butler, Jill. 2003. Universal Principles of Design. Gloucester, MA: Rockport Publishers.
Lu, Jacqueline. 2019. “How Can We Make Urban Tech Transparent? These Icons Are a First Step.” Sidewalk Labs, April 19. https://web.archive.org/web/20211006055344/https://medium.com/sidewalk-talk/how-can-we-make-urban-tech-transparent-these-icons-are-a-first-step-f03f237f8ff0.
MaRS. 2021. “Examples of Civic Data Trusts.” https://marsdd.gitbook.io/datatrust/trusts/global-examples.
McDonald, Sean. 2015. “The Civic Trust.” Medium, August 4. https://medium.com/@digitalpublic/the-civic-trust-e674f9aeab43.
McDonald, Sean. 2018. “Toronto, Civic Data, and Trust.” Medium, October 17. https://medium.com/@digitalpublic/toronto-civic-data-and-trust-ee7ab928fb68.
New America. 2019. “A Commons Approach to Data Governance.” www.newamerica.org/weekly/commons-approach-to-data-governance/.
Newen, Albert, De Bruin, Leon, and Gallagher, Shaun, eds. 2014. The Oxford Handbook of 4E Cognition. Oxford: Oxford University Press.
Norton Rose Fulbright. 2021. “The Metaverse: The Evolution of a Universal Digital Platform.” July 2021. www.nortonrosefulbright.com/en/knowledge/publications/5cd471a1/the-metaverse-the-evolution-of-a-universal-digital-platform.
OPAL Project. 2021. “About OPAL.” www.opalproject.org.
Project VRM. 2021. “About.” http://blogs.harvard.edu/vrm/about/.
Rutter, N. K. 1973. “Diodorus and the Foundation of Thurii.” Historia: Zeitschrift Für Alte Geschichte 22 (2): 155–76. www.jstor.org/stable/4435327.
Scassa, Teresa. 2020. “Designing Data Governance for Data Sharing: Lessons from Sidewalk Toronto.” Technology and Regulation 2020. https://techreg.org/article/view/10994.
Searls, Doc. 2018. “Why Personal Agency Matters More than Personal Data.” https://blogs.harvard.edu/vrm/2018/06/23/matters/.
Sidewalk Labs. 2020a. “Advancing Digital Transparency in the Public Realm.” May 21. https://process.dtpr.dev/.
Sidewalk Labs. 2020c. “Conversational UI Prototypes for Sidewalk Labs DTPR Phase 2.” https://swl-convo-ui-proto.herokuapp.com/307-numina-sensor/.
Sidewalk Labs. 2020d. “Exploring the Potential of Trusted Digital Assistants.” https://process.dtpr.dev/blog/research-session-3-exploring-the-potential-of-trusted-digital-assistants.
Spivey, Michael J., and Huette, Stephanie. 2014. “The Embodiment of Attention in the Perception-Action Loop.” In The Routledge Handbook of Embodied Cognition, edited by Shapiro, Lawrence, 306–14. Oxford: Routledge.
Stavrides, Stavros. 2020. Common Space: The City as Commons. London: Bloomsbury.
Vincent, Donovan. 2019. “Sidewalk Labs’ Urban Data Trust Is ‘Problematic,’ Says Ontario Privacy Commissioner.” Toronto Star, September 26. www.thestar.com/news/gta/2019/09/26/sidewalk-labs-urban-data-trust-is-problematic-says-ontario-privacy-commissioner.html.
Warner, Mark. 2019. “Senators Introduce Bipartisan Bill to Encourage Competition in Social Media.” October 22. www.warner.senate.gov/public/index.cfm/2019/10/senators-introduce-bipartisan-bill-to-encourage-competition-in-social-media.
Weiser, Mark. 1991. “The Computer for the 21st Century.” Scientific American, September 1.
Whipps, Heather. 2008. “How the Greek Agora Changed the World.” www.livescience.com/4861-greek-agora-changed-world.html.
Whitt, Richard S. 2013. “A Deference to Protocol: Fashioning a Three-Dimensional Public Policy Framework for the Internet Age.” Cardozo Arts and Entertainment Law Journal 31 (3): 689–768. https://cardozoaelj.com/wp-content/uploads/2013/08/Whitt-31.3.pdf.
Whitt, Richard S. 2018. “Hiding in the Open: How Tech Network Policies Can Inform Openness by Design (and Vice Versa).” Georgetown Tech Law Journal 3 (1): 28–80. https://georgetownlawtechreview.org/wp-content/uploads/2019/01/3.1-Whitt-pp-28-80.pdf.
Whitt, Richard S. 2020a. “New Days, New Paradigms: Covid-19 and Pathways to Our Digital Empowerment.” Medium, June 24. https://whitt.medium.com/new-days-new-paradigms-covid-19-and-pathways-to-our-digital-empowerment-3b57aa357e2b.
Whitt, Richard S. 2020c. “A Human-Centered Paradigm for the Web: HAACS in Action: Digital Fiduciaries, plus Personal AIs” (Article 3 of 6). Medium, August 17. https://whitt.medium.com/a-human-centered-paradigm-for-the-web-e7ceaee8fb0e.
Whitt, Richard S. 2020d. “From Thurii to Quayside: Creating Inclusive Digital Communities” (Article 5 of 6). Medium, October 22. https://whitt.medium.com/from-thurii-to-quayside-creating-inclusive-digital-communities-348cde93215f.
Whitt, Richard S. 2021a. “Hacking the SEAMs: Elevating Digital Autonomy and Agency for Humans.” Colorado Technology Law Journal 19 (1). https://ctlj.colorado.edu/?p=720.
Whitt, Richard S. 2021b. “Codifying the Right of Trustworthy Delegation: A Crucial Empowerment Tool for the Web.” Mozilla Blog, October 5. https://foundation.mozilla.org/en/blog/codifying-the-right-of-trustworthy-delegation-a-crucial-empowerment-tool-for-the-web/.
Whitt, Richard S. 2021c. “Exploring a Deeper Edge: Gaining Real-World Validation of the Digital Fiduciary Model.” Aapti Institute, The Data Economy Lab, November. https://thedataeconomylab.com/2021/11/20/gaining-real-world-validation-of-the-digital-fiduciary-model/.
Whitt, Richard S., and Schultze, Stephen. 2007. “The New ‘Emergence Economics’ of Innovation and Growth, and What It Means for Communications Policy.” Journal on Telecommunications and High Technology Law 7 (2): 217–316. http://jthtl.org/content/articles/V7I2/JTHTLv7i2_WhittSchultze.PDF.
World Economic Forum. 2021a. “First-of-Its-Kind Blueprint for Data Policy Adopted by City of Helsinki.” World Economic Forum Press Release, September 7. www.weforum.org/press/2021/09/first-of-its-kind-blueprint-for-data-policy-adopted-by-city-of-helsinki/.
World Economic Forum. 2021b. “Empowered Data Societies: A Human-Centric Approach to Data Relationships.” WEF White Paper, September 8. www.weforum.org/whitepapers/empowered-data-societies-a-human-centric-approach-to-data-relationships.
Figure 7.1. Categorical perceptions of action arenas

Figure 9.1. Screens, scenes, and unseens, GLIA Foundation

Figure 9.2. “SEA” cycle flows, GLIA Foundation

Figure 9.3. “SEAMs” cycle flows, GLIA Foundation

Figure 9.4. Digital fiduciary and data trusts, GLIA Foundation
