
12 - Co-constructing Misinformation and Community

Some Conclusions

Published online by Cambridge University Press:  13 March 2025

Madelyn R. Sanfilippo
Affiliation:
University of Illinois School of Information Sciences
Melissa G. Ocepek
Affiliation:
University of Illinois School of Information Sciences

Summary

Misinformation is ubiquitous in everyday life and exists on a spectrum from innocuous to harmful. Communities manage issues of credibility, trust, and information quality continuously, so as to mitigate the impact of misinformation when possible and evolve social norms and intentional governance to delineate between problematic disinformation and little white lies. Such coproduction of governance and (mis-)information raises a complex set of ethical, economic, political, social, and technological questions that requires systematic study and careful deliberation. The Conclusion discusses key themes across chapters in this volume, as well as connections to emergent themes from other books in this series, considering implications for future research, everyday life, and the governing knowledge commons framework.

Publisher: Cambridge University Press
Print publication year: 2025
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

Misinformation is neither more nor less than a category of information that diverges in some way from objectivity or truth; like its veracious counterparts, it is produced, saved, processed, and shared among individuals and groups of people, for better or for worse. The predominant assumption, however, is that misinformation is always bad or dangerous, disregarding little white lies and satire. As such, the broader sphere of misinformation research tends to focus on automatic detection or moderation of misinformation, the negative impacts of such content on political dynamics and polarization, and the social and psychological effects that shape susceptibility to misinformation or conspiracy.

In addition to all these social and psychological factors producing misinformation and increasing our susceptibility, there are a variety of structural and institutional facets that support and constrain our engagement with misinformation, as well as our ability to effectively manage, mitigate, and remediate associated harms. We have developed a variety of social norms over time that encourage overly positive or optimistic communication of information regardless of whether it aligns with truth, as we saw in Chapter 10; consider, for example, our expectations of positivity and superficiality in greeting others and asking how they are. Further, we have strong social norms across many countries with respect to face saving, as grounded in Hofstede’s facets of cultural identity (Hofstede 2011) as well as Goffman’s frame analysis (Goffman 1974). We expect people to put their best face forward in a variety of circumstances, from dating and interviewing to school applications, and it scarcely matters whether they are honest or truthful.

Beyond these social norms, we have a legal and regulatory system in the United States that privileges freedom of speech, regardless of information quality. While we have many institutional restrictions on hate speech or threats, with varying degrees of efficacy in their impact and enforcement, we do very little to restrict speech with respect to information quality in other contexts, or with respect to restrictions outside of government. Given that the modern-day public sphere exists as much, if not more, in online environments that are owned, managed, and governed by private entities rather than representative or public officials, there is much greater discretion to permit low-quality information to flourish, disseminate, and impact society. It is costly, and in fact cost prohibitive, to manage a platform’s impact and moderate the content produced by users to the degree necessary to fully eliminate every embellishment, falsehood, or piece of misinformation.

Yet this does not mean that we should give up. Rather, it means that we need to differentiate between those instances in which misinformation is truly harmful and those in which it merely occurs. Aligning rules with social norms can have this effect. For this to be effective, it is critical that researchers and policymakers effectively gauge social perceptions in different contexts and with respect to different populations. Toward this aim, we find that the Governing Knowledge Commons (GKC) framework helps us understand how different concerns and policy objectives align with different groups of people in order to institutionalize strategies as norms. Building on the GKC research tradition, which has historically focused on aspects of common resources such as intellectual property or biomedical research contexts, personal information as a resource, smart cities, and markets (e.g., Frischmann et al. 2014; Sanfilippo et al. 2021; Strandburg et al. 2017), this collection of case studies moves beyond that to specifically inquire about aspects of information and data quality. This approach helps us to understand how the GKC framework can more practically support research and governance arrangements, as well as the ways in which it ought to be extended and modified to better address aspects of misinformation as a social product coproduced with community.

One of the key aspects that needs to be addressed in amendments to the GKC framework is that of information quality. Historically, there has been an assumption that knowledge products produced by communities were inherently valuable. This series of case studies regarding misinformation highlights the ways in which many community knowledge products sit on a fine edge between valuable and costly, between normatively acceptable or appropriate and normatively frowned upon. It is imperative that we tease apart these tensions and identify the contextually specific line between good and bad, useful and harmful, in order to fully understand the nature of the resource and develop appropriate governance arrangements around it.

Complexity of Misinformation in Everyday Life

Consider a child caught in a lie. Whatever the falsehood or inconsistency, whether innocuous or serious, regardless of motivation, the child has only a few options when confronted with their lie. On the one hand, the child may own up to their lie, embarrassed or abashed. On the other hand, the child might double down, digging in their heels, or even upping the ante. There are, of course, middle-of-the-road options wherein they acknowledge exaggeration or walk back parts of the lie, and we can imagine many children who don’t want to give in, cede ground, or lose face. It is a pattern that happens every day, in every culture, in every context.

While we often recognize these lies for what they are, as in “the dog ate my homework” scenarios, there are certainly instances in which we take some untruth at face value, building future decisions and social constructions of reality upon poor quality information. More importantly, in many instances a child’s peers will buy into the lie, basing their own understanding or expectations upon falsehoods. Little by little, every day, normative assumptions about honesty allow, and even facilitate, tiny bits of misinformation and common nonsense to enmesh themselves in our perceived reality.

Further, it is our perceptions of reality, rather than some ideal neutral or objective reality, that often matter the most to our behaviors, expectations, and interactions. Every time we misremember an experience or detail, it is the interactions between our fallible memories and socially, emotionally influenced perceptions that fill in the blanks of uncertainties. Just like the children, we may or may not acknowledge uncertainty or inaccuracy in our memories. Many adults also double down on untruths, perhaps based on the little bits of truth they are certain of within the assemblage of ideas or memories, perhaps based on their perceived expertise in the domain, perhaps based on any number of contributing factors. For example, worldviews and prior knowledge color human perceptions and information behaviors, shaping our understanding of the world around us in ways that make us more or less susceptible to misinformation that aligns with our preconceptions and epistemology. In this sense, just as beauty lies in the eye of the beholder, so too does truth.

Information asymmetries also often align with social structures, hierarchies, and the status quo, thereby allowing ample opportunity for manipulation of information or framing to bend representation and reflections of reality to social dynamics in subjective and favorable ways. We are more likely to accept misinformation when it aligns with our interests, and to reject or suspect it when it is misaligned or infringes on our status, safety, or interests. It takes a more critical eye to recognize misinformation or falsehood when to do so counters one’s interests.

Ubiquitous misinformation in our everyday lives poses numerous challenges, to be sure, but more than that, the spectrum of information quality throughout our information environments requires nuance in how we address it. While misinformation is often created and propagated without an intent to deceive or malicious motivations, it nonetheless makes us all more susceptible in cases where deception is the aim, via desensitization, inescapability, and the intersection between information quality problems and information overload. However, this does not make us powerless to address these challenges. We need to learn from the innocuous to appropriately identify when misinformation is noxious or controversial, and how we can best handle those instances when we encounter them.

Through the case studies in this volume, we recognize the complexity of misinformation and its origins. Not all, or even a majority, of the misinformation that we encounter is malicious, but those few malicious instances can have disproportionate impact. As such, the emergent patterns regarding norms, expectations, and behaviors associated with misinformation engagement in the everyday world offer insight into what separates acceptable from unacceptable misinformation, and how we might address those problematic aspects and instances.

Emerging Patterns across Cases

Narrative Frames and Rhetoric

In considering the patterns of discussion surrounding informational framing and rhetoric with respect to misinformation, which emerged across our multiple case studies and were built on in the conceptual chapters, a few key elements arise that highlight both the unique aspects of misinformation as a resource and the inherent challenges in governing these resources.

Partial Truth

First among them, the most convincing pieces of misinformation couple truth with untruth. Regardless of intention, deception is especially impactful via this structure. From the principles of storytelling explored in Chapters 1 and 2, to Instagram influencers in Chapter 10, this was a key pattern providing evidence both for how people influence dialogue via misinformation and why people are susceptible. If we recognize that most information is analogous to an onion with many layers, how do we peel apart the layers of truth from those layers that are problematic? How do we ensure that interpretation of information is correct or contextual? In this sense, problems of governing misinformation are not merely about enforcement of standards of quality. At a much more complex level, the real challenge is in how to differentially enforce standards or norms when some inaccuracies are tolerable and some are not, or when untruths are tightly coupled to truths.

Chapters 2 and 3 provide qualitative and conceptual understanding as to why this is particularly effective, as well as of the duality of this practice. For example, Chapter 3 analyzed how misremembering who won the Oscar for Best Picture in a particular year layers fact with honest mistake; it only causes problems as that misinformation gets passed on, and is certainly without serious consequences. In other instances, it’s not a matter of a simple mistake but rather opinion that layers with fact; Chapter 6 illustrates how Google’s Local Guides and the crowdsourcing of business and service reviews via digital platforms embed “hot takes” in what otherwise may be factual recountings of individuals’ experiences in particular places. The compelling nature of truth and conspiracy coupled together is illustrated in Chapter 9, via analysis of QAnon. Taken as a whole, we see a spectrum from the innocuous to the quite serious among misinformation that leverages this strategy.

How might we deal with this practice? Wikipedia is one place we might look for lessons, as its content moderation strategies, reflecting knowledge commons arrangements, do not throw out entire articles over bits of unsupported opinion or inaccuracy, but rather iteratively approach some consensus over truth.

Audience and Receptiveness

Second, audiences are especially receptive to messaging, whether it carries valuable information or misinformation, when it is delivered by or framed with respect to trusted sources or individuals. Whether those sources are public figures, as in Chapter 10, or family and friends, as in Chapters 2 and 7 through 9, or sources of authority, as in Chapter 4, there is considerable evidence that framing with respect to legitimate, credible, trusted, and recognizable entities leads people to share and believe misinformation.

Chapters 6 and 10 showed how platform design features indicating contributions to Local Guides or influence on Twitter, respectively, via metrics and engagement, serve as signposts for other users and imply credibility as a source, even when the contributor is not directly known. Other cases, as in Chapters 4 and 11, illustrate how triangulation, in the form of signposting in course syllabi or pointing to official policies on social media, can be used to communicate implicit authority and to reinforce false statements with policies that are not necessarily aligned with that misinformation. In this sense, institutionalization is compelling to audiences, even when they do not understand or attempt to interpret the details. This aligns with the discussion of epistemology, as presented relative to the complexity of everyday misinformation; signifiers make us more receptive to false content.

Storytelling

Third, technology eases storytelling and knowledge construction patterns, providing social and networked mechanisms to communicate in ways that appear polished and compelling and that are more far-reaching than traditional analog writing or oral storytelling traditions. In lowering barriers to content production, reducing distance between those who share content and those who consume it, and providing feedback mechanisms, technologies make many forms of storytelling engaging without requiring us to be authentic (or BeReal). In fact, they facilitate more social construction of misinformation while providing the appearance of authenticity, by supporting multiple emotional layers and enhancing contextual narratives in ways that might enhance or downplay the messiness of reality, depending on the objectives of the storyteller, to provide salience.

Every chapter in this collection highlighted the intersection between technology and storytelling, with Chapter 2 and those cases that engaged with social media providing some of the clearest illustrations. However, Chapter 5 illustrated this in one of the most surprising ways: password misinformation and misperceptions regarding security practices are not a problem born of attitudes or a lack of knowledge, but rather of an often technologically mediated system of delivering best practices or rules to users that leads nonexperts to absorb the wrong lessons.

Socialization in Context

As quality knowledge and misinformation are coproduced with community, socialization in those communities is key to discerning fact from fiction, as well as understanding the line of what is and what is not appropriate. Certainly, humans are fallible, making genuine mistakes, minor errors, or miscommunications. However, without meaningful socialization in a community, we are much more likely to be susceptible to deception, as observed in many cases.

Belongingness

Who belongs and who perceives their belongingness, not merely who participates in a community, is key to outcomes and governance patterns in many cases.

At a deeper level than active or passive participation, or the secondary impacts of interacting with active participants in other relationships (explored in relation to issues of community boundaries in the next subsection), issues of acculturation and belongingness are also tightly coupled to evaluations of the appropriateness of misinformation and of governance interventions regarding it, as in Chapters 6, 7, 8, and 11. Subjective judgments about intentions to deceive versus novice misunderstandings are related to perceptions of belongingness in online communities, as well as in educational contexts, as in Chapter 4, and in interactions between experts and novices around security, as in Chapter 5. Understanding of contextual norms about truthfulness, or perceived understanding, also appears associated with the strength of enforcement and the types of consequences violators of expectations might face.

Power and Incentives

Power, and the incentives to produce, share, and engage with content, in many of the cases explored tell us a lot about the relative impact of influencers on spreading misinformation, the pressures to accept misinformation in order to belong, and inequities with respect to enforcement of governance.

Many of the cases examined instances in which misinformation sharing and production actually empowered the powerless. From marginalized voices in Brazil and India (Chapters 7 and 8, respectively) to US undergraduates simply looking to fit into computer science programs, people often reinforce misinformation once they get past community boundaries so as to reinforce their own belongingness. Power was also tightly coupled with influence on norm formation processes, setting community standards via influence and esteem within communities, such as with reinforcement logic in Facebook QAnon groups, in Chapter 8, and the power of Instagram influencers to move on from violations of community expectations, as in Chapter 11. Perceptions about the appropriateness of misinformation were tightly coupled in many chapters to power, as power was also an incentive to produce misinformation by generating more engagement via sensational or manipulative content.

Echoing Other Applications of the GKC Framework

The final two patterns importantly echo key aspects of other collections of knowledge commons case studies, including case studies on governing privacy (Sanfilippo et al. 2021) and smart cities (Frischmann et al. 2023). Issues regarding infrastructure and the margins of participation, as associated with boundaries, gatekeeping, or outsiders, were prevalent and impactful on outcomes in many of the cases in this volume.

Infrastructure

As with other knowledge resources, we are dependent on infrastructure at material, social, knowledge, and technological levels in order to produce, share, and engage with misinformation. Infrastructure is often easy to overlook, but its importance to the lifecycle of knowledge resources, including misinformation, is especially evident when examining the coproduction of ephemeral communities on social media and the prominent content of the moment. Across many of the cases in this collection, the layers of information and the infrastructure supporting them were important in shaping the unique concerns in each case, including contextual norms about what was and was not appropriate in terms of sharing misinformation, as with Chapter 11.

Platform documentation, including moderation policies, was clearly identified in many other cases, such as Chapters 7, 9, and 10, as providing support for the proliferation of misinformation rather than mechanisms to appropriately govern or moderate it if and when it gets out of hand. This highlights where platforms fall short in supporting the needs of their communities, in contrast with the example provided in Chapter 8 regarding WhatsApp introducing limits on forwarding content to reduce the reach of misinformation.

Boundaries, Gatekeeping, and Impacted Outsiders

Participation dynamics also contributed to the production, reach, and impact of misinformation throughout every case study. As researchers considering who the relevant communities are, again and again we were reminded that it is necessary to look more broadly. What does it really mean to participate? Who is eligible? If someone did not participate in misinformation production, but was exposed to it or passed it along to others, what is their responsibility?

Various issues regarding boundaries, gatekeeping, and impacted outsiders arose in each case study, echoing a key pattern evident in many other applications of the GKC framework. Those with power over knowledge (and misinformation) production and governance are not necessarily the only people impacted. While, on the surface, lurkers in the online communities examined might appear to be the group most directly impacted by misinformation, the impact on family members, as in Chapter 9’s discussion of QAnon and Chapter 2’s scenarios regarding family lore, highlights the everydayness of these issues and their broad reach in terms of secondary influence.

Further, gatekeeping is often supported by misinformation, as in Chapter 4. Exclusionary practices and proliferation of misinformation as a weapon to maintain control and dominance in an academic discipline or field, as in Chapter 5, depend on bending the truth, layering fact with negative and fearful sentiment, and framing.

Broad Lessons

Patterns from this collection of case studies offer several broad lessons, from reminders of unremarkable but often overlooked axioms to more nuanced and actionable insights.

At one end of that spectrum: Sometimes people simply get things wrong. This can happen by accident, perhaps compounding other factors. It may not reflect malice, even when it is undoubtedly a reflection of bias. For example, in 2023, ESPN credited Vladimir Guerrero Jr. as the second Cuban-born winner of the Home Run Derby, after Yoenis Céspedes, despite the fact that he is the son of Dominican MLB Hall of Famer Vladimir Guerrero (not coincidentally also a Home Run Derby winner) and was born in Montreal, making him Canadian-Dominican. Yet his name led to an assumption by the network research team, and their claim went unchecked prior to inclusion in an internationally broadcast chyron. This incident illustrates both a simple truth (mistakes happen) and an uglier reality (complex social factors contribute to those mistakes in undeniable ways that cannot go unremarked upon). As such, a broad lesson is that we need to be able to correct mistakes, and those who produced them are responsible for that.

However, a responsibility to correct erroneous content is not necessarily a responsibility to correct misinterpretation. Beyond the cases published in this volume, there are many examples of satire that are misinterpreted as sincere news or communication, from politicians who participated in Stephen Colbert’s Better Know a District (2005–2014) parody interview series to those who quote The Onion as if it were journalism. The comedians behind these are not responsible for correcting misconceptions; in fact, the humor elicited by the dividing line between who is and is not in on the joke is key to the success of these examples. In this way, parody, as a light-hearted and certainly appropriate example of misinformation, illustrates the analogy to political trolling, which, despite its extremely negative connotations to the general public given recent election cycles and issues of election interference, is pervasive among innocuous social media interactions between users and elected officials, such as rickrolling or pseudo-sincere engagement to draw attention to policy positions that the public feels are absurd; for example, asking former Vice President Mike Pence for reproductive health advice (e.g., Conner 2020). Both embody the truthiness issues that continue to grow.

The malleability of truth is another key takeaway from this collection of case studies. While we have long known about the social construction of reality in the social sciences, broader perceptions of quantitative indicators, whether measurements or apparently “neutral” statistics, as objective have long delineated certain types of information as unassailable truths. Yet mounting distrust of experts, authorities, and professionals has eroded the trust in and perceived legitimacy of many of these facts. Is the earth flat or round? This question was answered correctly at least as early as the sixth century BCE by Pythagoras (Arif et al. 2019), with increasing acceptance over time; yet recent years have seen the number of flat earthers grow again, perhaps foremost among them professional basketball player Kyrie Irving (Paolillo 2018). Politicization and polarization over these issues, coupled with socioeconomic dynamics and technologies that enhance and exacerbate these processes, explain the unfortunate trend.

However, eradication of misinformation is not only impossible but also undesirable. Beyond satire, there are instances, as in Chapter 11, where misinformation is socially normative. We need to reduce the spread, impact, and availability of harmful and antinormative misinformation, but not of all misinformation. Many communities are good at identifying what is right or wrong for them. Contextual norms should stipulate enforcement and prioritization. What communities need are meaningful tools to address misinformation and to reduce its pervasiveness and spread, as with both WhatsApp cases in Chapters 7 and 8, wherein simple changes regarding limits on forwarding had significant impacts; a sketch of this kind of friction follows below. Beyond these simple design changes that can introduce useful friction to reduce the spread of misinformation, the space where we most clearly require additional support is with respect to manipulation, as it reflects a more challenging problem for communities to address without support or meaningful enforcement mechanisms, whether technical or legal.
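To make this kind of “useful friction” concrete, the minimal sketch below illustrates the general idea of a per-message forwarding cap; it is purely illustrative, does not reflect WhatsApp’s actual implementation, and all names and thresholds in it are hypothetical.

```python
# Illustrative sketch of forwarding friction; not any platform's real logic.
# FORWARD_LIMIT and FREQUENT_FORWARD_MARK are hypothetical values.
from dataclasses import dataclass

FORWARD_LIMIT = 5          # cap on how many chats one forwarding action may reach
FREQUENT_FORWARD_MARK = 4  # hop count after which a message is labeled as heavily forwarded


@dataclass
class Message:
    text: str
    hops: int = 0  # how many times this message has already been forwarded


def forward(message: Message, recipient_chats: list[str]) -> list[str]:
    """Forward a message to at most FORWARD_LIMIT chats, labeling heavily forwarded content."""
    allowed = recipient_chats[:FORWARD_LIMIT]  # friction: cap the fan-out of a single action
    message.hops += 1
    label = " [forwarded many times]" if message.hops >= FREQUENT_FORWARD_MARK else ""
    return [f"to {chat}: {message.text}{label}" for chat in allowed]


# Example: a user tries to blast a rumor to eight group chats at once;
# only five deliveries go out, slowing diffusion without judging the content itself.
rumor = Message("Did you hear...?")
print(forward(rumor, [f"group-{i}" for i in range(8)]))
```

The point of such a mechanism is that it limits reach without requiring any judgment about information quality, which is why it can be deployed broadly even where contextual norms differ.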

As we think about the increasing ubiquity and nonplussing nature of misinformation in everyday life, we might consider the lyrics from Michael Jackson’s (1982) song “Billie Jean”: “and the lies become the truth.” Not only does this provide an excellent example of how socially constructed truthiness may be perceived as reality, it also reflects the inherent polarization around truth and lie, fact and fiction. Michael Jackson, as the singer and songwriter in question, is the embodiment of social disagreement regarding truth, with some revering the art and reviling the man, while others embrace or reject both. If there is a high degree of uncertainty around him and the reality of his life, what do we really know? What do we believe? What do we want to believe? How do we know it? What do we trust? Yet despite all these questions, we feel strongly about our position.

Finally, as a pair of academic parents editing this book who both try, with varying degrees of success, to pass along a love of reading to our children, we saw many of these insights echoed in children’s books moralizing and satirizing values around truth and honesty. For example, Ian Falconer, author of Olivia (an anthropomorphic pig with a colorful personality), wrote (Falconer 2017):

Mother: “Well, Olivia, what have you learned by eavesdropping?”

Olivia: “Partial truths and misinformation – “

Mother: “And how did that make you feel?”

Olivia: “Insecure and suspicious – “

Olivia speaks for us all in this moment. There are certainly costs to misinformation and, as we saw in the study of QAnon on Facebook in Chapter 9, conspiracy. It is important to confront these issues in our everyday lives and for families to have engagement with reality, not merely digitally mediated constructions of it.

Amending the GKC Framework

Many cases illustrated the coproduction of information resources and community, exemplifying the typical institutional structure of the knowledge commons. In this sense, the GKC framework is useful for exploring institutionalization processes to manage misinformation resources, as well as to mitigate harms from communities’ misinformation. Yet we also gain significant insights into the ways in which the GKC framework might be expanded to explore the quality of information resources and the nuanced processes that allow similar misinformation to be governed differently depending on intent and on perception by the community.

The patterns that emerged from these misinformation cases suggest the following concepts are key to knowledge commons processes:

  1. Infrastructure: Shared infrastructure has long been explored with respect to commons, both in relationship to physical resources (McGinnis 1999; Ostrom 1990) and knowledge commons (Frischmann et al. 2014), yet control of the infrastructure has not directly been explored. Rather, endogeneity has been assumed via a question regarding shared infrastructure. As such, we suggest it would be fruitful to ask: Is exogenous infrastructure required to engage with resources? Who controls infrastructure?

  2. Incentives for knowledge production: These cases indicated that there are often financial and power incentives to create misinformation or to sensationalize relevant information to capitalize on modern recommendation systems and surveillance capitalism, as when advertising dynamics mean there is value in a click, a share, or other forms of engagement. As such, it is important to interrogate: What incentives exist to produce knowledge? To participate in the commons?

  3. Contextual complexity: Analysis of misinformation in the everyday importantly illustrated how complex aspects of context unintentionally influence governance arrangements and knowledge production dynamics. Intersections between the everyday and politics and speech protections, for example, complicate efforts to enforce governance and appropriately address misinformation. However, the implications suggest applications beyond this context: What parallel action arenas might inadvertently or unintentionally impact governance?

  4. Implicit institutional dynamics: Past cases have explored informal norms and their formation as implicit institutional forces on commons arrangements, yet did not directly examine other implicit issues of political influence or power. We suggest asking: What implicit power dynamics influence the commons?

  5. Information quality: Falsehood and deception are key attributes of misinformation, indicating information quality issues both for information behavior in the everyday and for information professionals. Based on these issues, we offer the following questions: Who evaluates dimensions of resource quality? Are there criteria or standards to delineate important dimensions of quality?

As with other collections of case studies, the questions raised based on these concepts were appended to illuminate future cases, as indicated in Table 12.1 via italicization.

Table 12.1 An appended GKC framework (emerging questions based on everyday misinformation cases are indicated via italicization)

Knowledge commons framework and representative research questions
Background environment
What is the background context (legal, cultural, etc.) of this particular commons?
What normative values are relevant for this community?
What is the “default” status of the resources involved in the commons (patented, copyrighted, open, or other)?
How does this community fit into a larger context? What relevant domains overlap in this context?
Attributes
Resources
What resources are pooled and how are they created or obtained?
What are the characteristics of the resources? Are they rival or nonrival, tangible or intangible?
Is there shared infrastructure? Is exogenous infrastructure required to engage with resources? Who controls infrastructure?
What is personal information relative to resources in this action arena?
What technologies and skills are needed to create, obtain, maintain, and use the resources?
What are considered to be appropriate resource flows? How is appropriateness of resource use structured or protected?
Who evaluates dimensions of resource quality? Are there criteria or standards to delineate important dimensions of quality?
Community members
Who are the community members and what are their roles, including with respect to resource creation or use, and decision making?
Are community members also information subjects?
What are the degree and nature of openness with respect to each type of community member and the general public?
What noncommunity members are impacted?
Goals and objectives
What are the goals and objectives of the commons and its members, including obstacles or dilemmas to be overcome?
Who determines goals and objectives?
What values are reflected in goals and objectives?
What are the history and narrative of the commons?
What is the value of knowledge production in this context?
What incentives exist to produce knowledge? To participate in the commons?
Governance
Context
What are the relevant action arenas and how do they relate to the goals and objectives of the commons and the relationships among various types of participants and with the general public?
Are action arenas perceived to be legitimate?
What parallel action arenas might inadvertently or unintentionally impact governance?
Institutions
What legal structures (e.g., intellectual property, subsidies, contract, licensing, tax, antitrust) apply?
What other external institutional constraints are imposed? What government, agency, organization, or platform established those institutions and how?
How is institutional compliance evaluated?
What are the governance mechanisms (e.g., membership rules, resource contribution or extraction standards and requirements, conflict resolution mechanisms, sanctions for rule violation)?
What are the institutions and technological infrastructures that structure and govern decision making?
What informal norms govern the commons? What implicit power dynamics influence the commons?
What institutions are perceived to be legitimate? Illegitimate? How are institutional illegitimacies addressed?
Actors
What actors and communities are members of the commons, participants in the commons, users of the commons, and/or subjects of the commons?
Who are the decision makers and how are they selected? Are decision makers perceived to be legitimate? Do decision makers have an active stake in the commons?
How do nonmembers interact with the commons? What institutions govern those interactions?
Are there impacted groups that have no say in governance? If so, which groups?
Patterns and outcomes
What benefits are delivered to members and to others (e.g., innovations and creative output, production, sharing, and dissemination to a broader audience, and social interactions that emerge from the commons)?
What costs and risks are associated with the commons, including any negative externalities?
Are outcomes perceived to be legitimate by members? By decision makers? By impacted outsiders?
Do governance patterns regarding participation provide exit and/or voice mechanisms for participants and/or community members?
Which rules-in-use are associated with exit-shaped, voice-shaped, or imposed governance? Are there governance patterns that indicate the relative impact of each within the commons overall?

Final Reflections

As the opportunity to make final revisions came in summer 2023, it seemed imperative to reflect on how what we had learned from these cases applied to broader conversations about truth (and Truth™), misinformation, and politics, as wave after wave of indictments associated with a former US president rolled in. What about misinformation in everyday life, and the innocuous and contentious ways in which people govern it, might help us to make sense of the surreality we find ourselves in? Issues of trust around channels of communication and specific messages, across cases and throughout history, point to why this scenario is so challenging; longstanding digital literacy principles to look toward experts and authority for quality information are undermined when the legitimacy of the role is in question and when those in power have the motivation and behavioral patterns of deception. This is not merely a problem of media or political communication, but a much broader one.

Perhaps the answer is in something as relatable as a childhood game of telephone? The average person does not find it difficult to see how the message becomes distorted as one giggling child whispers to another. It also maps intuitively onto longstanding information theory, namely Shannon and Weaver’s mathematical theory of communication and its account of noise as it impacts communication channels and information quality (Shannon 1948; Weaver 1953). Each time the message is forwarded, opportunities arise for the message to become distorted, whether intentionally or unintentionally. Beyond that, there is also incentive to begin with a silly, attention-grabbing, or sensational message to capture and hold engagement. If this happens on the scale of the everyday, of course it also happens in all contexts regardless of impact or seriousness. If we look at how this game maps onto patterns of sharing on social media, in an era of filter bubbles, we see how algorithms may reduce noise, both in the form of detraction from content and in the form of diversity of content; this too poses a problem, as we are increasingly exposed to echo chambers that are susceptible to misinformation via partial truth or rhetoric that appears, from the perspective of the recommendation system, analogous to legitimate content.
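As a minimal, purely illustrative way of seeing why each additional hop matters, suppose (our simplifying assumption, not a claim from the chapter or from Shannon) that every forwarding step independently leaves the message intact with probability 1 − p; then the chance that the original message survives n hops unchanged decays geometrically:

\[
\Pr(\text{message intact after } n \text{ hops}) = (1 - p)^{n},
\qquad \text{e.g., } p = 0.1,\; n = 10 \;\Rightarrow\; 0.9^{10} \approx 0.35 .
\]

Even a small per-hop distortion rate thus leaves only about a third of messages unchanged after ten retellings, which is the shared intuition behind the telephone game and friction-based interventions such as forwarding limits.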

With or without modern technology, there is also a long history demonstrating human motivation to ignore ugly facts or hypothesize unfounded alternative explanations for reality, as with numerous conspiracies about the Kennedy family or Holocaust deniers. These conspiracies are sometimes emotional reactions or efforts to reconcile facts with alternative epistemologies, while others are born of malice or bias or bigotry. Sometimes they reflect concerted efforts to control the historical record, as with friends of presidents writing their official biographies to proclaim their significance or greatness (Aldrich 1992), when the reality was that they ranged from unremarkable to terrible. They offer a contrast to instances of error in modern history, such as the headline “Dewey Defeats Truman.”

And thus, the takeaway from these examples and comparisons is that while the fact pattern might change, as we see in the many scandals and challenges around Trump (or Theranos or FTX, as in the Introduction) providing an absurd combination of issues, none of the issues are new. For example, Octavian leveraged misinformation to gain support and undermine Mark Antony in Rome between 42 and 23 BCE (Fraser 2020); he certainly was not the first to do so.

These examples and the cases presented in this book illustrate that while the same behaviors occur across time and contexts, reflecting universal behavioral patterns about information quality, engagement, and flow, norms about appropriateness vary widely. When we compare half-truths in dating profiles to half-truths in the personal narratives of politicians, it is a comparison between something that is par for the course and “lies and the lying liars who tell them” (Franken 2004). Ironically, New York Representative George Santos is the embodiment of this dichotomy in a single person, illustrating the lack of concern about patterns of untruth when they did not affect democracy, as opposed to widespread bipartisan concern when they did (Guest and Wild 2023).

We collectively produce (mis)information, community, and governance of that (mis)information and community in the everyday. They are necessarily linked and sociotechnically constructed in the modern information environment. Just as misinformation is quotidian, so too are the knowledge commons arrangements with the potential to appropriately govern misinformation.

References

Aldrich, Elizabeth Kaspar. 1992. “Necessary Saints: Some Reflections on the Varieties of American Hagiography.” Yale Journal of Criticism 5 (3): 1.
Arif, Faiz, Rahman, Abdul Aziz Ab, Maulud, Khairul Nizam Abdul, and Kamaludin, Amir Husni. 2019. “Debunking Flat Earth: From Geomatics Perspective.” In 2019 6th International Conference on Space Science and Communication (IconSpace), 150–153. IEEE.
Colbert, Stephen. 2005–2014. “Better Know a District.” The Colbert Report. Comedy Central.
Conner, Berkley D. 2020. “Menstrual Trolls: The Collective Rhetoric of Periods for Pence.” The Palgrave Handbook of Critical Menstruation Studies: 885–899.
Falconer, Ian. 2017. Olivia the Spy. New York: Atheneum/Caitlyn Dlouhy.
Franken, Al. 2004. Lies – and the Lying Liars Who Tell Them: A Fair and Balanced Look at the Right. New York: Penguin.
Fraser, Matthew. 2020. In Truth: A History of Lies from Ancient Rome to Modern America. Lanham, MD: Rowman & Littlefield.
Frischmann, Brett M., Madison, Michael J., and Sanfilippo, Madelyn Rose, eds. 2023. Governing Smart Cities as Knowledge Commons. Cambridge, UK: Cambridge University Press.
Frischmann, Brett M., Madison, Michael J., and Strandburg, Katherine J., eds. 2014. Governing Knowledge Commons. Oxford, UK: Oxford University Press.
Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press.
Guest, Michael, and Wild, Susan. 2023. “Statement of the Chairman and Ranking Member of the Committee on Ethics Regarding Representative George Santos.” Committee on Ethics, US House of Representatives, March 2.
Hofstede, Geert. 2011. “Dimensionalizing Cultures: The Hofstede Model in Context.” Online Readings in Psychology and Culture 2 (1): 8.
Jackson, Michael. 1982. “Billie Jean.” Epic Records.
McGinnis, Michael Dean. 1999. Polycentricity and Local Public Economies: Readings from the Workshop in Political Theory and Policy Analysis. Ann Arbor, MI: University of Michigan Press.
Ostrom, Elinor. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge, MA: Cambridge University Press.
Paolillo, John C. 2018. “The Flat Earth Phenomenon on YouTube.” First Monday. https://doi.org/10.5210/fm.v23i12.8251.
Sanfilippo, Madelyn Rose, Frischmann, Brett M., and Strandburg, Katherine J., eds. 2021. Governing Privacy in Knowledge Commons. Cambridge, UK: Cambridge University Press.
Shannon, Claude Elwood. 1948. “A Mathematical Theory of Communication.” Bell System Technical Journal 27 (3): 379–423.
Strandburg, Katherine J., Frischmann, Brett M., and Madison, Michael J., eds. 2017. Governing Medical Knowledge Commons. Cambridge, UK: Cambridge University Press.
Weaver, Warren. 1953. “Recent Contributions to the Mathematical Theory of Communication.” ETC: A Review of General Semantics 10 (4), Special Issue on Information Theory (Summer): 261–281.
