
Why Are All of the Children Perceived to Be Above Average? Stakeholders and the Lake Wobegon Effect in Attitudes toward Public Schools

Published online by Cambridge University Press:  05 December 2023

Timothy Vercellotti*
Affiliation:
Department of History, Philosophy, Political Science, and Economics, Western New England University, Springfield, MA, USA
Peter Fairman
Affiliation:
Department of History, Philosophy, Political Science, and Economics, Western New England University, Springfield, MA, USA
Corresponding author: Timothy Vercellotti; Email: [email protected]

Abstract

The Lake Wobegon effect, named for the fictional town where all children are above average, is well documented in surveys about education. Respondents tend to rate their local public schools higher in quality than schools overall in the state or nation, even in the face of contrary evidence. One potential explanation for this disconnect is a psychological construct known as “illusory superiority.” While the superiority aspect of illusory superiority is well studied, the illusory nature of these attitudes typically is assumed rather than empirically demonstrated. Further, the predictors of these attitudes merit exploration. We seek to address both points. We hypothesize that illusory superiority in the context of attitudes toward public education may be driven in part by self-interest, and thus may be more likely to be found among those with the biggest stake in local schools, such as parents of students, homeowners, and longtime residents. Using survey data from Massachusetts, we find a factual basis for illusory superiority by comparing perceptions of local school performance on standardized tests to actual scores. We also model predictors of illusory superiority, including property ownership, length of residency, and having children enrolled in public schools, and the role that illusory superiority may play in school ratings. We then assess the effects of overstatement of test scores on attitudes toward a key educational issue – whether to increase taxes to provide additional funding for local schools.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of the State Politics and Policy Section of the American Political Science Association

Introduction

“Be true to your school,” as the Beach Boys’ lyric goes, might easily describe attitudes toward public education. Numerous studies have found that individuals are more likely to assess their local public schools in positive terms compared to public schools in general. This tendency is known in public policy circles as the “Lake Wobegon effect,” drawing from the name of radio personality Garrison Keillor’s fictional community in which, among other things, all of the children are above average. The Lake Wobegon effect has been widely studied and documented, not just in the realm of education attitudes, but also in self-assessments of individual performance in different settings and in evaluations of workplaces and other institutions. One potential explanation for these overly positive assessments is a psychological construct known as “illusory superiority,” in which individuals set themselves, or the institutions to which they belong, apart through favorable comparisons with other individuals or institutions. These assessments may stem from a need to build or maintain self-esteem or to reassure oneself that one is safe from possible negative events or outcomes (Hoorens 1993).

Studies that document the Lake Wobegon effect, particularly in the area of attitudes toward public education, typically reveal a disparity between positive assessments of local schools and positive assessments of schools in general. We aim to build on that literature by measuring citizens’ perceptions of school performance on statewide standardized tests, and then comparing those assessments to the schools’ actual performance on the tests. In addition, we contribute to the literature by examining the potential tendencies of three groups of stakeholders to overstate local school performance: parents of children who attend public schools, homeowners, and longtime residents of communities. While studies routinely explore parents’ views on school performance, and a few also examine the views of homeowners (see, e.g., Chingos, Henderson, and West 2012), our research also explores residential longevity to test whether assessments of local schools vary based on the depth of one’s roots in the community.

Understanding the origins and extent of the disconnection between perceived and actual school performance is important for at least two reasons. First, major changes in education policy and tax increases for public education often require the consent of citizens in the community. If local residents have overly rosy views of their schools, policy changes and tax increases designed to improve those schools might lack widespread support. Second, to the extent that children who attend underperforming schools might be entitled to switch schools under school choice provisions, parents’ exaggerated views about the success of their local schools could prevent them from exploring educational alternatives that might benefit their children. As a result, understanding the roots and impact of overly positive assessments of public schools has important ramifications for schools and students.

This paper proceeds as follows. We begin by giving an overview of the Lake Wobegon effect in the context of attitudes toward public education, as well as in other contexts. We then discuss the psychology behind the Lake Wobegon effect, and the role that illusory superiority might play in this area. Drawing from previous research, we develop hypotheses regarding the groups most prone to this type of thinking, as well as the role that these attitudes might play when it comes to raising taxes to support local schools. We test our hypotheses using data we gathered in a statewide survey of Massachusetts adults matched to standardized test scores for public schools in the communities in which those adults reside. We report our results and discuss their implications for the understanding of illusory superiority in the area of education attitudes, as well as the potential policy implications.

The Lake Wobegon effect in education and elsewhere

The tendency for people to like their local public schools more than they like schools across the nation comes through repeatedly and clearly in annual polls conducted by PDK International, a professional association for educators, and Education Next, a journal focusing on K-12 education issues. Education Next asks survey respondents to grade public schools using a letter grade system. Responses from parents of children in public schools are invariably in line with the Lake Wobegon effect. In 2017, 27% of parents gave “public schools in the nation as a whole” a grade of A or B, but 62% of parents gave “public schools in your community” such a rating. The numbers for the general population were 23% for schools in the nation overall and 54% for local schools (West et al. 2017). Similar results have occurred in the annual survey in previous years, dating to 2007, as well as in PDK International surveys (see Moe 2001; Peterson, Henderson, and West 2014).

Survey respondents’ general optimism about schools in their community does not translate into broad satisfaction with the amount of money being spent on them, however. As one example, 63% of respondents in the 2012 Education Next survey felt that “government funding for public schools in your district” should increase, while just 9% thought that it should decrease (Howell, West, and Peterson 2013). Even when the questions concerned schools nationally, respondents favored more spending. For instance, 55% of respondents in the 2011 poll thought that teacher salaries should be increased, while just 7% said that they should be decreased. Perhaps not surprisingly, once the subject in these polls turned to raising taxes, respondents were quicker to close their wallets. The 2011 survey asked people whether “local taxes to fund public schools in your district should increase, decrease, or stay about the same.” A solid majority, 57%, thought that they should remain as they are, with just 28% saying they should be increased (Howell, West, and Peterson 2011).

The Lake Wobegon effect is not limited to attitudes toward public schools. Corporate managers and team leaders in 120 companies surveyed across nine countries tended to give higher marks to their organizations compared to competitors, with disparities decreasing only as respondents learned more information about their competitors (Betts, Croom, and Lu 2011). High school students indicated that they possessed positive traits to a greater extent and negative traits to a lesser extent than others on average (Hoorens 1995). College students with below average grades and test scores also were likely to overstate their performance (Maxwell and Lopus 1994). In addition, people tend to express unrealistic optimism about their health risks compared to others (Dunning, Heath, and Suls 2004). The illusion of superiority also can vary by the level of difficulty of tasks. Moore and Small (2007) found that individuals rate themselves as being above average on easy tasks and below average on difficult tasks.

Psychological basis for illusory superiority

Hoorens (1993) defines illusory superiority as the degree to which an individual reports possessing positive characteristics to a greater extent than the average person, and having negative characteristics to a lesser extent than the average person. This way of thinking can apply to personality traits, abilities, conformity to group norms, or life circumstances. Illusory superiority is one of a collection of “self-related superiority biases” (Hoorens 1993, 114). These biases include false consensus, in which an individual over-estimates the number of others who share an attribute with that individual; false uniqueness, in which an individual under-estimates the number of others who share an attribute with that individual; and unrealistic optimism, in which one over-estimates the likelihood of experiencing desirable events and under-estimates the likelihood of experiencing negative events. While illusory superiority can apply to assessments of one’s own abilities, this type of thinking can also extend to groups or organizations of which one is a member (Betts, Croom, and Lu 2011). Those authors caution that engaging in this type of thinking can pose a barrier to honest assessment and improvement, at either the individual or the organizational level.

Hoorens (1993) identifies several motivations for illusory superiority, including the need to maintain self-esteem and to reassure oneself that one will be shielded from undesirable events. Presentation of oneself in a positive light, or “impression management,” also may be motivated by social desirability bias when responding to survey questions (Hoorens 1995, 814). Mutz (1998) cites personal motivations and cognitive errors as possible explanations for why individuals tend to rate their personal circumstances more positively than the collective experience. People may be motivated to hold optimistic views of their situation in order to stave off fear or protect their egos. They may also be prone to misperceptions of collective experience if they tend to focus on “prototypes of high-risk individuals, which inevitably make the self look better in comparison” (Mutz 1998, 129).

Illusory superiority as a predictor of the Lake Wobegon effect

Scholars argue that the tendency to compare local schools favorably to schools in general may be explained in part by familiarity. Loveless (1997) contends that people typically assess social institutions in general according to harsher standards than they apply to institutions with which they are familiar. “The public’s relationship with the educational system writ large consists primarily of voting in elections and paying taxes. Local schools forge more intimate ties. Even nonparents may know friends, family, neighbors or colleagues at work who have children attending local schools” (154). Moe (2001) cites parental familiarity with public schools based on direct experience as a factor in parents’ more positive assessments of school performance compared to nonparents. Moe finds that parents tend to rate their local schools higher than nonparents do even in school districts that are “disadvantaged,” as defined by socioeconomic factors and poor test performance (75). In addition to direct experience, parents may give high ratings to their local schools out of psychological necessity. Peterson, Henderson, and West (2014) refer to this necessity as “buyer’s delight,” observing: “It takes hard-nosed realists to admit that they are sending their child to a failing school or that the community in which they live has low-quality schools” (48).

Whether individuals are equipped to make accurate assessments of the performance of public schools is a subject of much discussion in scholarly research. Jacobsen, Snyder, and Saultz (2014) argue that information asymmetries complicate citizens’ attempts to accurately evaluate the performance of schools in their communities. Most individuals have no direct contact with their local public schools. Even parents of children enrolled in the schools may have contacts that are limited to informal interactions with a small group of teachers on a regular basis (Jacobsen, Snyder, and Saultz 2014, 3).

Studies have generated mixed evidence regarding parents’ ability to accurately assess local school performance. Chingos, Henderson, and West (2012), analyzing national survey data and data from an oversample in Florida, found a significant and positive relationship between student performance on standardized tests and the public’s perception of local school quality nationally and in Florida. While the authors found a relationship, they did not test specifically for survey respondents’ knowledge of student or aggregate school performance on standardized tests. Howell (2006) tested for such knowledge and found significant gaps. A survey of 1,000 parents of students in public schools in Massachusetts showed that while 25% of parents who responded to the survey had children in schools that were rated as “underperforming” based on student scores on standardized tests, only 12% of parents correctly identified that their children attended underperforming schools (Howell 2006, 152). At the same time, more than half of parents whose children attended higher performing schools correctly characterized the schools their children were attending. Howell speculated that part of the gap may have been due to social desirability bias in responding to the survey. “After all, who wants to admit, especially to a stranger on the telephone, that they send their child to a public school that is underperforming?” (153). But Howell said some of the responsibility may also lie with the schools and disparities in the amount and effectiveness of communication between schools and parents. The study found that parents of students in higher performing schools were more likely to accurately identify the size of the school and the name of the principal, suggesting that those schools did a better job of communicating with parents.

Motivation may play a role in whether parents have accurate knowledge of school performance. While one might assume higher socioeconomic characteristics would predict accurate knowledge of school performance, Teske, Fitzpatrick, and Kaplan (2006) found that in areas where school choice policies had become well established, perceived knowledge of school performance did not vary dramatically between low- and moderate-income parents. Only parents with extremely low annual household incomes, below $10,000, felt like they were not well informed about their local schools. Bickers and Stein (1998), however, found that accurate perception of local school test scores was a function of parents’ levels of education, and how recently parents had moved to the community. Favero and Meier (2013) found that parent and teacher assessments of the performance of New York City’s public schools tended to be consistent with objective measures of school quality, including standardized test scores and crime statistics. The authors attributed this consistency to the motivation arising out of the school choice program in the New York City schools, in which parents relied on quality measures to make decisions as to where to enroll their children.

Whether this motivation, informed by factors such as standardized test scores, is widespread or unique to specific school systems also is a subject of debate. The Carnegie Foundation (1992) found that non-academic factors often played a key role in parents’ decisions. More recent studies confirm this tendency. Harris and Larsen (2015), in a study of public schools in New Orleans, concluded that while academic factors matter somewhat for elementary school parents, “practical considerations such as distance and availability of extended school days seem especially important,” and for parents of high school students, “extracurricular activities such as band or football seem especially important” (3). Not all studies agree with these conclusions. Kelly and Scafidi (2013) surveyed Georgians who were recipients of scholarships to attend a private school and found that “better student discipline,” “better learning environment,” and “smaller class sizes” were the most common bases for deciding where to enroll (1). Pride (2002) argues that for the community overall, it is not test scores that influence parents’ assessments. Instead, Pride contends, “critical events,” such as referenda on education policies, intervention by politicians, and even violent events such as school shootings, do more to shape the public’s assessment of schools than standardized test scores.

Does having accurate knowledge of student performance on standardized tests matter when parents choose a community based on the quality of its schools? Bickers and Stein (1998) argue that even with imperfect knowledge homeowners can rely on “informational heuristics,” or psychological shortcuts, to correctly choose communities that provide the services that they deem desirable given what they can afford. For example, parents in search of high-quality schools might gravitate toward affluent communities based on reputation or property values.

While accurate knowledge of measures of school performance may or may not affect decisions about moving to a specific community, when individuals are exposed to actual performance indicators, they might revise their views of their local schools. Barrows et al. (2016) found that average evaluations of local schools declined after survey respondents were exposed to information about their schools’ performance relative to other schools in their state, the nation, and other developed countries. In sum, accurate information about school performance has the potential to reduce the incidence of illusory superiority when it comes to assessing the quality of local public schools. This suggests that these assessments have a factual underpinning, and that perceptions of facts shape views about school performance. Misperceiving those facts, then, could produce illusory superiority and contribute to the Lake Wobegon effect when individuals compare their local public schools to public schools in general.

Factors that may shape views of school performance

Previous research suggests that parents of children in public schools may be more likely than others to overstate school performance, either due to a desire to cast their children’s schools in the most positive light to survey researchers, or to reassure themselves that their children attend good schools, or a mix of both motivations. Moe (2001) argues that direct contact with the schools may contribute to positive assessments, so the longer that parents have been involved with the schools, the more likely they may be to think highly of the schools. Thus, it is possible that parents of students in public high schools might have more positive views than parents of students in middle or elementary schools.

In addition, individuals who own homes in a community may err on the positive side in evaluating their local schools because of the link between school performance and property values. Figlio and Lucas (2004) found that public school “report cards” issued by the state of Florida, with grades ranging from A to F, had a significant effect on home values, although the effect tapered off over time (see also McCabe 2013 for an extensive summary of research linking school performance to property values).

Length of time living in the community also might contribute to the overstatement of school quality due to ingrained community pride. It is plausible that individuals who are unhappy with the community will vote with their feet if they have the resources to move elsewhere. Those who stay behind may be satisfied with their community, and may develop feelings of pride out of a genuine appreciation for the community or as a rationalization for living there. Previous research suggests there are significant differences in attitudes toward public schools based on longevity of living in a community. Berkman and Plutzer (2004), in a study of more than 9,000 school districts across 40 states, found a positive association between per pupil public school expenditures and concentrations of older residents ages 60 and up who had lived in a community for six years or more. The relationship was negative for 60-plus residents who had lived in the community for five years or fewer. The findings were “consistent with the idea that loyalty – an emotional bond between residents and their community’s institutions – competes with and often trumps instrumental self-interest” (Berkman and Plutzer 2004, 1190). It may be that the longer one lives in a community, the greater the possibility that one’s emotional attachment to institutions might get in the way of forming an accurate assessment of local schools. Research has found that more recent arrivals tend to have a more accurate sense of school performance, perhaps as a result of gathering information before deciding to move. Bickers and Stein (1998) found that newcomers were more likely than long-term residents to correctly identify whether standardized test scores for the community were above, below, or the same as the county average.

The potential effects of other demographic characteristics are less clear cut. Previous research has found that African-Americans are less likely than the public as a whole to assign high ratings to local public schools, while Hispanics are more likely than the general public to do so (Peterson, Henderson, and West 2014; see also Moe 2001). In terms of education, Moe (2001) notes that one might reasonably expect individuals with high levels of education to have self-selected into more advantaged school districts, and therefore be more likely to express satisfaction with those schools. But Moe’s research found the opposite, that the least educated were the most likely to express satisfaction with public schools. Moe speculated that people with lower levels of education might also have lower expectations for their public schools, and consequently may be more likely to be satisfied with their schools.

Moe (2001) also discusses a concept called “public school ideology” that might play a role in shaping views on local public schools. The ideology centers on the idea that public schools, with their open access and egalitarian principles, are a cornerstone of the community and a manifestation of local democracy. Consequently, according to the ideology, local public schools deserve the community’s commitment and support (86–7). Moe speculates that public school ideology may be higher among individuals from low socioeconomic backgrounds, and that in the political sphere Democrats would be more likely to hold these views than Republicans or independent voters. Given the intersection between party identification, race and ethnicity, and to some extent socioeconomic status, predicting the effects of these characteristics on school ratings is difficult. We include them as controls in our models, but without a clear predictive sense of the direction of their influence.

Our focus instead is on the effects of having a child or children in the public schools, owning a home, and long-term residency on perceived school performance, and the role that those factual perceptions may play in comparing the quality of local schools to that of schools overall. These factors give rise to the following hypotheses:

Hypothesis 1: Parents of children in public schools are more likely than others to rate their local schools more highly than schools in general.

Hypothesis 2: Given that direct exposure to public schools may shape parents’ positive views, the strongest effects may develop over time, with parents of high school students being more likely to give positive ratings compared to parents of students in middle and elementary schools.

Hypothesis 3: Homeowners are more likely than others to rate their local schools more highly than schools overall.

Hypothesis 4: As length of residency in a community increases, so too does the tendency to rate local schools more highly than schools overall.

Hypothesis 5: Members of each of these groups also are more likely to over-estimate their local schools’ performance on standardized tests compared to assessments by the general population (illusory superiority).

Hypothesis 6: Illusory superiority, in the form of overestimation of local schools’ performance on standardized tests, positively predicts rating one’s local schools higher than schools overall.

Overstating local school performance on standardized tests also may shape views on school-related policies. It is possible that an inflated sense of school performance might affect how citizens view the need for additional resources for schools. If individuals view their local schools as performing better than schools overall in the state, they might conclude that the local schools have sufficient resources and that additional funds are not needed. On the other hand, one could argue that positive views of public schools might prompt support for additional resources to maintain and further enhance the quality of the schools. For example, Jacobsen, Snyder, and Saultz (2014) argue that positive performance ratings of a public agency, such as a public-school system, confer a sense of legitimacy for the agency that can lead to support for more funding through taxation. We hypothesize that:

Hypothesis 7: Those who overstate local school performance on standardized tests are more likely than others to support raising taxes to provide additional money for local schools.

Methods and data

We test these hypotheses using data from a random-digit-dial telephone survey of 401 adults in Massachusetts that we conducted through a university-based survey research center (February 24 to March 2, 2014).Footnote 1 We build on previous research by moving beyond simple comparisons of local schools to schools in general: we compare evaluations of local schools to actual school performance on standardized tests, and then compare that performance to average scores across the state of Massachusetts.

We asked survey respondents to compare the performance of their local public schools to that of schools across the state using the following measures:

1. Using a grading scale of A, B, C, D, or failing, what grade would you give to the quality of education that the public schools in your local school district are providing?

2. Using a grading scale of A, B, C, D, or failing, what grade would you give to the quality of education that public schools across Massachusetts are providing overall?

3. How confident are you that the public schools in your local school district are providing a high-quality education to students? Are you very confident, somewhat confident, not very confident, not at all confident?

4. How confident are you that public schools across Massachusetts are providing a high-quality education to students? Are you very confident, somewhat confident, not very confident, not at all confident?

5. Using a grading scale of A, B, C, D, or failing, what grade would you give to the quality of teaching that occurs in your local school district?

6. Using a grading scale of A, B, C, D, or failing, what grade would you give to the quality of teaching that occurs in public schools across Massachusetts overall?

Interviewers were trained to also accept “don’t know” and “refused” as voluntary responses for each question. We interspersed other questions between some of the measures to reduce question order effects. We also rotated questions about local schools with the matching statewide assessments to reduce the possibility of one question consistently anchoring responses to the next question (the full text of the questionnaire is available from the authors on request).

The pairs of questions in which respondents assigned letter grades to their local schools and schools across the state, expressed confidence regarding the local schools and schools statewide, and assigned letter grades to the quality of teaching locally and statewide, provided ample evidence of the Lake Wobegon effect (Table 1).

Table 1. Assessments of local schools and schools statewide in Massachusetts

Source: Massachusetts Statewide Survey, February 24 to March 2, 2014.

Note. N = 401. Figures are column percentages, and may not sum to 100% due to rounding.

When asked to assign letter grades for the overall quality of schools in their communities and statewide, respondents were more likely to give higher grades to local schools. About two-thirds of the sample assigned an A or a B to their local schools, while 50% of the sample assigned grades of A or B to schools overall in the state. A similar disparity emerged when respondents rated the quality of teaching, with 70% of respondents giving an A or a B to the teaching in their local schools, while 54% did so for teaching in schools statewide. In terms of confidence that public schools are providing a high-quality education, 31% of respondents said they were very confident in their local schools, while only 12% said they were very confident in schools across the state.

The aggregate percentages give some sense of the disparate views of survey respondents when they compare their local schools to schools statewide. But the differences reflected in the aggregate numbers might mask interesting patterns in the data. Do survey respondents as a group rate local schools higher than schools in general, or do some give equal ratings to local schools and schools statewide, while others rate their local schools lower than schools overall? Cross-tabulating the local and statewide assessments yielded the following patterns (Table 2).

Table 2. Patterns of ratings of local schools and schools statewide

Source: Massachusetts Statewide Survey, February 24 to March 2, 2014.

Note. N = 401. Figures are row percentages.

While large percentages of survey respondents gave higher marks to their local schools than to schools overall across the state, in no instance did this reflect the view of a majority of respondents. Thirty-six percent assigned a higher letter grade to their local schools compared to schools overall, while 35% gave the same grade, 15% gave lower grades to local schools compared to schools statewide, and 14% responded that they did not know or declined to give a response for one or both of the ratings. When asked about levels of confidence in schools locally and statewide, about 30% of the sample had higher confidence in local schools than in schools statewide, while a majority – 53% – had equal levels of confidence in schools locally and across the state. One-third of respondents gave higher letter grades to their local schools for the quality of teaching compared to schools statewide, while 46% assigned the same grade, and 9% assigned a lower grade to teaching in local schools compared to the state overall.

These findings give some context to the conventional wisdom that individuals tend to rate their local schools higher than schools in general. While this may be true, the results indicate that it is the case for only about one-third of the respondents to our survey. The larger question, and in some ways the more interesting one, is the extent to which misperception of school performance (the basis for illusory superiority) drives these ratings. We asked the following survey question to measure respondents’ perceptions of their local schools’ performance on a standardized test that the state administers annually in public schools:

“Next, the state measures the progress of students in public schools using a test known as MCAS. The test evaluates students in language arts, math, and science. Based on what you have read or heard, in general, do the schools in your local school district perform better on the MCAS tests than the state overall, worse than the state overall, or about the same as the state overall?”

The Massachusetts Comprehensive Assessment System (MCAS) tests students in public and charter schools in grades 3–8 and grade 10 in math, English language arts, and science and technology/engineering, under the provisions of the Massachusetts Education Reform Law of 1993. Following the adoption of the national No Child Left Behind law in 2001, Massachusetts began using MCAS results to assess whether schools were making adequate yearly progress toward meeting objectives laid out in No Child Left Behind. In addition, students in 10th grade must earn passing grades on the math, English language arts, and science and technology/engineering MCAS exams in order to partially meet requirements for receiving a high school diploma in Massachusetts (Massachusetts Department of Elementary and Secondary Education n.d.a). The tests are administered in the winter and spring, with individual results reported to parents over the summer and aggregate results released to the news media and the public in the fall. The state reports aggregated results at the state and school system level, as well as for individual schools and grades within each school, with percentages of student scores falling into one of four categories for each test: advanced, proficient, needs improvement, and warning/failing.

In response to the survey question on their local schools’ performance on the MCAS exams, 48% said their local schools performed better than the state overall on the exams, 8% said worse, 33% said about the same as the state overall, and 11% said they did not know or declined to answer the question.

In the demographic section of the survey, we asked respondents for their zip codes so that we could correctly place them in the public school system for their community.Footnote 2 Using that information, we examined the 2013 MCAS scores for the public schools in each respondent’s community to assess the accuracy of respondents’ perceptions of local school performance on the standardized tests (Boston.com 2013). This validation process was a rough approximation of schools’ performance given that the overall scores for the school systems are aggregations of scores from different grade levels. In 2013, 69% of students across the state scored at the advanced or proficient level in English language arts, 61% in math, and 53% in science and technology/engineering (Massachusetts Department of Elementary and Secondary Education n.d.b). Table 3 shows the distribution of scores for advanced or proficient in the 207 communities in which the survey respondents resided.

Table 3. Comparison of MCAS scores in local schools and statewide in Massachusetts

Source: Massachusetts Department of Elementary and Secondary Education (n.d.a, n.d.b).

The mean scores for all of the respondents’ communities were within 1 percentage point of each of the statewide scores, with considerable variation in terms of maximum and minimum scores for the communities.
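To make the matching step concrete, a minimal sketch of the zip-code merge follows. The file and column names are hypothetical, since the text does not specify the data layout.

```python
import pandas as pd

# Hypothetical file and column names; the paper describes matching survey
# respondents to their community's public schools via self-reported zip codes.
survey = pd.read_csv("survey_responses.csv")        # one row per respondent, incl. a 'zip' field
crosswalk = pd.read_csv("ma_zip_to_community.csv")  # maps zip codes to cities/towns
mcas = pd.read_csv("mcas_2013_by_community.csv")    # percent advanced/proficient per exam

merged = (survey
          .merge(crosswalk, on="zip", how="left")
          .merge(mcas, on="community", how="left"))
```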

Matching community-level scores to survey respondents’ perceptions of the scores, however, revealed the extent to which respondents tended to overstate how their local schools performed. Using the survey responses and the local and state-level MCAS scores, we divided respondents into the following groups: those who had overstated their local schools’ performance compared to the state overall, those who had understated their local schools’ performance, and those who had been relatively accurate in their assessment of their local schools’ performance compared to the state overall. We also allowed for respondents who said they did not know or who declined to answer the question.

We calculated the difference between the scores on each exam in each respondent’s local public school system with the statewide score on that respective exam. We summed the differences across the three exams for each community, allowing for the possibility that the local schools might not have had consistent outcomes relative to the state overall across the three exams. For example, schools in a given community might have outscored the state on one or two exams, then scored below the state on the remaining exam(s). We coded communities that scored a total of at least three points higher than the state (theoretically scoring at least one point higher than the state on each of the three tests) as having scored higher than the state overall, and we coded communities with a negative aggregate difference of three points or more as having scored lower than the state overall. Allowing for close results, we coded communities that came within two points in either direction of the total state score as having scored about the same as the state overall. Among our sample of respondents, 41% lived in communities that scored lower than the state overall, 5% lived in towns that scored about the same as the state overall, and 55% lived in communities that scored better than the state overall (the percentages sum to 101 due to rounding).
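A minimal sketch of this coding rule, assuming the merged dataframe from the sketch above with one column per exam holding the community’s percentage of students scoring advanced or proficient (the column names are hypothetical):

```python
# 2013 statewide percentages of students scoring advanced or proficient,
# as reported in the text.
STATE = {"ela": 69, "math": 61, "sci": 53}

def classify_community(row):
    # Sum the local-minus-state differences across the three exams.
    total_diff = sum(row[exam] - STATE[exam] for exam in STATE)
    if total_diff >= 3:
        return "better"  # at least three points above the state in total
    if total_diff <= -3:
        return "worse"   # at least three points below the state in total
    return "same"        # within two points of the state total either way

merged["actual"] = merged.apply(classify_community, axis=1)
```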

We compared these results to respondents’ assessments of how their communities performed, and coded whether respondents rated their school performance lower than the actual results, about the same as the actual results, or higher than the actual results. In estimating the percentage of respondents who had overstated their local schools’ performance, we focused on survey respondents who fell into one of two categories. The first category encompassed respondents who said their local schools scored better than the state overall in percentages of students who scored advanced or proficient, when in fact their schools scored about the same or worse than the state overall. The second category consisted of respondents who said their schools scored about the same as the state overall, when in fact their local schools scored worse than the state overall. Respondents who had understated their local schools’ performance were those who had said their schools scored below the statewide numbers for schools overall, when their schools’ scores actually met or surpassed the statewide mark, or respondents who said their schools had scored about the same as the state overall when in fact their schools had exceeded the statewide scores.

We found that 14% understated their schools’ performance, 45% provided an accurate assessment, and 30% overstated their schools’ performance. Another 11% said they did not know how their local schools performed on the tests. Based on these results, we created a dichotomous measure of whether the respondent overstated school performance (1 = yes, 0 = no) as an indicator of illusory superiority. The variable reflects the extent to which individuals believe that their schools performed better than the state overall on the MCAS tests when the schools’ scores were lower than or level with the state’s performance, or that their schools were comparable to the statewide performance when the schools actually fell short of the statewide mark. Validating this phenomenon using the test scores for the local public schools helps to quantify the illusory part of illusory superiority in a way that is rare for scholarly treatments of the subject.
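Under this scheme, the overstatement indicator reduces to an ordinal comparison of the perceived and validated categories. A sketch, again using hypothetical column names (the `perceived` column holds the survey response: "better", "same", "worse", or a "don't know"/refusal code):

```python
# Rank the three ordered categories; "don't know"/refusals fall outside RANK.
RANK = {"worse": 0, "same": 1, "better": 2}

def overstated(perceived, actual):
    # 1 if the respondent placed local performance above the validated result
    # (e.g., said "better" when schools scored the same or worse, or "same"
    # when they scored worse); 0 otherwise, including "don't know" responses.
    if perceived not in RANK or actual not in RANK:
        return 0
    return int(RANK[perceived] > RANK[actual])

merged["overstate"] = [
    overstated(p, a) for p, a in zip(merged["perceived"], merged["actual"])
]
```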

Multivariate models

Having illustrated indicators of illusory superiority in the sample, we now turn to multivariate models to test our hypotheses. In testing the models, we use a dependent variable that represents the traditional Lake Wobegon effect, and a dependent variable that quantifies the disconnection between assessments of schools and their actual performance on MCAS tests.

The traditional measure of the Lake Wobegon effect consists of a scale that we constructed from the three pairs of variables in which survey respondents compared local schools to schools statewide: the letter grades that respondents assigned to local schools and schools around the state; the levels of confidence respondents expressed in the quality of education that occurs in their local schools compared to schools statewide; and the letter grades respondents gave to the quality of teaching in local schools and in schools across the state. For each pair of measures, we assigned a value of one if the respondent rated local schools higher than schools in the state overall, and a value of zero otherwise. We summed the scores for the three pairs of measures, creating a four-category variable that reflected whether a respondent rated local schools higher than schools in the state overall for zero, one, two, or all three pairs of measures. Sixteen percent of respondents rated local schools higher than schools in the state overall for all three sets of measures, while 14% did so for two of the sets of measures, 24% did so for one of the pairs of measures, and 46% did not rate local schools higher than schools in the state overall on any of the sets of measures.
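A sketch of the scale construction, assuming each rating has been recoded to a common numeric scale on which higher values mean a more favorable rating (the variable names and codings are hypothetical):

```python
# Each tuple pairs a local rating with its statewide counterpart.
pairs = [
    ("grade_local", "grade_state"),            # letter grades, e.g., A=4 ... failing=0
    ("confidence_local", "confidence_state"),  # e.g., very=3 ... not at all=0
    ("teaching_local", "teaching_state"),      # letter grades for teaching quality
]

# Count the dimensions on which local schools were rated strictly higher (0-3).
merged["wobegon"] = sum(
    (merged[local] > merged[state]).astype(int) for local, state in pairs
)
```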

The other dependent variable that we used was based on the validated assessments of local school performance on the MCAS tests compared to the statewide performance, and captured the extent to which survey respondents overstated their local school performance on the standardized tests.

To test our hypotheses, we included in the models a measure of whether the respondent said s/he had one or more children enrolled in local public schools (23% of respondents). We also controlled for whether the respondent owned a home (77% of respondents). To capture longevity of residence, we created a four-category variable for years of residence: five years or fewer (18% of the sample), 6–15 years (27%), 16–25 years (18%), and more than 25 years (38%), and included the measures as dummy variables with the lowest category omitted as the reference category.Footnote 3 We chose to treat this variable as categorical instead of continuous to try to distinguish between the views of newcomers and those of long-term residents. Newcomers may have gathered specific information prior to moving and experiencing life in a community firsthand. Prior expectations and lived experience might vary, leading to changes over time, but not necessarily in a linear fashion. We also included a three-category measure of education: high school or less, some college, and college graduate, with dummy variables for some college and college graduate.Footnote 4 The model also controls for party identification (Democrat and Republican, with independents, third party voters, and nonvoters as the reference category), gender, and age, and dummy variables for respondents who identified as Black, Hispanic, or another race other than white. Given that we are measuring attitudes toward local schools, which are administered at the city and town level in Massachusetts, we estimated the models with robust standard errors clustered for the 207 cities and towns in which the respondents lived (Table 4).

Table 4. Predictors of rating local schools higher than schools in the state overall

Note. Dependent variable is a four-category measure of whether a survey respondent rated local schools higher than schools overall in the state on zero, one, two, or three dimensions. Coefficients are maximum likelihood estimates using ordered logistic regression. Standard errors are in parentheses. Models were estimated with robust standard errors clustered at the town or city level (207 clusters). N = 401. χ² = 44.37, 14 df, p < 0.01 (Model 1); χ² = 46.07, 15 df, p < 0.01 (Model 2). Pseudo-R² = 0.04 (both models).

* p < 0.05.

** p < 0.01 (two-tailed tests).

† p < 0.10.
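A sketch of an ordered logistic specification along these lines using statsmodels; the predictor names are hypothetical stand-ins for the dummies described above, and the cluster-robust adjustment reported in Table 4 is noted rather than implemented, since OrderedModel does not expose clustering directly.

```python
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical 0/1 dummies and controls mirroring the text's description.
predictors = ["parent", "homeowner", "res_6_15", "res_16_25", "res_25plus",
              "some_college", "college_grad", "democrat", "republican",
              "female", "age", "black", "hispanic", "other_race"]

# OrderedModel estimates the cut points itself, so no constant is added.
model = OrderedModel(merged["wobegon"], merged[predictors], distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())
# Note: the paper clusters standard errors on the 207 cities/towns; doing so
# here would require a separate sandwich-estimator step not shown in this sketch.
```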

The model featuring the dependent variable scale constructed from three pairs of measures of school quality at the local and state levels provided some support for Hypothesis 1. Parents with one or more children enrolled in local public schools were more likely than respondents with no children in local public schools to rate local schools higher than schools statewide. The results also offer some evidence, in support of Hypothesis 2, that positive feelings about public schools increase over time, with parents of students in high schools being more likely to favorably assess their local schools than parents with children in middle or elementary schools. Parents of students in public high schools were more likely than others to offer a positive assessment, but the effect was significant only at the level of p < 0.10 (p = 0.08). Parents of students in middle and elementary schools were not more likely than others in the sample to rate their local schools more positively than schools in the state overall.

The models do not provide evidence in support of Hypothesis 3, that homeowners would be more likely than others to assess local schools more positively than schools in the state overall. But there is evidence that, as longevity of residence increases, so too does the likelihood of rating local schools more positively than schools overall in the state (Hypothesis 4). With residential longevity of five years or fewer as the reference category, residents who have lived in communities for 16–25 years, or for more than 25 years, were significantly more likely to rate their local schools higher than schools in the state overall.

Education and race also had significant effects on whether survey respondents rated local schools higher than schools statewide. Using dummy variables for levels of education, we found that college graduates were more likely to rate local schools more positively than schools overall compared to respondents in the omitted category, individuals with a high school diploma or less. While Moe (2001) found that individuals with less education were more likely to express satisfaction with their local public schools, the opposite result appears here. It may be that as educational attainment increases, individuals self-select into more affluent communities with greater funding for public schools, and form more positive views of those schools. We also found that Black/Non-Hispanic respondents were more likely to rate their local schools higher than schools across the state compared to White/Non-Hispanic respondents. The controls for Hispanic respondents and those of other nonwhite races were not significant.

It is difficult to quantify the effects of the significant independent variables in an ordered logistic regression model, so we calculated the predicted probabilities for rating local schools higher than schools in the state overall for each of the significant independent variables. Using the predicted probabilities, we calculated the average marginal effects of the variables of interest (Table 5).

Table 5. Predicted probabilities of rating local schools higher than schools in the state overall on up to three measures (Models 1 and 2)

Parents who have one or more children in the local public schools had a predicted probability of 22% of rating their schools higher on all three dimensions, compared to 14% for those without children in the local public schools, for an average marginal effect of 8 percentage points. The average marginal effect for rating local schools higher on two dimensions was 4 percentage points higher for parents compared to non-parents. Residential longevity had similar effects, with individuals who had lived in their communities for 16–25 years being 8 percentage points more likely to rate their local schools higher on all three dimensions compared to those who had lived in their communities for five years or fewer. Among those who had lived in their communities for more than 25 years, the average marginal effect was 9 percentage points for rating local schools higher across the three sets of measures. The average marginal effect for having a college degree compared to a high school diploma or less was 7 percentage points. Race provided the largest marginal effect. Blacks were 40 percentage points more likely to rate their local schools higher than schools in the state overall on all three dimensions compared to White/Non-Hispanics.
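The predicted-probability comparisons behind these marginal effects can be reproduced mechanically: set a binary predictor to one for every respondent, then to zero, predict the category probabilities each time, and average the difference. A sketch, continuing the hypothetical model above:

```python
import numpy as np

def ame_top_category(result, X, var):
    # Average marginal effect of a 0/1 predictor on the probability of the
    # highest category (rating local schools higher on all three dimensions).
    X1, X0 = X.copy(), X.copy()
    X1[var] = 1
    X0[var] = 0
    p1 = result.model.predict(result.params, exog=X1)[:, -1]  # P(wobegon == 3)
    p0 = result.model.predict(result.params, exog=X0)[:, -1]
    return np.mean(p1 - p0)

# E.g., the parent effect on rating local schools higher on all three measures.
print(ame_top_category(res, merged[predictors], "parent"))
```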

In terms of overstating local school performance on standardized tests, our models generated mixed results (Table 6).

Table 6. Predictors of overstating local school performance on standardized tests

Note. Dependent variable is a dichotomous measure of whether respondent overstated local school performance on standardized tests. Coefficients are maximum likelihood estimates using logistic regression. Standard errors are in parentheses. Models were estimated with robust standard errors clustered at the town or city level (207 clusters). N = 401. χ² = 37.69, 14 df, p < 0.01 (Model 3); χ² = 37.71, 15 df, p < 0.01 (Model 4). Pseudo-R² = 0.08 (both models).

* p < 0.05.

** p < 0.01 (two-tailed tests).

† p < 0.10.

Of the three groups of key stakeholders, only long-term residents were more likely to overstate how their local schools performed on standardized tests. Parents of students in public schools and homeowners were no more likely than others to overstate local school performance compared to the state overall. The effects for long-term residents varied by tenure, with those living in the community for more than 25 years the most likely to overstate local school performance compared to residents who lived in the community for five years or fewer, providing further support for Hypothesis 4. Residents of the community for 6–15 years also were more likely to overstate local school performance in Models 3 and 4, as were residents of 16–25 years, but the effects were significant only at p < 0.10.

As was the case with rating local schools in comparison to schools in the state overall, education was a significant predictor here, but in the opposite direction. College graduates were less likely to overstate local school test performance compared to those with a high school diploma or less. In contrast, in Models 1 and 2, college graduates were more likely to rate their local schools higher than schools in the state overall. Hispanic respondents also were more likely than White/Non-Hispanic respondents to overstate local school test performance.

The average marginal effects provide additional details (Table 7).

Table 7. Predicted probability of overstating local school performance on standardized tests

Residents who had lived in their communities for more than 25 years were 22 percentage points more likely to overstate local school performance than residents who had lived in the community for five years or fewer. This may stem from a growing sense of connection to the community and pride in the community’s schools. Hispanic respondents also were 27 percentage points more likely to overstate local school performance. While there is little in the previous research to suggest why this might be the case, Howell (2006) found that Hispanic parents of children in public schools were less likely than white parents to accurately state whether the schools their children attended were making “adequate yearly progress” (154). Whether the same connection exists in our data is difficult to test, given the small number of Hispanic parents of children in public schools in the data (4 out of the 11 Hispanic respondents in the survey said that one or more of their children attended public schools).

The effect by education provides an interesting puzzle. College graduates were 18 percentage points less likely than those with a high school diploma or less to overstate local school performance. Yet college graduates were seven points more likely than those with a high school diploma or less to rate local schools higher than schools in the state overall in the three measures of educational quality (see Table 5). It may be that college graduates are more likely to view their local schools as being of higher quality than schools overall in the state, but may also have a more accurate view of school performance on standardized tests. When we re-ran these models with a dependent variable measuring whether respondents accurately described local school test performance, we found that education was a significant and positive predictor. Respondents with a college degree were more likely to accurately describe school performance compared to respondents with a high school diploma or less, with an average marginal effect of 26 percentage points (model results can be found in Table A2 in the Appendix).

Our results show that about 30% of survey respondents overstate local school performance on standardized tests compared to schools in the state overall. Length of time spent living in the community and Hispanic ethnicity are positive predictors of overstatement, and educational attainment is a negative predictor.

Is overstatement of school performance related to rating one’s local public schools higher than schools in the state overall? In other words, does misperception of local school performance positively predict a belief in the superiority of one’s local schools? We added perceptions of local school performance to our models predicting ratings of local schools relative to schools overall in the state (Table 8).

Table 8. Perceptions of local test performance as predictors of rating local schools higher than schools in the state overall

Note. Dependent variable is a four-category measure of whether a survey respondent rated local schools higher than schools overall in the state on zero, one, two, or three dimensions. Coefficients are maximum likelihood estimates using ordered logistic regression. Standard errors are in parentheses. Models were estimated with robust standard errors clustered at the town or city level (207 clusters). N = 401. χ² = 117.78, 19 df, p < 0.01 (Model 5); χ² = 120.08, 20 df, p < 0.01 (Model 6). Pseudo-R² = 0.12 (both models).

* p < 0.05.

** p < 0.01 (two-tailed tests).

† p < 0.10.

The dependent variable in Models 5 and 6 is the same as in Models 1 and 2 in Table 4, a four-category measure in which respondents could rate local schools higher than schools overall on zero, one, two, or three dimensions. We added to the models dummy variables for whether respondents overstated local school performance on standardized tests, understated local school performance, or offered an accurate assessment. Further, we broke out accurate assessments by whether respondents correctly stated that their local public schools scored worse, about the same, or better than schools in the state overall. Respondents who said they did not know served as the omitted reference category. The results did not provide evidence of a link between overstatement of local school performance and rating local schools higher than schools overall in the state (Hypothesis 6). Instead, two categories of accurate assessment of local public schools had significant effects, but in opposite and interesting directions. Respondents who accurately described their local schools’ performance as worse than schools in the state overall were less likely than people who responded “don’t know” to rate their local schools higher than schools across the state. Those who correctly described their local schools as scoring higher than the state overall were more likely to rate their local schools higher than schools in the state overall. When controlling for perceptions of school performance, parents of students in public schools were not significantly more likely than others to rate their schools higher than schools in the state overall.
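A sketch of the dummy construction for Models 5 and 6, continuing the hypothetical coding above, where the perceived and validated categories are compared to yield five mutually exclusive indicators with “don’t know” as the reference:

```python
import pandas as pd

def assessment_category(perceived, actual):
    # Five mutually exclusive categories; "don't know"/refused is the reference.
    if perceived not in RANK or actual not in RANK:
        return "dont_know"
    if RANK[perceived] > RANK[actual]:
        return "overstated"
    if RANK[perceived] < RANK[actual]:
        return "understated"
    return f"accurate_{actual}"  # accurate_worse, accurate_same, accurate_better

merged["assessment"] = [
    assessment_category(p, a) for p, a in zip(merged["perceived"], merged["actual"])
]
# Dummy variables, dropping the "don't know" reference category.
assessment_dummies = pd.get_dummies(merged["assessment"]).drop(columns="dont_know")
```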

These results, then, do not tie illusory superiority to overstatement of school performance: overstating local test performance had no significant effect on rating local schools more highly than schools in the state overall. Instead, accurate perceptions of local school performance are related to rating local schools differently than schools overall (Table 9).

Table 9. Predicted probabilities of rating local schools higher than schools in the state overall on up to three measures using assessments of school performance (Models 5 and 6)

Among respondents who correctly reported that their local schools scored higher than the state overall, the predicted probability of rating local schools higher than all schools in the state on all three measures was 0.32, compared to 0.07 for all others, for an average marginal effect of 25 percentage points. Among those who correctly stated that their local schools scored worse than schools in the state overall, the probability of rating local schools higher than the state overall on none of the three dimensions was 0.78, compared to 0.44 for all others, an average marginal effect of 34 percentage points. The results indicate that accurate perception of school performance is related to comparative assessments of local schools and schools across the state.
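Average marginal effects of this kind can be approximated from the fitted ordered logit by toggling a perception dummy on and off for every respondent and averaging the change in predicted probabilities. The sketch below illustrates that standard discrete-change calculation, again with hypothetical variable names; it is not necessarily the exact contrast reported in Table 9.

```python
# Discrete-change average marginal effects from the ordered logit above.
# Perception dummies are mutually exclusive, so the other four are set
# to zero when one is switched on; the baseline is the omitted
# "don't know" category.
import numpy as np

PERCEPTION = ["overstate", "understate",
              "accurate_worse", "accurate_same", "accurate_better"]

def ame_for_dummy(res, X, dummy):
    X0 = X.copy()
    X0[PERCEPTION] = 0                 # everyone at the "don't know" baseline
    X1 = X0.copy()
    X1[dummy] = 1                      # everyone in the category of interest
    p0 = np.asarray(res.predict(X0))   # (n, 4) probabilities for 0-3 dimensions
    p1 = np.asarray(res.predict(X1))
    return (p1 - p0).mean(axis=0)      # AME for each outcome category

# e.g., accurately reporting that local schools scored higher than the state
print(ame_for_dummy(res, X, "accurate_better"))
```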

While overstating school performance is not related to rating local schools higher, there is still the question of whether an overly rosy view of local school performance shapes policy preferences, particularly when it comes to additional funding for local schools. We asked survey respondents, “Would you be willing to pay higher taxes to improve the quality of public schools in your local school district?” Sixty-three percent said yes, 32% said no, and 5% said they did not know. We predict that respondents who overstate local school performance on standardized tests are more likely to view a tax increase as further investment in strongly performing schools, and therefore more willing to pay higher taxes to improve the quality of their schools (Hypothesis 7).

We tested the hypothesis with willingness to pay higher taxes as the dependent variable (1 = yes, 0 = no). We excluded the 19 respondents who said they did not know or declined to answer the question, reducing the sample to 382 respondents. We used overstatement, understatement, and accurate descriptions of school performance as predictors, along with other variables used in the earlier models: whether the respondent has a child or children in the public schools, years of residence, whether the respondent is a homeowner, party identification, gender, age, education, and measures for Hispanic ethnicity and members of other nonwhite races. We had to omit the control variable for Black/Non-Hispanic respondents because all eight of those respondents indicated they would be willing to pay higher taxes to improve their local schools, meaning the variable would have perfectly predicted the outcome (Table 10).
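A parallel sketch of this binary specification, a logit with robust standard errors clustered by municipality, might look as follows. As before, the column names (pay_taxes, town_id, and the rest) are hypothetical placeholders rather than the authors’ actual variable names.

```python
# A minimal sketch of the Model 7/8 specification (not the authors' code).
import statsmodels.api as sm

sub = df[df["pay_taxes"].notna()]   # drop "don't know"/refused (19 cases)
y = sub["pay_taxes"]                # 1 = yes, 0 = no
X = sm.add_constant(sub[[
    "overstate", "understate",
    "accurate_worse", "accurate_same", "accurate_better",
    "public_school_parent", "years_resident", "homeowner",
    "democrat", "republican", "female", "age", "college",
    "hispanic", "other_nonwhite"]])

logit_res = sm.Logit(y, X).fit(
    cov_type="cluster",
    cov_kwds={"groups": sub["town_id"]},  # town/city clusters
    disp=False,
)
print(logit_res.summary())
```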

Table 10. Predictors of willingness “to pay higher taxes to improve the quality of public schools in your local school district”

Note. Dependent variable consists of responses to the following question: “Would you be willing to pay higher taxes to improve the quality of public schools in your local school district?” 1 = yes, 0 = no. Coefficients are maximum likelihood estimates using logistic regression. Standard errors are in parentheses. Models were estimated with robust standard errors clustered at the town or city level (203 clusters). N = 382 (omits 19 cases in which respondents answered “don’t know” or refused to answer the question). χ² = 41.82, 18 df, p < 0.01 (Model 7); χ² = 43.92, 19 df, p < 0.01 (Model 8). Pseudo-R² = 0.10 (Model 7) and 0.12 (Model 8).

* p < 0.05.

** p < 0.01 (two-tailed tests).

† p < 0.10.

Overstating local school performance was a positive predictor of willingness to pay taxes, as we hypothesized. Understating school performance also was a positive predictor, while accurately describing school performance had no effect across any of the three conditions of accuracy (correctly stating that local schools performed better, the same, or worse than schools in the state overall). Being the parent of a child or children in public schools, measured across all grade levels, had no significant effect. But parental status was a significant predictor of willingness to pay higher taxes for respondents with children in middle and/or elementary schools (Model 8); having a child in high school was not a significant predictor. Length of residence in the community was a negative predictor, while identifying as a Democrat had a positive effect. Republican Party identification had a negative effect, but the effect fell short of statistical significance (p = 0.10 in both models). Having a college degree exerted a positive influence, reaching significance at p = 0.047 in Model 7 but falling short in Model 8 (p = 0.06).

Calculating the average marginal effects of the significant variables showed that overstating and understating school performance, as well as parenthood, had the largest effects (Table 11).

Table 11. Predicted probability of willingness to pay higher taxes to improve the quality of local public schools

Overstating and understating local school performance on standardized tests each increased the probability of supporting higher taxes by an average marginal effect of about 21 percentage points. It might be the case that overstating school performance reflects a belief that the schools are doing a good job, and that additional tax dollars would be well spent to further improve the quality of local schools. The positive effect of understating school performance on support for raising taxes may rest on a different logic: that local schools are not doing well and could use the additional financial support.
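For the binary model, statsmodels can also produce average marginal effects directly from the logit sketch above; the call below treats binary regressors as discrete changes, which roughly corresponds to the quantities in Table 11. This is a sketch of one standard approach, not the authors’ exact computation.

```python
# Average marginal effects for the logit above; dummy=True computes
# discrete changes (0 -> 1) for binary regressors instead of derivatives.
ame = logit_res.get_margeff(at="overall", method="dydx", dummy=True)
print(ame.summary())
```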

Parental views on a hypothetical tax increase also provided interesting results. The probability of being willing to pay higher taxes to improve public schools was 0.85 for parents of children in public middle and elementary schools, compared to 0.64 for those with children in private schools or who did not have school-aged children, an average marginal effect of 21 percentage points. Why would parents of middle school and elementary school students be more willing to pay higher taxes, while the same was not true for parents of high school students? We speculate that parents of students who still have several years to go in the public schools may view higher taxes as an investment with a direct payoff for their children. Parents of students who may be graduating soon may be less willing to pay higher taxes given that their children might benefit from the additional resources for a shorter period of time.

Discussion

The results of our analyses reinforce some aspects of previous research while breaking new ground in ways that we had and had not anticipated. Massachusetts residents, in the aggregate, were more likely to view their local schools in a positive light than schools across the state, and these results were consistent across three different pairs of measures, in keeping with findings in numerous national surveys. In parsing the factors that might explain this phenomenon, we focused on self-interest and predicted that three groups of stakeholders would rate local schools higher than schools in general: parents of students in public schools (Hypothesis 1), and particularly parents of high school students (Hypothesis 2); homeowners (Hypothesis 3); and long-term residents of a community (Hypothesis 4). We found support for Hypotheses 1 and 4, and some evidence in support of Hypothesis 2, but only at a significance level of p < 0.10. Having a stake in the public schools does appear to be connected to a tendency to view local schools more favorably than schools overall, but not for all stakeholders.

We then turned to a consideration of whether the self-interest of stakeholders might shape their perceptions of local school performance on standardized tests compared to schools in general. We hypothesized that the three groups (parents, homeowners, and long-term residents) would be more likely than others to overstate local school performance (Hypothesis 5). Our survey found that 30% of respondents overstated local school performance, but only one group of stakeholders, long-term residents, was significantly more likely to fall into this category. This finding is consistent with previous research. Given that school performance can be a metric for deciding whether to move to a community, more recent arrivals may have a more accurate sense of school performance. Bickers and Stein (1998) found that newcomers were more likely than long-term residents to correctly identify whether standardized test scores for the community were above, below, or the same as the county average. Further examination of the data found that accurately describing school performance was more likely among homeowners, as well as college graduates. In the case of homeowners, concern about property values may make them more vigilant about information concerning the quality of local schools, and less prone to misperception of school performance.

We also hypothesized that overstating local school performance on standardized tests would positively predict rating local schools higher than schools overall (Hypothesis 6). We did not find support for that hypothesis, but we still found interesting results. While overstating school performance was not a significant predictor of rating local schools higher than schools in general, accurate perceptions of school performance did have significant effects, and in logical directions. If one accurately perceived that local schools performed better than schools in the state overall, one was more likely to rate local schools more highly than schools in general. If one correctly perceived that local schools performed worse than schools overall, one was less likely to rate local schools more highly than schools in general. Our marker for illusory superiority, overstating local school performance, did not matter in this context, but accurate perceptions of local school performance did.

We also predicted that overstating local school performance on standardized tests would be positively related to supporting a tax increase to generate additional revenue for public schools (Hypothesis 7). We based this expectation on the logic that if local schools are perceived to be doing well relative to schools in general, that perception may reflect confidence in local schools, which could lead to support for raising more revenue to build on that success. The relationship was significant and positive. Understating local school performance also had a significant and positive effect, while accurate perceptions of local school performance were not significantly related to views on a tax increase to support local schools. The positive link between understating school performance and support for a tax increase makes sense, in that those citizens might believe the schools need help in the form of additional funding. Overall, factual misperceptions about school performance can contribute to support for higher taxes for local schools, a valuable lesson for educators and stakeholders engaging in debate on this issue.

Taken together, the results advance our knowledge of the extent and potential impact of illusory superiority in the assessment of local public schools, particularly with the use of standardized test scores to validate the scope and magnitude of this phenomenon. This provides quantitative evidence for a line of thinking that has been widely accepted but not always carefully documented – that stakeholders give more positive assessments to their local schools than to schools in general. The evidence presented here indicates that, in doing so, some stakeholders – specifically long-term residents – are offering an overly optimistic view of the state of their public schools.

By testing for awareness of school performance on standardized tests, and then including that awareness in models of opinion on local public schools, we build on and advance previous research. Moe (2001) used test scores as part of a composite measure of whether school districts were advantaged or disadvantaged, and used that as context for residents’ attitudes toward their schools. Chingos, Henderson, and West (2012) used school performance on standardized tests as an independent variable predicting public attitudes toward local schools, but they did not test for public awareness of school performance on those tests. Barrows et al. (2016) used comparative data on school performance as an experimental treatment, and tested the post hoc effects of learning that information on public assessments of local schools, but they did not test for pre-existing knowledge.

In addition to measuring for accurate knowledge of local school performance relative to schools in the state overall, we extend the literature by incorporating that knowledge into models of overall assessment of local public schools. We also build on previous research by examining the factors that may predict such overstatement, including self-interest on the part of specific stakeholders (parents, homeowners, and long-term residents).

The attitudinal dynamics that we describe here illustrate how perceptions of school performance and comparative assessments of that performance might shape views of local public schools. But we must qualify our findings. While we use the term illusory superiority to describe why survey respondents may offer overly favorable assessments of schools in contrast to empirical evidence, our results only tell part of the story. We have surmised that self-interest generates these views for stakeholders, but our data do not allow us to test for the psychological motivations that might underlie a sense of illusory superiority. Those factors could include an individual’s need to maintain self-esteem, or to reassure oneself that one is shielded from undesirable events (Hoorens 1993), in this case living in a community with underperforming schools. Group identity also could come into play, with respondents viewing students in local schools as the “ingroup” and students in schools in general as the “outgroup.” Social identity theory posits that group members tend to evaluate members of the ingroup more favorably than they do members of the outgroup (see Brewer 1979; Tajfel 1982). This can extend to more favorable estimates of a number of traits, including levels of knowledge, in an ingroup compared to an outgroup (Goldie and Wolfson 2014). Future research could directly examine the individual- and group-level psychological factors behind assessments of local school quality and performance by measuring for possible reasons behind respondents’ comparative assessments of local schools and schools in general.

Other individual- and community-level factors could come into play as well. In examining the impact of long-term residence in a community, it would be helpful to consider whether survey respondents also attended the schools in question, and if so, whether their experiences might influence perceptions of local schools. In addition, among parents of students in public schools, how heavily are their assessments influenced by their children’s academic performance? At the community level, taking into account socioeconomic indicators and per-pupil spending also might provide additional context within which to consider individual-level assessments.

We must also caution that our results may be shaped by the school culture of Massachusetts, and the publicity surrounding MCAS and its role in local education policy. Given that Massachusetts’ public schools frequently rank at the top of various measures of academic quality nationally, one could argue that testing our hypotheses in Massachusetts might serve as a conservative test of the dynamic represented by perceptions of school performance. Would the extent of the disconnection between perception and reality be wider in states that rank lower on measures of academic quality? Or would those lower rankings temper public expectations in those states? These are additional areas for future research. We also acknowledge that standardized test scores are only one measure of student achievement and school excellence, and a contested measure at that.

Conclusion

Even with these caveats, our findings contribute to our knowledge about the Lake Wobegon effect and its manifestation in assessments of the performance of local public schools. About one-third of respondents in our study rated local schools higher than schools overall, although conventional wisdom sometimes ascribes this view to a majority of the population. Two groups of stakeholders – parents of public-school students and long-term residents – were more likely than others to fall into this category. Long-term residents also tended to overstate local school performance on standardized tests, while homeowners and college graduates were more likely to offer accurate assessments. Overstating local school performance on standardized tests did not shape comparative assessments of local public schools, but overstatement was associated with support for increasing taxes to fund schools (as was understatement of school performance). Are all of the children above average? The data indicate they are not, but perceptions that such is the case have the potential to influence public discourse about school funding.

Data availability statement

Replication materials are available on SPPQ Dataverse at https://doi.org/10.15139/S3/DZQNQW (Vercellotti and Fairman 2023).

Funding statement

The authors received no financial support for the research, authorship, and/or publication of this article.

Competing interest

The authors declared no potential competing interest with respect to the research, authorship, and/or publication of this article.

Author Biographies

Timothy Vercellotti is a professor of Political Science at Western New England University. His current research focuses on youth political participation in the United States and the United Kingdom, and the effects of election outcomes on political efficacy.

Peter Fairman is an associate professor of Political Science at Western New England University. His teaching and research interests include public policy, state and local politics, and public administration.

Appendix

Table A1. Comparison of sample to US Census estimates for Massachusetts adults aged 18 and older (American Community Survey Five-Year Estimate, 2010–2014)

Source: Western New England University Polling Institute, Massachusetts Statewide Telephone Survey, February 24 to March 2, 2014, and US Census Bureau, American Community Survey Five-Year Estimates for Massachusetts, 2010–2014 (US Census Bureau, n.d.).

Table A2. Predictors of accurately stating local school performance on standardized tests

Note. Dependent variable is a dichotomous measure of whether respondent accurately stated local school performance on standardized tests. Coefficients are maximum likelihood estimates using logistic regression. Standard errors are in parentheses. Models were estimated with robust standard errors clustered at the town or city level (207 clusters). N = 401. χ² = 40.66, 14 df, p < 0.01 (Model A1); χ² = 40.99, 15 df, p < 0.01 (Model A2). Pseudo-R² = 0.07 (both models). * p < 0.05. ** p < 0.01 (two-tailed tests).

Footnotes

1 A demographic breakdown of the sample in comparison to US Census population estimates for Massachusetts adults aged 18 and older from the rolling five-year American Community Survey (2010–2014) is available in Table A1 in the Appendix. Our sample is close to the estimated population in terms of gender, but our sample tends to be whiter and older than the estimated population. We believe that the sample still allows us to assess differences between groups of interest (i.e., parents, homeowners, and long-term residents) in terms of their relationships to the dependent variables in our analyses.

2 Using the US Postal Service database of zip codes, we matched each zip code to a municipality or a neighborhood within a municipality (United States Postal Service n.d.). From there, we matched respondents’ municipalities to public school systems, which tend to follow municipal boundaries at the regional, city, or town level in Massachusetts (Massachusetts Bureau of Geographic Information n.d.). In 11 cases where zip codes crossed municipal and school district boundaries, we drew from other information in the data to place respondents in the correct school system, such as the name of the county that the respondent provided during the interview, or the corresponding telephone number from the sample, which we cross-checked against addresses using the reverse-lookup feature on WhitePages.com. While zip codes may lack the geographic precision of other measures, such as Census tract, we reasoned that zip code would be the most straightforward geographic marker to request from survey respondents, who might hesitate to give a complete address and who might not be able to name their local school system. Previous research in political science has relied on zip codes to capture contextual data across a wide range of subjects, including foreign-born voter turnout (Barreto 2005), Anglo voting on ballot initiatives (Branton et al. 2007), racial proximity and attitudes toward crime (Gilliam, Valentino, and Beckmann 2002), the political geography of campaign contributions (Gimpel, Lee, and Kaminski 2006), and environmental determinants of white racial attitudes (Oliver and Mendelberg 2000).
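A pandas-style sketch of this matching step, under the assumption of hypothetical file and column names, might look like this:

```python
# Hypothetical sketch of the zip-code matching described above; the
# file and column names are illustrative, not the authors' actual data.
import pandas as pd

respondents = pd.read_csv("survey.csv")                   # includes a "zip" column
zip_to_town = pd.read_csv("usps_zip_to_town.csv")         # zip -> municipality
town_to_district = pd.read_csv("massgis_districts.csv")   # municipality -> district

matched = (respondents
           .merge(zip_to_town, on="zip", how="left")
           .merge(town_to_district, on="municipality", how="left"))

# Flag zip codes that straddle district boundaries for manual resolution
# (11 such cases in the article, resolved with county and phone data).
spans = matched.groupby("zip")["district"].nunique()
ambiguous_zips = spans[spans > 1].index
```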

3 There was some overlap between the three stakeholder variables. Among parents of children in public schools, 83% owned a home, while 75% of the rest of the sample owned a home. In terms of residential longevity, parents were distributed across the four categories, but the largest concentration (46%) had lived in their community for 6–15 years. Among those without children in public schools, the highest concentration had lived in the community for more than 25 years (44%). Homeowners also were distributed across the four categories of residential longevity, with the highest concentration in the “more than 25 years” category (42%), and the lowest in the “five years or less” category (13%). Among non-homeowners, the largest concentration was in the “five years or less” category of residential longevity (33%).

4 We opted not to include respondents’ household income in the models for practical and methodological reasons. Fourteen percent of the sample declined to respond when asked to choose a category for household income, resulting in 56 missing cases if we included the variable in the model. Household income and respondent education levels appeared to have overlapping effects on respondents’ assessments of school quality and performance. Income, when included in the model, was significant, but education was not. Without income in the model, education exerted a significant effect. In order to minimize the effects of missing data, we opted to use only education as a proxy for socioeconomic status.


References

Barreto, Matt A. 2005. “Latino Immigrants at the Polls: Foreign-Born Voter Turnout in the 2002 Election.” Political Research Quarterly 58 (1): 79–86.
Barrows, Samuel, Henderson, Michael, Peterson, Paul E., and West, Martin R. 2016. “Relative Performance Information and Perceptions of Public Service Quality: Evidence from American School Districts.” Journal of Public Administration Research and Theory 26 (3): 571–83.
Berkman, Michael B., and Plutzer, Eric. 2004. “Gray Peril or Loyal Support? The Effects of the Elderly on Educational Expenditures.” Social Science Quarterly 85 (5): 1178–92.
Betts, Allan, Croom, Simon, and Lu, Dawei. 2011. “Benchmark to Escape from Lake Wobegon.” Benchmarking: An International Journal 18 (5): 733–44.
Bickers, Kenneth N., and Stein, Robert M. 1998. “The Microfoundations of the Tiebout Model.” Urban Affairs Review 34 (1): 76–93.
Boston.com. 2013. “2013 MCAS Results: Find the Scores for Your District and School.” http://archive.boston.com/news/special/education/mcas/scores13/.
Branton, Regina, Dillingham, Gavin, Dunaway, Johanna, and Miller, Beth. 2007. “Anglo Voting on Nativist Ballot Initiatives: The Partisan Impact of Spatial Proximity to the U.S.–Mexico Border.” Social Science Quarterly 88 (3): 882–97.
Brewer, Marilynn B. 1979. “In-Group Bias in the Minimal Intergroup Situation: A Cognitive-Motivational Analysis.” Psychological Bulletin 86 (2): 307–24.
Carnegie Foundation for the Advancement of Teaching. 1992. School Choice: A Special Report. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.
Chingos, Matthew M., Henderson, Michael, and West, Martin R. 2012. “Citizen Perceptions of Government Service Quality: Evidence from Public Schools.” Quarterly Journal of Political Science 7 (4): 411–55.
Dunning, David, Heath, Chip, and Suls, Jerry M. 2004. “Flawed Self-Assessment: Implications for Health, Education, and the Workplace.” Psychological Science in the Public Interest 5 (3): 69–106.
Favero, Nathan, and Meier, Kenneth J. 2013. “Evaluating Urban Public Schools: Parents, Teachers, and State Assessments.” Public Administration Review 73 (3): 401–12.
Figlio, David N., and Lucas, Maurice E. 2004. “What’s in a Grade? School Report Cards and the Housing Market.” American Economic Review 94 (3): 591–604.
Gilliam, Franklin D. Jr., Valentino, Nicholas A., and Beckmann, Matthew N. 2002. “Where You Live and What You Watch: The Impact of Racial Proximity and Local Television News on Attitudes about Race and Crime.” Political Research Quarterly 55 (4): 755–80.
Gimpel, James G., Lee, Frances E., and Kaminski, Joshua. 2006. “The Political Geography of Campaign Contributions in American Politics.” Journal of Politics 68 (3): 626–39.
Goldie, Sarah R., and Wolfson, Sandy. 2014. “Soccer Fans’ Self and Group Perceptions of Superiority over Rival Fans.” Journal of Sport Behavior 37 (1): 24–36.
Harris, Douglas N., and Larsen, Matthew F. 2015. “What Schools Do Families Want (And Why)? New Orleans Families and Their School Choices Before and After Katrina.” Education Research Alliance for New Orleans. https://educationresearchalliancenola.org/publications/policy-brief-what-schools-do-families-want-and-why.
Hoorens, Vera. 1993. “Self-Enhancement and Superiority Biases in Social Comparison.” European Review of Social Psychology 4 (10): 113–39.
Hoorens, Vera. 1995. “Self-Favoring Biases, Self-Presentation, and the Self-Other Asymmetry in Social Comparison.” Journal of Personality 63 (4): 793–817.
Howell, William. 2006. “Switching Schools? A Closer Look at Parents’ Initial Interest in and Knowledge about the Choice Provisions of No Child Left Behind.” Peabody Journal of Education 81 (1): 140–79.
Howell, William G., West, Martin R., and Peterson, Paul E. 2011. “The Public Weighs In on School Reform: Intense Controversies Do Not Alter Public Thinking, But Teachers Differ More Sharply Than Ever.” Education Next 11 (4): 10–22.
Howell, William G., West, Martin R., and Peterson, Paul E. 2013. “Reform Agenda Gains Strength: The 2012 EdNext-PEPG Survey Finds Hispanics Give Schools a Higher Grade Than Others Do.” Education Next 13 (1): 8–19.
Jacobsen, Rebecca, Snyder, Jeffrey W., and Saultz, Andrew. 2014. “Informing or Shaping Public Opinion? The Influence of School Accountability Data Format on Public Perceptions of School Quality.” American Journal of Education 121 (1): 1–27.
Kelly, James P. III, and Scafidi, Benjamin. 2013. “More Than Scores: An Analysis of Why and How Parents Choose Private Schools.” Friedman Foundation for Educational Choice. https://www.edchoice.org/wp-content/uploads/2015/07/More-Than-Scores.pdf.
Loveless, Tom. 1997. “The Structure of Public Confidence in Education.” American Journal of Education 105: 127–59.
Massachusetts Bureau of Geographic Information. n.d. “MassGIS Data: Public School Districts, Commonwealth of Massachusetts.” https://docs.digital.mass.gov/dataset/massgis-data-public-school-districts.
Massachusetts Department of Elementary and Secondary Education. n.d.a. “Massachusetts Comprehensive Assessment System: Overview.” http://www.doe.mass.edu/mcas/overview.html?faq=1.
Massachusetts Department of Elementary and Secondary Education. n.d.b. “Statewide Reports: MCAS Achievement Results 2013.” https://profiles.doe.mass.edu/statereport/mcas.aspx.
Maxwell, Nan L., and Lopus, Jane S. 1994. “The Lake Wobegon Effect on Student Self-Reported Data.” American Economic Review 84 (2): 201–5.
McCabe, Brian J. 2013. “Are Homeowners Better Citizens? Homeownership and Community Participation in the United States.” Social Forces 91 (3): 929–54.
Moe, Terry M. 2001. Schools, Vouchers, and the American Public. Washington, DC: Brookings Institution Press.
Moore, Don A., and Small, Deborah A. 2007. “Error and Bias in Comparative Judgment: On Being Both Better and Worse Than We Think We Are.” Journal of Personality and Social Psychology 92 (6): 972–89.
Mutz, Diana C. 1998. Impersonal Influence: How Perceptions of Mass Collectives Affect Political Attitudes. Cambridge: Cambridge University Press.
Oliver, J. Eric, and Mendelberg, Tali. 2000. “Reconsidering the Environmental Determinants of White Racial Attitudes.” American Journal of Political Science 44 (3): 574–89.
Peterson, Paul E., Henderson, Michael, and West, Martin R. 2014. Teachers versus the Public: What Americans Think about Schools and How to Fix Them. Washington, DC: Brookings Institution Press.
Pride, Richard A. 2002. “How Critical Events Rather Than Performance Trends Shape Public Evaluations of Schools.” Urban Review 34 (2): 159–78.
Tajfel, Henri. 1982. “Social Psychology of Intergroup Relations.” Annual Review of Psychology 33: 1–40.
Teske, Paul, Fitzpatrick, Jody, and Kaplan, Gabriel. 2006. “The Information Gap?” Review of Policy Research 23 (5): 969–81.
United States Census Bureau. n.d. “American Community Survey, 2010–2014 American Community Survey Five-Year Estimates for Massachusetts.” Data downloaded from https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?pid=ACS_pums_csv_2010_2014&prodType=document.
United States Postal Service. n.d. “Look Up a Zip Code: Cities by Zip Code.” https://tools.usps.com/zip-code-lookup.htm?citybyzipcode.
Vercellotti, Timothy, and Fairman, Peter. 2023. “Replication Data for: Why Are All of the Children Perceived to Be Above Average? Stakeholders and the Lake Wobegon Effect in Attitudes toward Public Schools.” UNC Dataverse, V1, UNF:6:YXmXk7HD0+XqTAdiipkPvA== [fileUNF]. https://doi.org/10.15139/S3/DZQNQW.
West, Martin R., Henderson, Michael B., Peterson, Paul E., and Barrows, Samuel. 2017. “The 2017 EdNext Poll on School Reform: Public Thinking on School Choice, Common Core, Higher Ed, and More.” https://www.educationnext.org/2017-ednext-poll-school-reform-public-opinion-school-choice-common-core-higher-ed/.