
The PollyVote Forecast for the 2024 US Presidential Election

Published online by Cambridge University Press:  15 October 2024

Andreas Graefe*
Affiliation:
Macromedia University of Applied Sciences, Germany

Abstract

Originally founded in 2004 to improve election forecasting accuracy through evidence-based methods, the PollyVote project applies the principle of combining forecasts to predict the outcome of US presidential elections. The 2024 forecast uses the same methodology as in previous elections by combining forecasts from four methods: polls, expectations, models, and naive forecasts. By averaging within and across these methods, PollyVote predicts a close race, giving Kamala Harris a slight edge over Donald Trump in both the two-party popular vote (50.8 vs. 49.2%) and the Electoral College (276 vs. 262 votes). The forecast gives Harris a 65% chance of winning the popular vote and a 56% chance of winning the Electoral College, making both outcomes toss-ups. Compared to the combined PollyVote, component forecasts that rely on trial-heat polls tend to favor Harris, whereas methods that rely on alternative measures are less optimistic about the Democratic candidate’s chances. The polls may be overestimating Harris’s lead.

Type
Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of American Political Science Association

The PollyVote project was founded in 2004 with the aim of applying and validating findings from the general forecasting literature to the domain of election forecasting. Although its initial focus was on applying the principle of combining forecasts, PollyVote has expanded its scope over the years to incorporate a wide range of methodological advances into the combined forecast: the development and incorporation of prospective index models (Graefe et al. 2014), citizen forecasts (Graefe et al. 2016), and naive models (Graefe 2023). In addition, the PollyVote method has been applied to elections in Germany (Graefe 2022a) and France (Graefe 2022b).

PollyVote is a long-term project. In addition to demonstrating the benefits of evidence-based forecasting for improving forecast accuracy, PollyVote tracks and evaluates the performance of election forecasting over time. The ability to analyze the accuracy and use of forecasts across multiple election cycles provides insights into the relative effectiveness of different forecasting approaches under different conditions and thus contributes to the evolution of election forecasting as a scientific discipline.

COMBINING FORECASTS FOR ENHANCED ACCURACY

Combining forecasts is a well-established practice, known for its simplicity and effectiveness, with roots in forecasting research dating back to Bates and Granger (1969). It has long been applied successfully across various fields, including economics, meteorology, and sports (Clemen 1989). Three major benefits of this approach are the following:

  1. Enhancing accuracy: The combined forecast usually outperforms most individual forecasts in a single election and generally does so across many elections. Historical data from the PollyVote project for the five US presidential elections from 2004 to 2020 shows that the combined forecast has provided more accurate predictions than any of its individual components, missing the final popular two-party vote by only 0.8 percentage points on average across the last 100 days prior to each election (Graefe 2023).

  2. Reducing bias: Individual forecasts often fail to capture all relevant information because of methodological limitations. For instance, regression-based models are limited by the number of variables they can include. This is particularly true when historical data are limited and the relationships between predictor variables are uncertain or correlated (Armstrong, Green, and Graefe 2015), as is the case in election forecasting. Combining multiple forecasts, using different methods and data, reduces the risk of bias due to omitted information.

  3. Avoiding the selection of poor forecasts: People mistakenly believe that they know which forecast out of a set of forecasts will be best (Soll and Larrick 2009). For example, people may use simple heuristics such as relying on the forecast that was most accurate in the last election. However, the accuracy of individual methods can vary significantly across elections, and past accuracy is often a poor predictor of future accuracy (Graefe et al. 2015). Combining forecasts ensures that the prediction is not overly reliant on any single, potentially flawed forecast.


THE 2024 POLLYVOTE METHODOLOGY

To forecast the popular two-party vote in the 2024 US presidential election, PollyVote applied its established methodology of combining forecasts from different methods, using the same specification as in 2020 (Armstrong and Graefe 2021). It first averaged forecasts within each of four component methods—polls, expectations, models, and naive forecasts—and then averaged these aggregated forecasts across the four component methods. Each of these component methods can include different subcomponents, as detailed here and shown in Table 1. In addition, this article reports combined forecasts not only for the popular vote but also for the Electoral College vote in presidential elections, using the same methodology.
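To illustrate the combination step, the following is a minimal sketch (not the project's own code) of averaging forecasts within each component method and then across methods; the numbers are purely hypothetical, not the forecasts reported in Table 1.

```python
# A minimal sketch of PollyVote-style combining (not the project's own code):
# forecasts are averaged within each component method, then across methods.
from statistics import mean

def combine_forecasts(forecasts_by_method: dict[str, list[float]]) -> float:
    """Average two-party vote-share forecasts within each method, then across methods."""
    within_method_means = {m: mean(v) for m, v in forecasts_by_method.items() if v}
    return mean(within_method_means.values())

# Purely hypothetical Democratic two-party shares, not the values in Table 1.
example = {
    "polls": [51.5, 51.2],
    "expectations": [50.4, 50.9, 50.2],
    "models": [49.8, 50.6],
    "naive": [50.0, 50.3],
}
print(round(combine_forecasts(example), 1))
```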

Table 1 Presidential Election Forecasts by the PollyVote and Its Component Methods

Notes. The PollyVote forecast is calculated by averaging within and across the component forecasts. Win probabilities, if available, are as reported in the original forecasts. Where win probabilities are not provided, they are calculated from historical forecast errors where data are available. Forecasts marked with an asterisk (*) are part of this special issue. Other forecasts: the big-issue model is based on Graefe and Armstrong (2012), the issues and leaders model on Graefe (2021), and the time-for-change model on Abramowitz (2016); the fundamentals-only forecast is from Fair (2009). The Keys to the White House (Lichtman 2008) were translated into a forecast of the two-party vote following Armstrong and Cuzán (2006). Naive forecasts were calculated as follows: for the popular vote, from the electoral cycle and 50/50 models; for the electoral vote, from the electoral cycle and random walk models.
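Where the notes mention deriving win probabilities from historical forecast errors, one common approach (an assumption for illustration, not necessarily the exact PollyVote procedure) is to treat the forecast error as approximately normal and compute the probability that the predicted two-party share exceeds 50%:

```python
# One common way (an assumption here, not necessarily the PollyVote procedure) to derive
# a win probability from a point forecast and its historical error: treat the error as
# roughly normal and compute the chance that the two-party vote share exceeds 50%.
from statistics import NormalDist

def win_probability(forecast_share: float, historical_error_sd: float) -> float:
    """Probability that a candidate's two-party vote share exceeds 50%."""
    return 1 - NormalDist(mu=forecast_share, sigma=historical_error_sd).cdf(50.0)

print(round(win_probability(50.8, 2.0), 2))  # illustrative inputs only
```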

Expectations

Judgment is often an integral part of forecasting, whether in providing input to forecasting models (e.g., in the selection of data and/or variables) or as direct forecasts, hereafter referred to as “expectations.” Judgment can be particularly valuable in dealing with unusual events or structural breaks that statistical models may not capture effectively (Lawrence et al. 2006). However, a key challenge in using judgment is avoiding bias, which is common and often unconscious in forecasting (Armstrong, Green, and Graefe 2015). For the 2024 forecast, PollyVote averaged a range of expectation-based forecasts, which can be categorized into three subcomponents: expert judgment, crowd forecasting, and citizen forecasting.

Expert Judgment

Expert judgment involves consulting with subject-matter experts to predict election outcomes. Experts can contextualize polling data, account for campaign events, and provide historical perspectives. However, research suggests that expert forecasts are not necessarily more accurate than polling averages. A study comparing polling averages to 452 expert vote share forecasts across US presidential elections from 2004 to 2016 found that, even though roughly two out of three experts correctly identified the directional error of polls, their forecasts were typically 7% less accurate than polling averages (Graefe 2018). A similar study analyzing 4,494 expert vote share forecasts across three German federal elections found that experts’ forecasts were less accurate than polls in one out of three cases and failed to identify the directional error of polls more than half the time (Graefe 2024).

To forecast the 2024 US election, PollyVote conducted monthly expert surveys starting in mid-July 2024. These surveys asked political science professors, some of whom have participated in these surveys since 2004, to predict the popular and electoral vote outcomes (both vote shares and win probabilities) in the November US presidential election (see footnote 1). In addition, PollyVote incorporated forecasts from various expert sites listed in Table 1, such as Larry Sabato’s Crystal Ball and the Cook Political Report. These sites provide qualitative ratings—for example, Safe Democratic to Safe Republican—for the presidential election at the state level, which have been translated into Electoral College predictions (see footnote 2).

Crowd Forecasting

Crowd forecasting involves aggregating the predictions or judgments of a (usually self-selected) group of individuals to arrive at a consensus or collective forecast. Participants usually have some kind of incentive to participate and make accurate predictions. One example is betting markets that allow participants to wager on election outcomes. For example, on Polymarket, participants can bet money on who will win the popular vote and the Electoral College and what the final vote margins will be. They are incentivized to make accurate predictions because of the financial stakes involved, although some markets, such as the Iowa Electronic Markets (IEM), allow only limited investments (Gruca and Rietz 2024). Another example is crowdsourcing sites such as Metaculus, where participants earn points for accurate predictions and lose points for inaccuracies. Leaderboards show participants’ rankings, fostering competition and encouraging continued participation.

Citizen Forecasts

Citizen forecasts are derived from survey respondents’ expectations of who will win the election, a question that more and more pollsters are asking in addition to the traditional vote intention question. Following Graefe (2014), PollyVote translates these expectations into two-party vote share forecasts using the incumbent’s vote share as the dependent variable in a simple linear regression. An analysis of forecast errors across the last 100 days prior to the elections from 2004 to 2020 showed that these citizen forecasts were the most accurate single component forecast that entered the PollyVote, with an average error of only 1.2 percentage points (Graefe 2023).
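The following is a minimal sketch, in the spirit of Graefe (2014), of how expectation data can be translated into a vote-share forecast via simple linear regression; the historical numbers are made up for illustration and are not the PollyVote estimates.

```python
# A minimal sketch of translating vote-expectation survey results into a two-party
# vote-share forecast via simple linear regression. Historical numbers are made up.
import numpy as np

# Share of respondents expecting the incumbent party to win (in %), by election ...
expectation_history = np.array([55.0, 48.0, 62.0, 51.0, 45.0])
# ... and the incumbent party's actual two-party vote share in those elections.
vote_share_history = np.array([52.0, 49.5, 53.8, 50.6, 48.2])

slope, intercept = np.polyfit(expectation_history, vote_share_history, 1)

def citizen_forecast(current_expectation_share: float) -> float:
    """Predicted incumbent two-party vote share given the current expectation share."""
    return intercept + slope * current_expectation_share

print(round(citizen_forecast(47.0), 1))
```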


MODELS

PollyVote classifies models for forecasting US elections according to whether they draw on theories of retrospective voting, prospective voting, or a combination of both.

Retrospective Models

Retrospective models assume that voters reward or punish incumbents based on past performance. They rely on national economic or political conditions, essentially assuming sociotropic voting, in which voters evaluate the incumbent based on national conditions rather than personal circumstances. PollyVote distinguishes between two types of retrospective models:

  1. Fundamentals-only models use only structural (economic or political) variables, called fundamentals, and ignore public opinion. The use of fundamentals-only models has become rare because of their limited accuracy, and only the Fair (2009) forecast was available for the 2024 election. This is unfortunate because fundamentals-only models can provide insights into how fundamentals affect vote choice and can be useful in indicating the direction of polling errors (Graefe 2018).

  2. Fundamentals-plus models incorporate retrospective public sentiment, such as presidential job approval, in addition to economic fundamentals (Enns et al. 2024; Mongrain et al. 2024; Saeki 2024; Tien and Lewis-Beck 2024). Although these models are historically more accurate than fundamentals-only models, their explanatory power is limited because they cannot distinguish between the impacts of economic and non-economic factors (see the sketch after this list for a generic illustration).
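As a generic illustration of the fundamentals-plus idea (not a reproduction of any model in this special issue), the following sketch regresses the incumbent party's two-party vote share on presidential approval and election-year growth, using made-up data:

```python
# A generic illustration of a fundamentals-plus specification (not any published model):
# the incumbent party's two-party vote share regressed on presidential approval and
# election-year economic growth. All numbers are made up.
import numpy as np

approval = np.array([45.0, 57.0, 38.0, 50.0, 42.0])  # % approving of the president
growth = np.array([1.5, 3.2, -0.5, 2.0, 0.8])        # election-year growth in %
vote = np.array([49.0, 54.5, 46.0, 51.5, 48.0])      # incumbent two-party vote share

X = np.column_stack([np.ones_like(approval), approval, growth])
coefs, *_ = np.linalg.lstsq(X, vote, rcond=None)

def fundamentals_plus(current_approval: float, current_growth: float) -> float:
    """Predicted incumbent two-party vote share from approval and growth."""
    return float(coefs @ np.array([1.0, current_approval, current_growth]))

print(round(fundamentals_plus(44.0, 1.2), 1))
```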

Prospective Models

Prospective models assume that voters are forward-looking, evaluating candidates based on their future promises and campaign platforms. Existing models assess factors such as candidates’ perceived leadership abilities and issue-handling skills (Graefe 2021) or their potential to address the country’s most important problems (Graefe and Armstrong 2012).

Mixed Models

Mixed models combine retrospective and prospective elements. This category includes most contemporary election forecasting models, such as those published by FiveThirtyEight or The Economist, which incorporate both economic data and polling averages in their forecasts. Although they offer high accuracy, their explanatory power is limited because of the confounding effects of combining economic fundamentals with public opinion data. That said, mixed models do not necessarily have to rely on trial-heat polls, as shown in several contributions to this special issue (Algara et al. 2024; Cerina and Duch 2024; DeSart 2024; Lockerbie 2024).

POLLS

Polls that ask respondents for which candidate they will vote on Election Day do not provide true forecasts: instead, they capture vote preferences at a particular time, which can change before the election. Not surprisingly, then, polls are less accurate the further away they are from the election date. In addition, poll results obtained at the same time can vary widely among pollsters due to differing methodologies (Erikson and Wlezien 2012). Although aggregating polls can improve accuracy by canceling out random errors of individual polls, poll aggregation cannot correct for systematic polling errors such as those due to nonresponse (Gelman et al. 2016).

Poll aggregators report poll numbers for each candidate, including third-party candidates who poll at significant levels, while excluding undecided voters. PollyVote converts these numbers into two-party vote shares by normalizing the support for the major party candidates relative to their combined total, effectively redistributing the third-party and undecided votes proportionally between the two main candidates.
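A minimal sketch of this normalization follows; the poll numbers are illustrative only, not actual polling figures.

```python
# A minimal sketch of the two-party normalization described above: the major-party shares
# are rescaled to sum to 100%, which distributes third-party and undecided support
# proportionally between the two candidates. Poll numbers are illustrative only.
def two_party_shares(dem_poll: float, rep_poll: float) -> tuple[float, float]:
    """Convert raw poll shares (in %) into two-party vote shares."""
    total = dem_poll + rep_poll
    return 100 * dem_poll / total, 100 * rep_poll / total

dem, rep = two_party_shares(48.0, 45.0)
print(round(dem, 1), round(rep, 1))  # 51.6 48.4
```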

NAIVE FORECASTS

Complex models often reduce forecast accuracy, whereas simple models, such as a naive no-change model, can be surprisingly effective (Green and Armstrong 2015). Naive models assume either that the situation will remain the same or that the direction of change is unpredictable. This approach acknowledges inherent uncertainty and adheres to the principle of conservatism in forecasting (Armstrong, Green, and Graefe 2015). Additionally, naive forecasts tend not to correlate with other forecasts, which is expected to improve the accuracy of combined forecasts (Graefe 2023; see footnote 3).

POLLYVOTE FORECASTS OF THE 2024 US ELECTIONS

At the time of this writing (October 8, one month before the election), the combined PollyVote forecast predicts a close presidential race with a slight edge for Harris (see Table 1). Harris leads the popular vote by 1.6 percentage points (50.8 to 49.2) and the Electoral College by 14 votes (276 to 262). However, with an estimated 65% chance that Harris will win the popular vote and a 56% chance that she will win the electoral vote, both outcomes are considered toss-ups.

The components of the PollyVote show that poll-based methods tend to be more optimistic about Harris’s prospects than alternative methods. For example, the polling averages show her with a lead of about 3 percentage points in the popular vote and 40 votes in the Electoral College. Among model-based forecasts, models that rely on trial-heat polls (e.g., FiveThirtyEight, The Economist, JHK, Race to the White House, DeSart & Holbrook) tend to be more optimistic about Harris’s chances than models that do not incorporate trial-heat polls. For example, the models in this special issue by Algara et al. (2024), Cerina and Duch (2024), and Lockerbie (2024), which are also in the mixed-model category but do not rely on trial-heat polls, tend to be more favorable for Trump. The same is true for retrospective models that either ignore public opinion altogether (fundamentals-only) or incorporate retrospective public opinion only in the form of the incumbent president’s approval rating (fundamentals-plus).


Among expectation-based methods, expert and crowd forecasters, who are likely to rely heavily on polls, are either in line with the PollyVote or slightly more optimistic about Harris’s chances. Interestingly, citizen forecasters, who may be more likely to take cues from their social circles than from polls, see a slight advantage for Trump in the popular vote. This prediction is particularly noteworthy given that citizen forecasts were the most accurate individual component of the PollyVote across the five US presidential elections from 2004 to 2020. Although it seems unlikely that Trump will win the popular vote, given the preponderance of forecasts pointing to a Harris victory, the citizen forecast may suggest that current polls are overestimating Harris’s chances and thus help identify the directional error of polls.

ACKNOWLEDGMENTS

The PollyVote project was founded in 2004 by J. Scott Armstrong († 2023), Randy Jones († 2024), and Alfred Cuzán.

DATA AVAILABILITY STATEMENT

The editors have granted an exception to the data policy for this manuscript. In some forecast aggregations, the numbers reported in the Dataverse may slightly deviate from those reported in Table 1 of the published paper. This is because the PollyVote is a dynamic forecasting tool that continuously updates whenever new information becomes available. As such, the live forecasts on pollyvote.com were generated by separate scripts running automatically at different times, leading to minor variations in the aggregated results. The forecast presented in the paper reflects the PollyVote forecast at the time of writing.

CONFLICTS OF INTEREST

The author declares no ethical issues or conflicts of interest in this research.

Footnotes

1. To determine popular vote shares, experts provided the predicted vote shares for the major party candidates and the combined share for all other candidates. PollyVote then converted these numbers into two-party vote shares by normalizing the support for the major party candidates relative to their combined total, effectively redistributing third-party votes proportionally between the two main candidates. For the Electoral College, experts were asked to provide the estimated electoral votes for both major party candidates and all other candidates combined. In addition, experts were asked to estimate the likelihood that Kamala Harris would be elected.

2. PollyVote turned expert ratings about the likelihood of each party winning state elections into probabilities using the following system: Safe R (Republicans: 90% chance of winning, Democrats: 10% chance of winning), Likely R (R:80%, D:20%), Leans R (R:67%, D:33%), Tilt R (R:55%, D:45%), Toss-up (R:50%, D:50%), Tilt D (R:45%, D:55%), Leans D (R:33%, D:67%), Likely D (R:20%, D:80%), Safe D (R:10%, D:90%). These probabilities were averaged across forecasters for each race. Treating these probabilities as independent forecasts, PollyVote conducted 100,000 simulations to generate forecasts for Electoral College votes.
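The following is a minimal sketch of the simulation described in this footnote, using the rating-to-probability mapping above; the handful of states and their ratings are hypothetical placeholders, whereas the real calculation covers all races.

```python
# A minimal sketch of the footnote's simulation: state ratings are mapped to Democratic
# win probabilities, treated as independent, and simulated to produce an electoral-vote
# forecast. The states and ratings below are hypothetical placeholders.
import random

RATING_TO_DEM_PROB = {
    "Safe R": 0.10, "Likely R": 0.20, "Leans R": 0.33, "Tilt R": 0.45,
    "Toss-up": 0.50, "Tilt D": 0.55, "Leans D": 0.67, "Likely D": 0.80, "Safe D": 0.90,
}

# Hypothetical subset of states: electoral votes and averaged rating.
STATES = {"PA": (19, "Toss-up"), "GA": (16, "Tilt R"), "WI": (10, "Tilt D")}

def expected_dem_electoral_votes(n_sims: int = 100_000, base_dem_ev: int = 0) -> float:
    """Average Democratic electoral votes across simulations; base_dem_ev covers states not listed."""
    total = 0
    for _ in range(n_sims):
        ev = base_dem_ev
        for votes, rating in STATES.values():
            if random.random() < RATING_TO_DEM_PROB[rating]:
                ev += votes
        total += ev
    return total / n_sims

print(round(expected_dem_electoral_votes(), 1))
```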

3. PollyVote averaged forecasts from two models for forecasting the popular vote: (1) the electoral cycle model (Norpoth 2014), which uses incumbent vote shares from the two most recent elections as predictors, and (2) a 50/50 model, assuming an equal split of the popular vote between the two major-party candidates, reflecting political polarization. For forecasting the Electoral College, PollyVote used the electoral cycle and a random walk to estimate vote-share results at the state level before using a Monte Carlo simulation to generate Electoral College forecasts.
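A minimal sketch of the two naive popular-vote models described in this footnote, fitted on made-up historical data rather than Norpoth's published estimates:

```python
# A minimal sketch of the two naive popular-vote models: an electoral-cycle regression on
# the two most recent incumbent vote shares, averaged with a 50/50 benchmark.
# The historical vote shares below are made up for illustration.
import numpy as np

# Hypothetical sequence of incumbent-party two-party vote shares across past elections.
votes = [52.0, 48.5, 51.1, 46.5, 52.9, 47.7]

# Electoral cycle idea: regress each election's share on the two preceding shares.
X = np.array([[1.0, votes[t - 1], votes[t - 2]] for t in range(2, len(votes))])
y = np.array(votes[2:])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

def electoral_cycle(last_share: float, second_last_share: float) -> float:
    return float(coefs @ np.array([1.0, last_share, second_last_share]))

def naive_popular_vote(last_share: float, second_last_share: float) -> float:
    """Average the electoral cycle forecast with the 50/50 benchmark."""
    return (electoral_cycle(last_share, second_last_share) + 50.0) / 2

print(round(naive_popular_vote(votes[-1], votes[-2]), 1))
```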

REFERENCES

Abramowitz, Alan I. 2016. “Will Time for Change Mean Time for Trump?” PS: Political Science and Politics 49 (4): 659–60. https://doi.org/10.1017/S1049096516001268.
Algara, Carlos, Gomez, Lisette, Headington, Edward, Liu, Hengjiang, and Nigri, Bianca. 2024. “Forecasting Partisan Collective Accountability during the 2024 US Presidential & Congressional Elections.” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S1049096524000854.
Armstrong, J. Scott, and Cuzán, Alfred G. 2006. “Index Methods for Forecasting: An Application to the American Presidential Elections.” Foresight: International Journal of Applied Forecasting 3: 10–13.
Armstrong, J. Scott, and Graefe, Andreas. 2021. “The PollyVote Popular Vote Forecast for the 2020 US Presidential Election.” PS: Political Science & Politics 54 (1): 96–98. https://doi.org/10.1017/S1049096520001420.
Armstrong, J. Scott, Green, Kesten C., and Graefe, Andreas. 2015. “Golden Rule of Forecasting: Be Conservative.” Journal of Business Research 68 (8): 1717–31. https://doi.org/10.1016/j.jbusres.2015.03.031.
Bates, J. M., and Granger, C. W. J. 1969. “The Combination of Forecasts.” Journal of the Operational Research Society 20 (4): 451–68. https://doi.org/10.1057/jors.1969.103.
Cerina, Roberto, and Duch, Raymond. 2024. “The 2024 U.S. Presidential Election PoSSUM Poll.” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S1049096524000982.
Clemen, Robert T. 1989. “Combining Forecasts: A Review and Annotated Bibliography.” International Journal of Forecasting 5 (4): 559–83. https://doi.org/10.1016/0169-2070(89)90012-5.
DeSart, Jay. 2024. “Long-Range State-Level 2024 Presidential Election Forecast: How Can You Forecast an Election When You Don’t Know Who the Candidates Are Yet?” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S1049096524000805.
Enns, Peter K., Colner, Jonathan, Kumar, Anusha, and Lagodny, Julius. 2024. “Understanding Biden’s Exit and the 2024 Election: The State Presidential Approval/State Economy Model.” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S1049096524000994.
Erikson, Robert S., and Wlezien, Christopher. 2012. The Timeline of Presidential Elections. Chicago: University of Chicago Press.
Fair, Ray C. 2009. “Presidential and Congressional Vote-Share Equations.” American Journal of Political Science 53 (1): 55–72. https://doi.org/10.1111/j.1540-5907.2008.00357.x.
Gelman, Andrew, Goel, Sharad, Rivers, Douglas, and Rothschild, David. 2016. “The Mythical Swing Voter.” Quarterly Journal of Political Science 11 (1): 103–30. http://doi.org/10.1561/100.00015031.
Graefe, Andreas. 2014. “Accuracy of Vote Expectation Surveys in Forecasting Elections.” Public Opinion Quarterly 78 (S1): 204–32. https://doi.org/10.1093/poq/nfu008.
Graefe, Andreas. 2018. “Predicting Elections: Experts, Polls, and Fundamentals.” Judgment and Decision Making 13 (4): 334–44. https://doi.org/10.1017/S1930297500009219.
Graefe, Andreas. 2021. “Of Issues and Leaders: Forecasting the 2020 US Presidential Election.” PS: Political Science & Politics 54 (1): 70–72. https://doi.org/10.1017/S1049096520001390.
Graefe, Andreas. 2022a. “Combining Forecasts for the 2021 German Federal Election: The PollyVote.” PS: Political Science & Politics 55 (1): 69–72. https://doi.org/10.1017/S1049096521000962.
Graefe, Andreas. 2022b. “Combining Forecasts for the 2022 French Presidential Election: The PollyVote.” PS: Political Science & Politics 55 (4): 726–29. https://doi.org/10.1017/S1049096522000555.
Graefe, Andreas. 2023. “Embrace the Differences: Revisiting the PollyVote Method of Combining Forecasts for U.S. Presidential Elections (2004 to 2020).” International Journal of Forecasting 39 (1): 170–77. https://doi.org/10.1016/j.ijforecast.2021.09.010.
Graefe, Andreas. 2024. “Limits of Domain Knowledge in Election Forecasting: A Comparison of Poll Averages and Expert Forecasts.” International Journal of Public Opinion Research 36 (1): edae002. https://doi.org/10.1093/ijpor/edae002.
Graefe, Andreas, and Armstrong, J. Scott. 2012. “Predicting Elections from the Most Important Issue: A Test of the Take-the-Best Heuristic.” Journal of Behavioral Decision Making 25 (1): 41–48. https://doi.org/10.1002/bdm.710.
Graefe, Andreas, Armstrong, J. Scott, Jones, Randall J., and Cuzán, Alfred G. 2014. “Accuracy of Combined Forecasts for the 2012 Presidential Election: The PollyVote.” PS: Political Science & Politics 47 (2): 427–31. https://doi.org/10.1017/S1049096514000341.
Graefe, Andreas, Jones, Randall J., Armstrong, J. Scott, and Cuzán, Alfred G. 2016. “The PollyVote Forecast for the 2016 American Presidential Election.” PS: Political Science & Politics 49 (4): 687–90. https://doi.org/10.1017/S1049096516001281.
Graefe, Andreas, Küchenhoff, Helmut, Stierle, Veronika, and Riedl, Bernhard. 2015. “Limitations of Ensemble Bayesian Model Averaging for Forecasting Social Science Problems.” International Journal of Forecasting 31 (3): 943–51. https://doi.org/10.1016/j.ijforecast.2014.12.001.
Green, Kesten C., and Armstrong, J. Scott. 2015. “Simple versus Complex Forecasting: The Evidence.” Journal of Business Research 68 (8): 1678–85. https://doi.org/10.1016/j.jbusres.2015.03.026.
Gruca, Thomas S., and Rietz, Thomas A. 2024. “Iowa Electronic Markets: Forecasting the 2024 US Presidential Election.” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S1049096524000921.
Holbrook, Thomas M., and DeSart, Jay A. 1999. “Using State Polls to Forecast Presidential Election Outcomes in the American States.” International Journal of Forecasting 15 (2): 137–42. https://doi.org/10.1016/S0169-2070(98)00060-0.
Lawrence, Michael, Goodwin, Paul, O’Connor, Marcus, and Önkal, Dilek. 2006. “Judgmental Forecasting: A Review of Progress over the Last 25 Years.” International Journal of Forecasting 22 (3): 493–518. https://doi.org/10.1016/j.ijforecast.2006.03.007.
Lichtman, Allan J. 2008. “The Keys to the White House: An Index Forecast for 2008.” International Journal of Forecasting 24 (2): 301–9. https://doi.org/10.1016/j.ijforecast.2008.02.004.
Lindsay, Spencer, and Allen, Levi. 2024. “A Dynamic Forecast: An Evolving Prediction of the 2024 Presidential Election.” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S1049096524000878.
Lockerbie, Brad. 2024. “The Challenge of Forecasting the 2024 Presidential and House Elections: Economic Pessimism and Election Outcomes.” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S104909652400091X.
Mongrain, Philippe, Nadeau, Richard, Jérôme, Bruno, and Jérôme, Véronique. 2024. “State-Level Forecasts for the 2024 US Presidential Election: Trump Back with a Vengeance?” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S104909652400088X.
Norpoth, Helmut. 2014. “The Electoral Cycle.” PS: Political Science and Politics 47: 332–35. https://doi.org/10.1017/S1049096514000146.
Saeki, Manabu. 2024. “Forecasting Popular Vote and Electoral College Vote Results: Partisan-Bounded Economic Model.” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S1049096524000891.
Soll, Jack B., and Larrick, Richard P. 2009. “Strategies for Revising Judgment: How (and How Well) People Use Others’ Opinions.” Journal of Experimental Psychology: Learning, Memory, and Cognition 35 (3): 780–805. https://doi.org/10.1037/a0015145.
Tien, Charles, and Lewis-Beck, Michael S. 2024. “The Political Economy Model: Presidential Forecast for 2024.” PS: Political Science & Politics (this issue). https://doi.org/10.1017/S1049096524000908.