Book contents
- Frontmatter
- Contents
- List of Tables and Figures
- Acknowledgments
- HOW VOTERS DECIDE
- I Theory and Methods
- II Information Processing
- III Politics
- IV Conclusion
- Appendix A Detailed Examples of Decision Strategies in Action
- Appendix B How the Dynamic Information Board Works
- Appendix C Overview of Experimental Procedures
- Appendix D Detailed Decision Scripts
- Appendix E Calculating the On-line Evaluation Counter
- References
- Index
- Titles in the series
Appendix E - Calculating the On-line Evaluation Counter
Published online by Cambridge University Press: 05 September 2012
Summary
To test the on-line model, we must first specify an on-line evaluation counter that incorporates the information voters encountered as they proceeded through the election. Three key questions have to be asked. First, what information should actually be counted in determining the evaluation counter? Second, should we weight some information more heavily than other information, since voters presumably do not consider everything to be of the same import? And third, how do we integrate this wide range of disparate information into a single running-tally evaluation of each candidate?
To begin with, we consider what information goes into a counter and how that information is evaluated. We incorporate four specific types of information in our counters: issues, group endorsements, candidate personality, and party identification. Candidate–voter agreement on issue stands was calculated using the directional model (Rabinowitz and MacDonald, 1989), with the mean rating of seven experts providing an objective rating of where the candidates actually stood on the issues. Whenever a voter learned a candidate's stand on an issue, and that voter had expressed an opinion on that issue in our initial questionnaire, agreement or disagreement (rescaled to range from −1 to +1) was added into the candidate's summary evaluation. Group endorsements learned by a subject were scored +1 if a subject liked the group doing the endorsing (i.e., rated that group above the mean of all groups evaluated, and above the midpoint of the scale) and −1 if the subject disliked the group.
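The scoring rules above can be sketched in code. This is an illustrative sketch only, not the authors' actual implementation: it assumes a 7-point issue scale with a neutral point of 4 (so the raw directional product ranges over ±9 before rescaling), and all function and class names (`directional_agreement`, `endorsement_score`, `OnlineCounter`) are hypothetical. The handling of a group that is neither clearly liked nor clearly disliked (scored 0 here) is an assumption; the text specifies only the +1 and −1 cases.

```python
def directional_agreement(voter_pos, candidate_pos, neutral=4.0, max_product=9.0):
    """Directional-model agreement (Rabinowitz and MacDonald, 1989):
    the product of the voter's and the candidate's positions, each
    measured from the neutral point, rescaled to the range [-1, +1].
    Assumes a 7-point scale with neutral at 4 (hypothetical choice)."""
    raw = (voter_pos - neutral) * (candidate_pos - neutral)
    return max(-1.0, min(1.0, raw / max_product))


def endorsement_score(group_rating, mean_rating, scale_midpoint=50.0):
    """Score a group endorsement: +1 if the subject likes the endorsing
    group (rated above both the mean of all groups evaluated and the
    scale midpoint), -1 if the subject dislikes it (below the midpoint).
    The intermediate case is scored 0 here -- an assumption, since the
    text defines only the liked and disliked cases."""
    if group_rating > mean_rating and group_rating > scale_midpoint:
        return 1
    if group_rating < scale_midpoint:
        return -1
    return 0


class OnlineCounter:
    """Running-tally evaluation: each new piece of scored information is
    simply added into the candidate's summary evaluation."""

    def __init__(self):
        self.tally = {}  # candidate name -> running evaluation

    def add(self, candidate, score):
        self.tally[candidate] = self.tally.get(candidate, 0.0) + score
```

For example, a voter at position 7 who learns that a candidate also stands at 7 adds the maximum +1 to that candidate's counter, while learning of an endorsement by a disliked group subtracts 1.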
- How Voters Decide: Information Processing in Election Campaigns, pp. 307–312. Publisher: Cambridge University Press. Print publication year: 2006.