Mass Political Attitudes and the Survey Response*
Published online by Cambridge University Press: 01 August 2014
Abstract
Students of public opinion research have argued that voters show very little consistency and structure in their political attitudes. A model of the survey response is proposed which takes account of the vagueness in opinion survey questions and in response categories. When estimates are made of this vagueness or “measurement error” and the estimates applied to the principal previous study, nearly all the inconsistency is shown to be the result of the vagueness of the questions rather than of any failure by the respondents.
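To make the abstract's claim concrete, the following minimal simulation (not from the article; the additive response model, the error variance, and the three-wave design are assumptions introduced here purely for illustration) shows how entirely stable true attitudes, observed through vague questions, can produce the modest over-time correlations that have been read as attitude instability.

```python
import numpy as np

# Minimal sketch (not from the article): perfectly stable true attitudes
# observed through noisy questions still yield only modest over-time correlations.
rng = np.random.default_rng(0)

n = 1500                      # hypothetical respondents
p = rng.normal(0.0, 1.0, n)   # true attitudes, identical at every wave
sigma_u = 1.0                 # assumed measurement-error standard deviation

# Three panel waves: observed response = true attitude + independent error.
x1, x2, x3 = (p + rng.normal(0.0, sigma_u, n) for _ in range(3))

r12 = np.corrcoef(x1, x2)[0, 1]
r23 = np.corrcoef(x2, x3)[0, 1]
print(f"wave 1-2 correlation: {r12:.2f}")   # about 0.5, despite zero true attitude change
print(f"wave 2-3 correlation: {r23:.2f}")
```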
- Type: Articles
- Copyright © American Political Science Association 1975
Footnotes
I would like to express my thanks to John Aldrich, Lloyd Etheredge, Richard Katz, Gerald Kramer, David Mayhew, Richard Niemi, Douglas Rae, and Charles Whitmore for helpful comments on earlier drafts. Remaining errors are, of course, the responsibility of the author. The data used were collected by the Survey Research Center at the University of Michigan and made available through the Inter-University Consortium for Political Research. Neither body is responsible for the analysis and interpretation presented here.
References
1 For detailed arguments on this point, see Dahl, Robert A., After the Revolution? (New Haven: Yale University Press, 1970), pp. 40–56; and de Jouvenel, Bertrand, “The Chairman's Problem,” American Political Science Review, 55 (June 1961), 368–72.
2 A helpful discussion of the many versions of “representation” is Pitkin, Hanna, The Concept of Representation (Berkeley: University of California Press, 1967).
3 Even a constituent choosing a legislator in the Burkean tradition requires some political opinions, lest he have no means of evaluating the candidate's political judgment.
4 See especially Campbell, Angus et al., The Voter Decides (Evanston: Row, Peterson, 1954); Campbell, Angus et al., The American Voter (New York: Wiley, 1960); Campbell, Angus et al., Elections and the Political Order (New York: Wiley, 1966); Converse, Philip E. et al., “Electoral Myth and Reality: The 1964 Election,” American Political Science Review, 59 (June 1965), 321–36.
5 Campbell et al., The American Voter, pp. 227–34.
6 Campbell et al., The American Voter, ch. 5. A comprehensive review of related literature is Natchez, Peter B., “Images of Voting: The Social Psychologists,” Public Policy, 18 (Summer 1970), 553–88. Analyses of voting that give less weight to party identification include RePass, David E., “Issue Salience and Party Choice,” American Political Science Review, 65 (June 1971), 389–400; and Boyd, Richard W., “Popular Control of Public Policy: A Normal Vote Analysis of the 1968 Election,” American Political Science Review, 66 (June 1972), 429–49.
7 Stokes, Donald and Miller, Warren, “Party Government and the Saliency of Congress,” in Campbell et al., Elections and the Political Order, p. 204.
8 Converse, Philip, “The Nature of Belief Systems in Mass Publics,” in Ideology and Discontent, ed. Apter, David E. (New York: Free Press, 1964), pp. 206–61. See also Converse, Philip E., “Attitudes and Non-Attitudes: Continuation of a Dialogue,” in The Quantitative Analysis of Social Problems, ed. Tufte, Edward R. (Reading, Massachusetts: Addison-Wesley, 1970), pp. 168–89.
9 Axelrod, Robert, “The Structure of Public Opinion on Policy Issues,” Public Opinion Quarterly, 31 (Spring 1967), 51–60; Dreyer, Edward C., “Change and Stability in Party Identification,” Journal of Politics, 35 (August 1973), 712–22; and Searing, Donald D. et al., “The Structuring Principle: Political Socialization and Belief Systems,” American Political Science Review, 67 (June 1973), 415–32. See also McClosky, Herbert, “Consensus and Ideology in American Politics,” American Political Science Review, 58 (June 1964), 361–82. For contrary views on related matters, see Luttbeg, Norman R., “The Structure of Beliefs among Leaders and the Public,” Public Opinion Quarterly, 32 (Fall 1968), 398–409; and Pomper, Gerald M., “From Confusion to Clarity: Issues and American Voters, 1956–1968,” American Political Science Review, 66 (June 1972), 415–28.
10 Brown, Steven R., “Consistency and the Persistence of Ideology: Some Experimental Results,” Public Opinion Quarterly, 34 (Spring 1970), 60–68. Since this article was written, Pierce, John C. and Rose, Douglas D. have published “Nonattitudes and American Public Opinion: The Examination of a Thesis,” American Political Science Review, 68 (June 1974), 626–49, which is also an interesting reexamination of Converse's findings. Its idiosyncratic statistical methods and restrictive assumptions, however, make possible several different interpretations of some of its key results, not all of them favorable to the authors' thesis. Many of these difficulties are pointed out by Converse in his powerful reply, pp. 650–60, in the same issue. (See also the rejoinder by Rose and Pierce, pp. 661–66.)
11 Two useful reviews of the individual choice literature that discuss stochastic choice and the evidence for it are Edwards, Ward, “The Theory of Decision Making,” Psychological Bulletin, 51 (1954), 380–417, and Edwards, Ward, “Behavioral Decision Theory,” Annual Review of Psychology, 12 (1961), 473–98. Both are reprinted in Decision Making, ed. Edwards, Ward and Tversky, Amos (Baltimore: Penguin, 1967), which also includes other discussions of the same topic.
12 Coombs, C. H., A Theory of Data (New York: Wiley, 1964), pp. 106–18. Reprinted in Edwards and Tversky, pp. 319–33.
13 For examples of the detection of interviewer bias, see Shapiro, Michael J., “Discovering Interviewer Bias in Open-Ended Survey Responses,” and Collins, W. Andrew, “Interviewers' Verbal Idiosyncrasies as a Source of Bias,” both in Public Opinion Quarterly, 34 (Fall 1970), 412–15 and 416–22, respectively.
14 Estimates obtained from the model, given the assumptions, meet the conditions required for consistency, i.e., asymptotic convergence in probability to the true values. See Dhrymes, P. J., Econometrics (New York: Harper and Row, 1970), pp. 112–13. Two violations of the assumptions seem probable. First, the fixed number of ordinal categories insures that the extreme opinions will always be observed with measurement error toward the center of the scale: a respondent with the true opinion, “strongly agree,” can be observed erroneously only by recording him as lower in agreement than he is. The reverse is true of those with opinion, “strongly disagree.” Hence a negative correlation between opinion and observation error exists. With five categories available for the policy questions, however, and seven for party ID, this error is likely to be slight. In fact, tests were run in which the response categories for these questions were collapsed to three, which should have exacerbated the problem. Instead, the final results of applying the model were virtually identical.
Second, because respondents with extreme opinions have measurement errors in only a single direction, the amount of possible error depends on the distribution of voters' opinions. Hence the assumption in the model of constant error variance implicitly assumes constant voter distributions. Distributions at the three different time periods, however, were so nearly similar for all questions that this difficulty was ignored. I am indebted to John Aldrich for bringing the latter point to my attention.
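A hedged sketch of the boundary effect described in footnote 14 may help: on a bounded ordinal scale, respondents at the endpoints can only be misrecorded toward the middle, so a negative correlation between true opinion and recording error arises, and it grows when the scale is collapsed from five categories to three. The uniform distribution of true opinions and the one-category slippage process below are assumptions made here for illustration, not the article's estimates.

```python
import numpy as np

# Hedged illustration of the boundary effect: errors at the endpoints of a
# fixed ordinal scale can only point toward the center, so true opinion and
# recording error correlate negatively, more so with fewer categories.
rng = np.random.default_rng(1)

def boundary_error_corr(n_categories: int, n: int = 20000) -> float:
    true = rng.integers(1, n_categories + 1, size=n)           # true category (assumed uniform)
    slip = rng.choice([-1, 0, 1], size=n, p=[0.2, 0.6, 0.2])   # assumed recording slippage
    observed = np.clip(true + slip, 1, n_categories)           # the scale is bounded
    error = observed - true
    return float(np.corrcoef(true, error)[0, 1])

print("5 categories:", round(boundary_error_corr(5), 3))  # mildly negative
print("3 categories:", round(boundary_error_corr(3), 3))  # more strongly negative
```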
15 It is not possible to assume that p₁ and u₂ are independent without u₁ controlled, though that is a more obvious assumption. Results (3)–(9) may be used to derive the value of E(p₁u₂). Strict independence of p₁ and u₂ requires E(p₁u₂) = E(p₁)E(u₂), and therefore also E(x₃x₁) − E(x₂x₁) = E(x₁)E(x₃ − x₂). Inspection of the data showed this to be consistently false, but even if it had been true, the fact that the value of E(p₁u₂) can be derived from (3)–(9) means that the new assumption adds nothing to the model. Statistically speaking, E(p₁u₂) is “identified” without Assumption Set 2 or any of its substitutes.
16 See, for example, Lord, Frederick M. and Novick, Melvin R., Statistical Theories of Mental Test Scores (Reading, Massachusetts: Addison-Wesley, 1968). The approach set out in the text also bears a resemblance to that developed in Wiley, David E. and Wiley, James A., “The Estimation of Measurement Error in Panel Data,” in Causal Models in the Social Sciences, ed. Blalock, H. M. (Chicago: Aldine-Atherton, 1971), pp. 364–74. The models were developed independently, however, and for different purposes; the Wileys have ratio scale data and their results depend on the existence of a meaningful zero point, while we have ordinal or interval level information only. The assumptions and computations involved are therefore somewhat different. The Wileys' article is also important for its statistical critique of Heise, David R., “Separating Reliability and Stability in Test-Retest Correlation,” American Sociological Review, 34 (February 1969), 93–101, which is reprinted in the Blalock volume, pp. 348–63.
17 See Abelson, Robert P. and Tukey, John W., “Efficient Conversion of Non-Metric Information into Metric Information,” in Tufte, ed., The Quantitative Analysis of Social Problems, pp. 407–17. For exact statements of the attitude questions, see Converse, “The Nature of Belief Systems in Mass Publics,” footnote 21.
18 Interestingly, some recognition of the weakness of the questions appears in Converse himself: he notes in “The Nature of Belief Systems in Mass Publics,” footnote 21, that his questions had to be revised when presented to congressmen, since their simplistic wording made responses so difficult. The difference in survey questions put to congressmen and to the mass public reduces the impact of another of Converse's findings, that correlations among attitudes are higher for congressmen than for ordinary citizens. It is well known that small changes in the wording of survey questions can make a considerable difference in response patterns.
19 This expression can be simplified a good deal for computational purposes, but it is listed here as it appeared in the derivations from the model, where its form eased the exposition.
20 The problem is particularly acute here because the dependent variable in the regression equation is itself a statistical estimate and incorporates a certain amount of measurement error. This will depress both the multiple correlations and the significance levels of the regression coefficients. Both are so low, however, that even very substantial upward revision would not alter the conclusions in the text. In addition, neither the regression coefficients nor the predicted values graphed in Figures 2 and 3 are biased by measurement error, and both of them indicate that the differences among the population in their understanding of the survey questions are slight. Unfortunately, the unknown distributional form of true and observed opinions and the single observation on measurement error variance per respondent make it impossible to estimate the size of the measurement error variance in the dependent variable without additional assumptions.
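The point in footnote 20 about error in the dependent variable can be illustrated with a small simulation (the data-generating values below are assumptions for illustration, not the article's data): random error added to the dependent variable leaves the slope estimate essentially unbiased but depresses the fit and, with it, the apparent significance.

```python
import numpy as np

# Hedged illustration: measurement error in the dependent variable does not
# bias the OLS slope, but it lowers R^2 (and hence significance levels).
rng = np.random.default_rng(2)

n = 1000
x = rng.normal(0.0, 1.0, n)
y_true = 0.3 * x + rng.normal(0.0, 1.0, n)    # assumed true relationship
y_noisy = y_true + rng.normal(0.0, 2.0, n)    # dependent variable measured with error

def ols_slope_r2(y, x):
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # simple OLS slope
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, r2

for label, y in [("clean y", y_true), ("noisy y", y_noisy)]:
    slope, r2 = ols_slope_r2(y, x)
    print(f"{label}: slope = {slope:.2f}, R^2 = {r2:.3f}")
# Both slopes estimate the same 0.3 up to sampling noise; R^2 drops sharply for the noisy y.
```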
21 Converse, “The Nature of Belief Systems in Mass Publics,” pp. 228–29.
22 In order to maximize sample sizes, estimates of reliabilities are based on that part of the sample with attitudes on the variable in question at all three time periods, while intercorrelations are based on those 1958 respondents who had an opinion on both the variables correlated. A corrected correlation, which is a function of two reliabilities plus an intercorrelation, thus depends upon three different samples of respondents. If some of the parameters being estimated differ among these samples, errors will be introduced. As one would expect, however, the three samples always overlap heavily, and the errors introduced are almost surely negligible.
These relatively high intercorrelations make it unlikely that the large over-time correlations found earlier are due to response biases in favor of “agree” or “disagree” answers. To give liberal answers on both the civil rights questions, for example, respondents had to agree to one and disagree with the other. The same is true for the Foreign Aid and Isolationism questions. Yet these two pairs of issues had the highest corrected r's.
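Footnote 22 describes a corrected correlation as a function of two reliabilities plus an intercorrelation. The sketch below uses the standard correction-for-attenuation formula as one plausible reading of that description; the function name and the numbers are placeholders, not values from the article.

```python
import math

# Illustrative sketch: disattenuate an observed correlation using the two
# items' estimated reliabilities (standard correction for attenuation).
def corrected_correlation(r_observed: float, rel_a: float, rel_b: float) -> float:
    """Return the observed correlation adjusted for unreliability in both items."""
    return r_observed / math.sqrt(rel_a * rel_b)

# Made-up placeholder values: a modest observed r becomes a sizable corrected r.
print(round(corrected_correlation(0.30, 0.5, 0.6), 2))  # ~0.55
```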
23 See Bennett, W. Lance, “Political Attitudes and Social Life” (Ph.D. dissertation, Yale University, 1974), who argues that uninformed citizens may develop stable opinions precisely because of their lack of information: they become subject to symbolic appeals by political leaders.
24 Two interesting attempts along these lines, using Markov chains, are Coleman, James S., Models of Change and Response Uncertainty (Englewood Cliffs, N.J.: Prentice-Hall, 1964); and Ginsberg, Ralph B., “Critique of Probabilistic Models: Application of the Semi-Markov Model to Migration” and “Incorporating Causal Structure and Exogenous Information with Probabilistic Models: With Special Reference to Choice, Gravity, Migration, and Markov Chains,” Journal of Mathematical Sociology, 2 (January 1972), 63–82 and 83–103, respectively.