Published online by Cambridge University Press: 01 August 2014
This paper utilizes the 1956–58–60 SRC panel study to examine the degree to which Americans hold attitudes on issues of public policy. The conclusions reject the thesis that only 20 to 30 per cent of the American public have true attitudes and that the remainder either refuse to take a position or respond randomly. The nonattitude thesis is rejected on the basis of: (1) a conceptualization of attitudes which allows for variation in responses through time without necessarily indicating the absence of attitudes or their random fluctuation; (2) an evaluation of the major statement of the nonattitude thesis; (3) a probability model for measuring attitudes in a panel study based on the assumption of twin samples, i.e., a sample of the population at one point in time, and a sample of the individual's attitude through time; and (4) the application of the probability model, leading to the conclusion that the number of individuals with attitudes has been severely underestimated. The implications of that finding are drawn for the relation of responses to attitudes and for democratic elitism.
1 Two decades ago, Bernard Berelson assessed the state of knowledge about public opinion and related it to the demands of democratic theory. See Berelson, Bernard, “Democratic Theory and Public Opinion,” Public Opinion Quarterly, 16 (Fall, 1952), 313–330.
2 For related studies examining the degree to which Americans believe in democratic norms see Stouffer, Samuel A., Communism, Conformity and Civil Liberties (Garden City, New York: Doubleday, 1955); Prothro, James W. and Grigg, Charles M., “Fundamental Principles of Democracy: Bases of Agreement and Disagreement,” Journal of Politics, 22 (Spring, 1960), 276–294; and McClosky, Herbert, “Consensus and Ideology in American Politics,” The American Political Science Review, 58 (June, 1964), 361–382.
3 Berelson, Bernard, Lazarsfeld, Paul F., and McPhee, William N., Voting (Chicago: University of Chicago Press, 1954), p. 309.
4 Campbell, Angus, Converse, Philip E., Miller, Warren E., and Stokes, Donald E., The American Voter (New York: John Wiley and Sons, 1960), p. 188.
5 Stokes, Donald E. and Miller, Warren E., “Party Government and the Saliency of Congress,” in Campbell, Angus, Converse, Philip E., Miller, Warren E., and Stokes, Donald E., Elections and the Political Order (New York: John Wiley and Sons, 1966), p. 199.
6 Converse, Philip E., “Attitudes and Non-Attitudes: Continuation of a Dialogue” (Survey Research Center, The University of Michigan, November, 1963), p. 15. See also Converse, Philip E., “The Nature of Belief Systems in Mass Publics,” in Ideology and Discontent, ed. Apter, David E. (Glencoe: The Free Press, 1964), pp. 238–245.
7 Butler, David and Stokes, Donald, Political Change in Britain (New York: St. Martin's Press, 1969), p. 178.
8 Butler and Stokes, p. 179.
9 Sniderman, Paul M. and Citrin, Jack, “Psychological Sources of Political Belief: Self-Esteem and Isolationist Attitudes,” The American Political Science Review, 65 (June, 1971), 415.
10 Hennessy, Bernard, “A Headnote on the Existence and Study of Political Attitudes,” in Political Attitudes and Public Opinion, ed. Nimmo, Dan D. and Bonjean, Charles M. (New York: David McKay Company, 1972), p. 36.
11 Key, V. O. Jr., The Responsible Electorate (Cambridge: Harvard University Press, 1966).
12 Key, p. 8.
13 Pomper, Gerald M., Elections in America (New York: Dodd Mead & Company, 1968), p. 92.
14 See, for example, Flanigan, William H., Political Behavior of the American Electorate (Boston: Allyn and Bacon, Inc., 1968), p. 62; and Sorauf, Frank J., Party Politics in America (Boston: Little, Brown and Company, 1968), p. 163, for general discussions. Also see the findings in Converse, Philip E., Miller, Warren E., Rusk, Jerrold G., and Wolfe, Arthur C., “Continuity and Change in American Politics: Parties and Issues in the 1968 Election,” The American Political Science Review, 63 (December, 1969), 1083–1105; Boyd, Richard W., “Popular Control of Public Policy: A Normal Vote Analysis of the 1968 Election,” The American Political Science Review, 66 (June, 1972), 429–449; and Pomper, Gerald M., “From Confusion to Clarity: Issues and American Voters, 1956–1968,” The American Political Science Review, 66 (June, 1972), 415–428.
15 RePass, David E., “Issue Salience and Party Choice,” The American Political Science Review, 65 (June, 1971), 400.
16 Ibid., 400.
17 Ibid.
18 Kramer, Gerald H., “Short-Term Fluctuations in U.S. Voting Behavior, 1896–1964,” The American Political Science Review, 65 (March, 1971), 140.
19 Fishbein, Martin and Coombs, Fred S., “Basis for Decision: An Attitudinal Approach Toward an Understanding of Voting Behavior,” paper delivered at the Sixty-Seventh Annual Meeting of the American Political Science Association (Chicago: September 7–11, 1971), p. 2.
20 Weisberg, Herbert F. and Rusk, Jerrold G., “Dimensions of Candidate Evaluation,” The American Political Science Review, 64 (December, 1970), 1167–1185.
21 Converse, “The Nature of Belief Systems in Mass Publics.”
22 Rosenberg, Milton J. and Hovland, Carl I., “Cognitive, Affective and Behavioral Components of Attitudes,” in Attitude Organization and Change, ed. Hovland, Carl I. and Rosenberg, Milton J. (New Haven: Yale University Press, 1960), p. 1.
23 Rosenberg and Hovland, p. 3.
24 Rosenberg, Milton J., “An Analysis of Affective-Cognitive Consistency,” in Hovland and Rosenberg, pp. 15–63.
25 See the general review by Zajonc, Robert B., “The Concepts of Balance, Congruity, and Dissonance,” in Attitude Change: The Competing Views, ed. Suedfeld, Peter (Chicago/New York: Aldine-Atherton, 1971), p. 66. See also: Rosenberg, Milton J., Verba, Sidney, and Converse, Philip E., Vietnam and the Silent Majority: The Dove's Guide (New York: Harper and Row, 1970), p. 86; and Festinger, Leon, A Theory of Cognitive Dissonance (Stanford, California: Stanford University Press, 1957), p. 3.
26 Rosenberg, “An Analysis of Affective-Cognitive Consistency,” p. 22; and Festinger, A Theory of Cognitive Dissonance, p. 16.
27 Converse, Philip E., “The Concept of the Normal Vote,” in Campbell et al., Elections and the Political Order, p. 15.
28 The problem of matching the “opinion surveyor's” response categories with the “individual's own categories for evaluation” is discussed in Sherif, Carolyn W., Sherif, Muzafer, and Nebergall, Roger E., Attitude and Attitude Change (Philadelphia: W. B. Saunders and Company, 1965), chapter 4.
29 Ibid.
30 This conception of an attitude as a range on a dimension is akin but not identical to the latitudes of acceptance and rejection in Sherif, Sherif, and Nebergall, pp. 24–25.
31 Technically, only people whose ranges cross the boundaries of question response categories are treated as having discrete point attitudes; however, because the categories themselves can be shifted with regard to referent attitudes, in effect all individuals are treated as having attitudes in the form of discrete points.
32 The opinion response may not correspond to the attitude a researcher wishes to tap because the question and the respondent's attitudes are at different levels. The question may be more precise or more general than the attitudes; characteristically, such questions require the respondent to employ several attitudes in forming a response. For instance, a Goldwater conservative might have difficulty responding to the school integration question posed in SRC surveys, for it requires him to respond to federal government intervention (negative) and school integration (positive) in simple agree/disagree fashion. Over repeated trials, such questions, which in effect create temporary cognitive dissonance, may produce inconsistent responses, if the attitudes are of relatively equal importance to the subject. Asking pacifists if they would defend their wife or mother from a rapist is a popular form of this pastime. We expect that for such items, the question responses are not a good index of the attitude as it would be exemplified in the respondent's behavior if the hypothetical situation confronted him.
33 Upshaw, Harry S., “Attitude Measurement,” in Methodology in Social Research, ed. Blalock, Hubert M. and Blalock, Ann (New York: McGraw-Hill, 1968), pp. 80–81, notes that Thurstone, in his development of “The Law of Comparative Judgment,”
assumed that each presentation of a stimulus arouses in the respondent some undesignated perceptual process which is, in principle, quantifiable. Furthermore, he assumed that the perception of a particular stimulus on any given presentation is distorted somewhat by random and independent factors associated with the experimental procedures and with the respondents. Because the assumed distorting factors are random and independent, the perceptual processes aroused by the repeated presentations of a given stimulus would, if quantified, be distributed normally. If this hypothetical distribution were available as data, its central tendency would be taken as the best estimate of the true position….
The same randomizing process may apply to the expression of the true attitude in response to that perception.
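Thurstone's assumption, as Upshaw summarizes it, can be illustrated with a small simulation (a sketch for illustration only; the stimulus position and the spread of the distorting factors are invented):

```python
import random
import statistics

# Each presentation arouses a perceptual value equal to the true scale
# position plus random, independent distortion. Across repeated
# presentations the perceptions are normally distributed, and their
# central tendency estimates the true position.
random.seed(42)

TRUE_POSITION = 5.0   # assumed true scale position of the stimulus
NOISE_SD = 1.5        # assumed spread of the random distorting factors

perceptions = [random.gauss(TRUE_POSITION, NOISE_SD) for _ in range(10_000)]
estimate = statistics.mean(perceptions)

print(f"estimated position: {estimate:.2f}")
```

With enough presentations the estimate settles close to the assumed true position, which is the logic behind taking the central tendency as the best estimate.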
34 An alternative means which has been employed in determining attitudes from a single interview is the correlation of separate items. For some of the problems in this method, see our discussion of attenuated correlations presented in the “implications” section of this paper.
35 The source for these data is the authors' analysis of the SRC panel data. This computation includes only those interviewees who expressed an identification in both samples. For subsequent computations we omit nonsubstantive responses. The panel data were collected by the Survey Research Center at the University of Michigan and made available through the Inter-University Consortium for Political Research. Neither the SRC nor the ICPR is responsible for the analysis and interpretations presented in this paper. The proportion stable varies with the accounting procedure. If all codes are counted, 59 per cent of the total panel is unstable on party identification in 1956–1958. If noninterviewed respondents are dropped, then instability is 45 per cent. When, additionally, the sample is reduced by omitting “don't know,” minor parties, apoliticals, etc., the instability remains 45 per cent. When the remaining respondents are limited to five categories by ignoring the leanings of the independents—and this is the calculation reported in the text—then 39 per cent are unstable. If the “strong-weak” distinction for partisans is ignored, then only 20 per cent are unstable, i.e., shift between the broad Democrat-Independent-Republican categories. If respondents who at either time answer “Independent” are dropped, the instability (Democrat, Republican) is limited to 5 per cent. By this point, however, most of the sample and categories have been omitted. The party identification turnover tables are presented on the bottom of page 632.
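The shifting percentages in footnote 35 reflect a mechanical point: collapsing response categories moves within-category movement onto the diagonal of the turnover table. A minimal sketch with invented counts (these are NOT the SRC panel figures):

```python
def instability(table):
    """Proportion of respondents off the main diagonal of a square turnover table."""
    total = sum(sum(row) for row in table)
    stable = sum(table[i][i] for i in range(len(table)))
    return 1 - stable / total

def collapse(table, groups):
    """Merge categories; groups[i] gives the new index for old category i."""
    k = max(groups) + 1
    out = [[0] * k for _ in range(k)]
    for i, row in enumerate(table):
        for j, n in enumerate(row):
            out[groups[i]][groups[j]] += n
    return out

# Invented 5x5 table: Strong Dem / Weak Dem / Independent / Weak Rep / Strong Rep,
# rows = first wave, columns = second wave.
five_way = [
    [200,  80,  10,   5,   5],
    [ 70, 150,  30,  10,   5],
    [ 10,  30, 200,  30,  10],
    [  5,  10,  30, 140,  60],
    [  5,   5,  10,  70, 180],
]

# Ignore the strong-weak distinction: merge into Dem / Ind / Rep.
three_way = collapse(five_way, [0, 0, 1, 2, 2])

print(f"five-category instability:  {instability(five_way):.0%}")
print(f"three-category instability: {instability(three_way):.0%}")
```

Because most movement in the invented table is between adjacent strength categories, the collapsed table shows far less instability, mirroring the 39 vs. 20 per cent pattern the footnote reports.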
36 Converse, “The Nature of Belief Systems in Mass Publics,” and Converse, “Attitudes and Non-Attitudes.”
37 Converse, “The Nature of Belief Systems in Mass Publics,” p. 241.
38 If individuals' opinions reflect a response set, then the responses are not random, for they represent a predisposition to respond and one which is consistent across attitude objects. Also, see Appendix II for a discussion of response set as it applies to the panel study. Response set as it is discussed here is not an agreement bias, but rather a “strong” bias and hence will not influence Converse's results. The only impact such a conceptualization will have on the model we subsequently present will be to increase the potential for error, rather than minimizing it.
39 Converse, “The Nature of Belief Systems in Mass Publics,” p. 245.
40 Ibid., p. 244.
41 Coleman, James S., “The Mathematical Study of Change,” in Blalock and Blalock, Methodology in Social Research, pp. 453–456.
42 Converse, “Attitudes and Non-Attitudes: Continuation of a Dialogue,” p. 15; and Converse, “The Nature of Belief Systems in Mass Publics,” p. 245. In the Black-and-White model, the total sample of 1514 is divided as follows: no interview exists at one or more trials for about 420 respondents, so these are excluded, leaving a base of about 1100 respondents. Another 80 respondents always reply “no opinion” or “don't know” and are counted as not having attitudes; additionally, about 420 respondents who give both substantive and “no opinion” or “don't know” responses are counted as not having attitudes, leaving a group of about 600 respondents consistently giving substantive responses; of these, about 140 at some time say “depends” and are thus counted as nonattitude respondents. The remaining 460 or so respondents—those taking sides on the issues in all three trials—are analyzed by the Black-and-White model into a group of about 250 without attitudes and a group of somewhat more than 200 assumed to have attitudes. Thus, of the base group of about 1100, less than 20 per cent are estimated by Converse to have attitudes. There are a number of other ways of dividing up the sample, and for each of these Converse's percentage estimates must be recast, though the basic calculations remain constant. Of the 1038 respondents who take sides in 1960, at most 275 have attitudes in the Black-and-White model, yet these “attitudes” are insufficient to explain the vast marginal differences, so the surplus of “strongly agree” responses (over half) and the shortage of “weakly disagree” answers (9 per cent) remain unexplained.
43 For instance, when the full five categories are used, as in Table 1B, stable responses are 141 per cent of the expected (marginally) stable responses, while with a two-category collapsed table (1C), stable responses are only 127 per cent of marginally expected stable responses; one-third of the extra stability has been removed by collapsing categories. For random expectations not based on the marginals, grouping has the same effect of discounting stable responses. In the full five by five table, anywhere from 100 to 254 responses can be counted as random, depending on the admission of “special” explanations for unequal numbers of responses in the unstable cells; even with special explanations, there are more unexplained stable responses in the full table than in the collapsed two by two table, so the net effect of grouping responses is to count more stable responses as nonattitudes.
44 By chance, some individuals will give the same response twice; thus some respondents without attitudes fall into the stable group, and the size of this group is, according to Converse, larger than the size of the attitude-holding population. See Converse, “The Nature of Belief Systems in Mass Publics,” p. 259 (footnote 39).
45 The presented data are neither the most significant nor the least significant findings. They are a cluster of three variables for the same time period, presented simply to indicate that they are not isolated cases.
46 Converse, in “The Nature of Belief Systems in Mass Publics,” p. 228, employs a similar correlational measure to compare attitudinal constraint among an elite sample and a mass sample.
47 For a discussion of the possibilities and complications of a revised “nonattitude” method see Appendix II.
48 For a basic discussion of methods of treating measurement error see Siegel, Paul M. and Hodge, Robert W., “A Causal Approach to the Study of Measurement Error,” in Blalock and Blalock, pp. 28–59; also see Coleman, James S., “The Mathematical Study of Change,” in Blalock and Blalock; Lazarsfeld, Paul and Henry, Neil, Latent Structure Analysis (Boston: Houghton Mifflin, 1968); Johnston, J., Econometric Methods, 2nd ed. (New York: McGraw-Hill, 1972), chapters 7 and 9; Lord, F. M. and Novick, M. R., Statistical Theories of Mental Test Scores (Reading, Mass.: Addison-Wesley, 1968); and Costner, Herbert, “Theory, Deduction and Rules of Correspondence,” American Journal of Sociology, 75 (September, 1969), 245–263.
49 Coleman, “The Mathematical Study of Change,” in Blalock and Blalock, p. 472.
50 Ibid., p. 439.
51 The conception of attitudes can be formalized in regression notation in the following manner: Yij = Xi + eij, where Yij is the response of the ith individual at the jth trial, Xi is the true attitude of the ith individual, and eij is the sum of the ith individual's nonattitude response influences at the jth trial. If the nonattitude response influences (eij) are random with regard to the true attitude, then E(Yij) = Xi. That is, across sufficient trials the opinion responses will directly reflect the underlying attitude. Given an infinite number of trials, with e random with regard to the attitude, an individual's attitudinal and nonattitudinal components or bases of response can be recovered for each trial.
Nonattitude components of response, while treated as error, are simply independent variables which we do not want to study precisely. Sometimes—if they are of sufficient magnitude, are non-random with regard to attitude, or are easily identifiable—they should be brought into the equation in the following manner: Yij = bi1Xi1 + bi2Xi2 + … + bikXik, where the Xik are variables (across k) which compose the ith individual's response at trial j. The bik are weights assigned by the ith individual to the variables going into the nonattitude response component. Response variation in this model arises from (1) variation across trials, individually and collectively, in the values of the variables, and (2) variation across individuals in the weights assigned to determining factors. When individuals' variable scores for several factors are constant across trials (Xi11 = Xi1j, Xi21 = Xi2j, etc.), the total of these factors can be treated as an individual's constant (his attitude): ai = bi1Xi1 + bi2Xi2 + …. This fuller model can be solved only with sufficient and detailed information at the individual level about the values of variables.
The above model is limited in two serious ways: (1) it is static, and (2) it makes no provision for perceptual screening. This screening has the effect of limiting or changing the values for short-term forces, change which is based on the values of the longer term variables and weights. Thus, some variables are partly dependent on other variables, yet independent in their effects on the response: Xit = BihYiht + γiktZik, where Xit is the vector of the responses of the ith individual across t trials, Yiht is the matrix of h short-term forces for the ith individual across t trials, Bih is the vector of h short-term force weights for the ith individual, Zik is the matrix of k long-term forces for the ith individual, and γikt is the vector of k weights for the long-term forces of individual i at t trials. Thus, the short-term forces depend on attitude, Yiht = γiktZik, and predisposing events such as questionnaire bias. Short-term forces have impact, but only after they have been screened.
Attitude change results not only from the direct addition of a new predisposing factor, but also from the impact of some screened variables. In attitude change, the change factor often is originally treated as just another short-term force. If, after screening, the change factor appears to be of sufficient duration and magnitude, it modifies the treatment of the predetermined variables (attitude): γik(t+1) = γiktZik + BihYiht. So, the importance attached to the long-term attitude components is a function of previous components and their importance plus short-term impacts.
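The static model of footnote 51 can be sketched in a short simulation (not the authors' procedure; the true attitude value and the noise level are invented): when the nonattitude influences eij are random with regard to the attitude, averaging an individual's responses across repeated trials recovers the underlying attitude Xi.

```python
import random
import statistics

# Model: Y_ij = X_i + e_ij, with e_ij random with regard to X_i.
random.seed(1)

def simulate_responses(true_attitude, n_trials, noise_sd=1.0):
    """One individual's responses across n_trials, each the true attitude
    plus a random nonattitude influence."""
    return [true_attitude + random.gauss(0, noise_sd) for _ in range(n_trials)]

responses = simulate_responses(true_attitude=3.0, n_trials=5000)
recovered = statistics.mean(responses)
print(f"recovered attitude: {recovered:.2f}")
```

Any single trial can sit well away from the true attitude, yet the mean across trials converges on it, which is the sense in which response variation need not indicate the absence of an attitude.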
52 Variations in the stability of categories should, in our model, depend on the distance from the mean and the size of the category. The expected difference in stability between categories can be calculated. However, other differences in variation can occur. In these cases, variation in individual standard deviations is related to the responses given. The likely sources of the association between category variation and individual variation all involve more than one attitude being tapped by the question. Generally, this second attitude has to do with a “strong-weak” dimension, while the dominant dimension remains agree-disagree on the particular issue involved. To illustrate this point, some individuals' attitudes are minimally differentiated—they simply agree with the particular issue position. No more complex response could be accurately elicited. When these individuals are forced into a strong-weak distinction, that distinction may be made on the basis of a second attitude. For many questions, the source of this second attitude appears to be found in a culturally determined preference for strength, as in “strongly agree.” As a result, many give strong responses which may or may not be related to their substantive preferences on the issue. This is of particular concern for individuals who are ambivalent in their true attitude—who through the normal variation we already have noted may come down one time on the agree side and another on the disagree side. Some people who are both ambivalent in their preference of agree or disagree and minimally differentiated in their choice are coded as strong agreement on the first trial and, a certain percentage of the time, give a disagree response on a later trial and are coded as strong disagreement because of the second attitude involved. Thus, the actual differences in response are magnified, and each time the variation from the true attitude is considerable.
“Weak” responses, which include only the single attitude dimension (no second attitudinal preference for weakness), thus show more stability than the “strong” responses (relative to the stability predicted from the distribution). If this strength dimension occurs, then the true estimate of attitudes should be geared mainly toward accounting for the weak categories. For example, assume the attitude continuum on public power control shown below, running from 0 (strong disagreement) through the “depends” point at 5 to 10 (strong agreement):
An individual's true attitude may be found at the “depends” point. On trial one, the response may be at the 6 point on the dimension as the result of short-term forces, but because of the second attitude biased toward strength he may in the end be coded at the 9 or 10 position. At the second trial, his response may be true at the 4 position, again because of short-term forces (still basically within his tolerance for inconsistency and relatively stable with his earlier 6 response—both varying only one point from his true attitude at position 5), yet the bias for strength may place him at the 1 or 0 position of strong disagreement. In the recorded response, his inconsistency appears to be extremely high: the difference between position 9 and position 1, while his true response varied only from 6 to 4, and his true attitude remained completely stable. The impact of the potential intrusion of a second, and possibly even a third, attitudinal dimension into the one the researcher is attempting to measure will certainly vary with the topic. Thus, for party identification, the strong-weak distinction appears to tap a politically more substantive attitude, in which the strength may be more closely tied to the actual attitude toward a respondent's own party.
53 Heise, David R., “Separating Reliability and Stability in Test-Retest Correlation,” American Sociological Review, 34 (February, 1969), 93–101.
54 When attitude change is not involved in response variation, the timing of surveys is not an essential item. Because individual variation around the mean is unsequenced, the sequence of surveys is irrelevant, and measures of change over time—such as Markov chains and Coleman's techniques—are inapplicable.
55 Tufte, E. R., “Improving Data Analysis in Political Science,” in Quantitative Analysis of Social Problems, ed. Tufte, E. R. (Reading, Mass.: Addison-Wesley, 1970), esp. pp. 440–441; and Wilson, Thomas P., “Critique of Ordinal Variables,” in Causal Models in the Social Sciences, ed. Blalock, esp. p. 417 and notes 2 and 3.
56 The particular measure is the absolute deviation of observed from predicted responses, divided by two. The division is necessitated by all deviations from prediction being counted twice, once positively and once negatively, producing error which potentially runs from zero to two; division by two results in error potentially running from zero to one (an alternative computational method is to count only positive [negative] error). No squaring procedure is used with the error for several reasons. First, it involves weighting cases, which we have no grounds to do. Second, it confuses interpretation; the percentage of cases deviating from prediction is easily understood. Third, the squaring overemphasizes large errors, but these are precisely the errors most likely to result from response bias and thus the errors which provide no true test of the hypothesis.
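The error index in footnote 56 is straightforward to compute; a minimal sketch with invented cell proportions (not the paper's data):

```python
# Half the summed absolute deviation of observed from predicted cell
# proportions. The division by two corrects for each misplaced case being
# counted once as surplus in one cell and once as deficit in another, so
# the index runs from 0 to 1.
observed  = [0.30, 0.20, 0.10, 0.15, 0.25]
predicted = [0.25, 0.25, 0.10, 0.15, 0.25]

error = sum(abs(o - p) for o, p in zip(observed, predicted)) / 2
print(f"proportion of cases deviating from prediction: {error:.2f}")
```

Here 5 per cent of the cases sit in cells other than the ones the model predicts, which matches the footnote's interpretation of the index as a percentage of misplaced cases.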
57 There is no simple test for the adequacy of the model, but our operating criterion of 90 per cent accuracy was chosen with the following considerations in mind. First, there is some imprecision in the percentage of respondents and the predicted frequencies. The sample averages 1000 or less respondents per table, so accuracy beyond the .1 per cent level is impossible; similarly, our 100 intervals of the normal distribution average a precision of about .2 per cent. From these imprecisions, we would expect a total of about 2 per cent inaccuracy across the 25 cells. Second, sampling theory would lead us to expect that if the predictions were correct for the entire population, half the time the deviation of the sample from the predictions (or population) would be more than 25–30 per cent across the 25 cells. Because our predictions are based on the actual sample frequencies, however, many of the deviations would have been taken account of in arriving at the predictions. Correspondingly, we note that if every cell were below its predicted error, total deviation would be 9 per cent—a more reasonable estimate. Third, actual change, response-set factors limited to one year, etc., cannot be accounted for by the prediction if they are unidirectional in net effect across the sample (otherwise they simply get counted as “nonattitude forces”—the fate of most School Integration and Job Guarantee change in 1956–1958). These range from 2 through 7 per cent for any pair of consecutive samples.
58 Short-term forces operating on both responses in one interview may increase the association between attitudes, however, so some care is required to avoid falsely correcting correlations. For specific guidelines on dealing with these problems see Costner, “Theory, Deduction and Rules of Correspondence”; Heise, “Separating Reliability and Stability in Test-Retest Correlation”; and Coleman, “The Mathematical Study of Change.”
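Footnote 58's caution concerns the conventional correction for attenuation from the measurement-error literature cited here (the formula is the standard one, not one given in the text; the observed correlation and reliabilities below are invented):

```python
import math

def disattenuate(r_observed, reliability_x, reliability_y):
    """Estimate the true-score correlation: the observed correlation is
    deflated by the unreliability of both measures, so dividing by the
    geometric mean of the reliabilities corrects upward."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

r_true = disattenuate(r_observed=0.30, reliability_x=0.6, reliability_y=0.6)
print(f"disattenuated correlation: {r_true:.2f}")
```

With both reliabilities at 0.6, a 0.30 observed correlation corrects to 0.50; the footnote's point is that shared short-term forces can inflate r_observed, in which case this correction overstates the true association.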
59 Bachrach, Peter, The Theory of Democratic Elitism: A Critique (Boston: Little, Brown and Company, 1967), p. 8.
60 Weisberg and Rusk, in “Dimensions of Candidate Evaluation,” p. 1185, state that “party and issues thus provide two basic mechanisms of candidate evaluation.”
61 As Charles F. Cnudde and Deane E. Neubauer point out, “One of the most important relationships supporting democratic stability involves citizens' preferences, leadership awareness of those preferences, and policy outcomes”; see “New Trends in Democratic Theory,” in Empirical Democratic Theory, ed. Cnudde and Neubauer (Chicago: Markham, 1969), p. 529. This relationship, of course, is central to the work of Miller, Warren E. and Stokes, Donald E., “Constituency Influence in Congress,” The American Political Science Review, 57 (March, 1963), 45–56, in which leadership responsiveness is shown to vary across issue domains.
62 Lipsitz, Lewis, in “Forgotten Roots,” in Frontiers of Democratic Theory, ed. Kariel, Henry S. (New York: Random House, 1970), pp. 402–403, discusses the impact of empirical findings such as those we discussed early in the paper and concludes that, in relation to thinking about the attitudinal foundations of democracy, “these are empirical questions that have not been asked clearly enough.”