Movements of the heavenly bodies are not affected in any discernible way by the fact that there are people on earth recording their apparent movement. Similarly, it is almost inconceivable that the planets would alter their orbits because of Kepler's discovery and publication of the laws of planetary motion. The social and behavioural sciences are different in that the objects under investigation may behave differently as a result of the research process. This is particularly true when the method involves naturalistic observation, surveys or experiments. When people behave differently because they are research subjects, this is called a Hawthorne effect.
1 This is different from the principle of limited measurability identified by Heisenberg in physics, which holds that the position and momentum of an electron cannot both be measured precisely at the same time. That is quite different from asserting that the characteristics or movement of an electron might be affected by the fact that it is being observed.
2 See French, John R. P. Jr, ‘Experiments in Field Settings’, in Festinger, L. and Katz, D., eds, Research Methods in the Social Sciences (New York: Holt, Rinehart and Winston, 1953), pp. 98–135; Roethlisberger, F. J. and Dickson, William J., Management and the Worker (Cambridge, Mass.: Harvard University Press, 1939); Ross, John and Smith, Perry, ‘Orthodox Experimental Designs’, in Blalock, H. and Blalock, A., eds, Methodology in Social Research (New York: McGraw-Hill, 1968), pp. 333–89. Exactly what was found in the Hawthorne Western Electric studies is a matter of continuing controversy in the social sciences. See, for example, Franke, R. H., ‘The Hawthorne Experiments: Re-view’, American Sociological Review, 44 (1979), 861–7; Bramel, Dana and Friend, Ronald, ‘Hawthorne, the Myth of the Docile Worker, and Class Bias in Psychology’, American Psychologist, 36 (1981), 867–78; Parsons, H. M., ‘What Happened at Hawthorne?’, Science, 183 (1974), 922–32. This issue cannot be pursued here, as our interest is only in the general possibility, which nearly everyone would concede, that people may behave differently because of participating in research.
3 See Gergen, Kenneth, ‘Social Psychology as History’, Journal of Personality and Social Psychology, 26 (1973), 309–20. This was an unfortunate choice of terms by Gergen in that it implies the sanguine view that only beneficial or progressive use will be made of published materials in the social sciences. This seems unwarranted. Presumably the knowledge of social science could also be used for nefarious purposes.
4 See Rosenberg, Milton J., ‘The Conditions and Consequences of Evaluation Apprehension’, in Rosenthal, R. and Rosnow, R., eds, Artifact in Behavioral Research (New York: Academic Press, 1969), pp. 279–349.
5 See Aronson, Elliot, Ellsworth, Phoebe C., Carlsmith, J. Merrill and Gonzales, Marti Hope, Methods of Research in Social Psychology (New York: McGraw-Hill, 1990).
6 See Clausen, Aage R., ‘Response Validity: Vote Report’, Public Opinion Quarterly, 32 (1968), 588–606.
7 This effect is analytically different from any alleged effects of canvassing by political campaigns. In the pre-election interviews in the election studies, people are not directly encouraged to vote.
8 See Kraut, Robert E. and McConahay, John B., ‘How Being Interviewed Affects Voting: An Experiment’, Public Opinion Quarterly, 37 (1973), 398–406; Yalch, Richard F., ‘Pre-Election Interview Effects on Voter Turnout’, Public Opinion Quarterly, 40 (1976), 331–6; Anderson, Barbara A., Silver, Brian D. and Abramson, Paul R., ‘The Effects of Race of the Interviewer on Measures of Electoral Participation by Blacks in SRC National Election Studies’, Public Opinion Quarterly, 52 (1988), 53–83.
9 See Anderson, Barbara A. and Silver, Brian D., ‘Measurement and Mismeasurement of the Validity of the Self-Reported Vote’, American Journal of Political Science, 30 (1986), 771–85.
10 See Swaddle, Kevin and Heath, Anthony, ‘Official and Reported Turnout in the British General Election of 1987’, British Journal of Political Science, 19 (1989), 537–51; Marsh, Catherine, ‘Prediction of Voting Behaviour from a Pre-election Survey’, Political Studies, 33 (1985), 642–8.
11 See Granberg, Donald and Holmberg, Sören, The Political System Matters: Social Psychology and Voting Behavior in Sweden and the United States (Cambridge: Cambridge University Press, 1988).
12 See Gilljam, Mikael, Holmberg, Sören, Asp, Kent, Bennulf, Martin, Esaiasson, Peter and Oskarson, Maria, Rött Blått Grönt: En Bok om 1988 Riksdagsval [Red, Blue, Green: A Book about the 1988 Parliamentary Election] (Stockholm: Bonniers, 1990); Holmberg, Sören, ‘Election Studies: The Swedish Way’ (paper presented at a conference on the Comparative History of Election Studies at the University of Twente, the Netherlands, June 1990).
13 In all of the Swedish election studies, people who were interviewed before or after the election voted at a higher rate than people who were chosen as part of the original sample but were, for whatever reason, not interviewed. In addition to the people who were members of the panel from the preceding election, we excluded those who were assigned to be interviewed before the election but were not interviewed until after the election. Also excluded were people who agreed to participate only in a short or a very short interview. This was done to control for the fact that more time is available for interviewing after the election and a greater effort is made to interview people after the election, even if they agree to only a short interview. Even though we consider our method of exclusion to be a fairer test, the results would not be altered significantly if all the non-panel people who were interviewed in a given year were included. Incidentally, the turnout percentages were 89 and 91 respectively for those given the short and very short interviews. Thus, the turnout rate for these people was slightly lower than for people who were given the full interview but higher than for people who were not interviewed at all. The turnout for the people who were selected as part of the original sample but who were not interviewed was 80 per cent. See Granberg, Donald and Holmberg, Sören, ‘Self-Reported Turnout and Voter Validation’, American Journal of Political Science, 35 (1991), 448–59.
14 See Zeisel, Hans, Say It with Figures (New York: Harper and Row, 1968).
15 Clausen's data on the stimulus hypothesis involved a turnout of 77.7 per cent for the Survey Research Center's respondents who were interviewed before the election, compared to turnouts of 71.2 per cent and 72 per cent for the Census and Economic Survey respondents who were not interviewed before the election. If we subtract the average of the latter two from the former, the stimulus effect is just over 6 percentage points. We can then take the turnout for the post-election interviewees as the baseline and subtract it from 100 to find the maximum possible amount of change. If we divide the amount of change (6.1 percentage points) by the maximum (28.4 percentage points), the result implies that in 1964 about 21 per cent of the erstwhile non-voters in the US survey were stimulated by the pre-election interview to vote. This is similar to our estimate of the relative size of the stimulus effect in Sweden. These estimates could be net effects. That is, it is possible that some people became bored or upset by the political content of the pre-election survey and were thereby demobilized, i.e. they did not vote although they would have voted without the pre-election interview. If so, they must be outweighed by those who were stimulated to vote by being asked the battery of pre-election interview questions.
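Readers who wish to verify the arithmetic may find it easier in equation form; all figures are taken from the text of this note:
\[
\text{stimulus effect} = 77.7 - \frac{71.2 + 72.0}{2} = 77.7 - 71.6 = 6.1 \text{ percentage points}
\]
\[
\text{relative stimulus effect} = \frac{6.1}{100 - 71.6} = \frac{6.1}{28.4} \approx 0.21
\]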
16 Clausen, ‘Response Validity: Vote Report’, p. 604.
17 While people were randomly assigned to be interviewed before or after the election, they were categorized as high or low in political interest on the basis of self-selection. This could introduce a bias, in that voting or not voting could have an effect on one's level of political interest. Bem's self-perception theory holds that people infer their own psychological characteristics (attitudes, or in this case, level of political interest) from observing their own behaviour (see Bem, Daryl J., ‘Self-Perception: An Alternative Interpretation of Cognitive Dissonance Phenomena’, Psychological Review, 74 (1967), 183–200). If this applies here, people might infer high interest from voting and low interest from not voting. This implies the hypothesis that, compared to voters interviewed before the election, voters interviewed after the election would express a higher level of interest. At the same time, compared to non-voters interviewed before the election, non-voters interviewed after the election would express a lower level of interest in politics. The evidence, however, shows that the distribution on the interest question is nearly the same for voters interviewed before and after the election, and for non-voters interviewed before and after the election.
18 See Lazarsfeld, Paul, Berelson, Bernard and Gaudet, Hazel, The People's Choice: How the Voter Makes Up His Mind in a Presidential Campaign (New York: Columbia University Press, 1944).
19 While our interpretation is plausible and consistent with the facts, it is by no means the only one that could be made. We cannot claim direct knowledge of the underlying psychological process that occurs as a result of being interviewed during the pre-election period. The harshest interpretation of the stimulus effect would be that people somehow feel intimidated, coerced or frightened into voting out of a feeling that their behaviour will be under continuing surveillance and monitoring in the future. We do not favour such an interpretation and do not think it applies to the Swedish election studies we have analysed. We point it out here merely to acknowledge our ignorance of the underlying process associated with the stimulus effect.
20 See Sherman, Steven J., ‘On the Self-Erasing Nature of Errors of Prediction’, Journal of Personality and Social Psychology, 39 (1980), 211–21; Greenwald, Anthony, Carnot, Catherine, Beach, Rebecca and Young, Barbara, ‘Increasing Voting Behavior by Asking People if They Expect to Vote’, Journal of Applied Psychology, 72 (1987), 315–18.