In the 1980s and 1990s, psychologists made many cross-cultural comparisons of individualism and collectivism using questionnaires and experiments. The largest number of these compared “collectivistic” Japanese with “individualistic” Americans. This chapter reviews 48 such empirical comparisons and finds that Japanese were no different from Americans in their degree of collectivism. Questionnaire studies and experimental studies showed essentially the same pattern of results. Many researchers who believed in “Japanese collectivism” suspected flaws in those empirical studies, but none of the suspected flaws was consistent with the empirical evidence. For example, although it was suspected that “Japanese collectivism” was not supported because the participants were college students, the studies with non-student adults did not support this common view either. It is thus unquestionable that, as a whole, the empirical studies disproved the reality of “Japanese collectivism.”
Researchers are increasingly reliant on online, opt-in surveys. But prior benchmarking exercises employ national samples, making it unclear whether such surveys can effectively represent Black respondents and other minorities nationwide. This paper presents the results of uncompensated online and in-person surveys administered chiefly in one racially diverse American city—Philadelphia—during its 2023 mayoral primary. The participation rate for online surveys promoted via Facebook and Instagram was 0.4%, with White residents and those with college degrees more likely to respond. Such biases help explain why neither our surveys nor public polls correctly identified the Democratic primary’s winner, an establishment-backed Black Democrat. Even weighted, geographically stratified online surveys typically underestimate the winner’s support, although an in-person exit poll does not. We identify some similar patterns in Chicago. These results indicate important gaps in the populations represented in contemporary opt-in surveys and suggest that alternative survey modes help reduce them.
Foreign language can either enhance decision-making by triggering more deliberation or worsen it through cognitive overload. We tested these two hypotheses with respect to one response bias: acquiescence. In three experiments, 413 participants made dichotomous decisions about whether each of 100 personality traits described them. Participants showed more acquiescence in a foreign language than in their native one, giving more certifying responses when deciding on known traits. Reaction-time results suggest that a foreign language affects the rejection of traits more than their certification. These findings support the cognitive overload hypothesis and provide valuable insights into the influence of language on response bias.
A recent survey of inequality (Norton and Ariely, Perspectives on Psychological Science, 6, 9–12) asked respondents to indicate what percent of the nation’s total wealth is—and should be—controlled by richer and poorer quintiles of the U.S. population. We show that such measures lead to powerful anchoring effects that account for the otherwise remarkable findings that respondents reported perceiving, and desiring, extremely low inequality in wealth. We show that the same anchoring effects occur in other domains, namely web page popularity and school teacher salaries. We introduce logically equivalent questions about average levels of inequality that lead to more accurate responses. Finally, when we made respondents aware of the logical connection between the two measures, the majority said that typical responses to the average measures, indicating higher levels of inequality, better reflected their actual perceptions and preferences than did typical responses to percent measures.
Major depression has become one of the most frequent diagnoses in Germany. It is also quite prominent in cases referred for medicolegal assessment in insurance, compensation or disability claims. This report evaluates the validity of clinicians’ diagnoses of major depression in a sample of claimants. In 2015, n = 127 consecutive cases were examined for medicolegal assessment. All had been diagnosed with major depression by clinicians. All testees underwent a psychiatric interview and a physical examination, and they answered questionnaires on depressive symptoms according to DSM-5, embitterment disorder, post-concussion syndrome (PCS) and unspecific somatic complaints. Performance and symptom validity tests were administered. Only 31% of the sample fulfilled the diagnostic criteria for DSM-5 major depression according to self-report, while none did so according to psychiatric assessment. Negative response bias was found in 64% of cases, feigned neurologic symptoms in 22%. Symptom exaggeration was indiscriminate rather than depression-specific. By self-report (i.e. symptom endorsement in questionnaires), 64% of the participants qualified for embitterment disorder and 93% for PCS. In conclusion, clinicians’ diagnoses of depression seem frequently erroneous. The reasons are improper assessment of the diagnostic criteria, confusion of depression with bereavement or embitterment, and a failure to assess for response bias.
How can we elicit honest responses in surveys? Conjoint analysis has become a popular tool to address social desirability bias (SDB), or systematic survey misreporting on sensitive topics. However, there has been no direct evidence showing its suitability for this purpose. We propose a novel experimental design to identify conjoint analysis’s ability to mitigate SDB. Specifically, we compare a standard, fully randomized conjoint design against a partially randomized design where only the sensitive attribute is varied between the two profiles in each task. We also include a control condition to remove confounding due to the increased attention to the varying attribute under the partially randomized design. We implement this empirical strategy in two studies on attitudes about environmental conservation and preferences about congressional candidates. In both studies, our estimates indicate that the fully randomized conjoint design could reduce SDB for the average marginal component effect (AMCE) of the sensitive attribute by about two-thirds of the AMCE itself. Although encouraging, we caution that our results are exploratory and exhibit some sensitivity to alternative model specifications, suggesting the need for additional confirmatory evidence based on the proposed design.
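Because attribute levels in a fully randomized conjoint design are independent of all other attributes, the AMCE of a binary attribute reduces to a difference in mean choice rates between its two levels. A minimal sketch of that estimator, using hypothetical data (the attribute name `pro_conservation` and the responses are illustrative, not taken from the studies described above):

```python
# Hedged sketch: estimating an AMCE as a difference in means.
# In a fully randomized conjoint, a binary attribute's AMCE is
# mean(chosen | level 1) - mean(chosen | level 0).

def amce(profiles, attribute):
    """profiles: list of dicts, each with the attribute (0/1) and a 'chosen' (0/1) outcome."""
    treated = [p["chosen"] for p in profiles if p[attribute] == 1]
    control = [p["chosen"] for p in profiles if p[attribute] == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Illustrative profile data for a hypothetical sensitive attribute.
data = [
    {"pro_conservation": 1, "chosen": 1},
    {"pro_conservation": 1, "chosen": 1},
    {"pro_conservation": 1, "chosen": 0},
    {"pro_conservation": 0, "chosen": 0},
    {"pro_conservation": 0, "chosen": 1},
    {"pro_conservation": 0, "chosen": 0},
]
print(amce(data, "pro_conservation"))  # 2/3 - 1/3 ≈ 0.333
```

In practice the AMCE is usually estimated by regressing the choice outcome on attribute indicators with respondent-clustered standard errors; the difference-in-means form above is the special case for a single binary attribute.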
Declining telephone response rates have forced several transformations in survey methodology, including cell phone supplements, nonprobability sampling, and increased reliance on model-based inferences. At the same time, advances in statistical methods and vast amounts of new data sources suggest that new methods can combat some of these problems. We focus on one type of data source—voter registration databases—and show how they can improve inferences from political surveys. These databases allow survey methodologists to leverage political variables, such as party registration and past voting behavior, at a large scale and free of overreporting bias or endogeneity between survey responses. We develop a general process to take advantage of this data, which is illustrated through an example where we use multilevel regression and poststratification to produce vote choice estimates for the 2012 presidential election, projecting those estimates to 195 million registered voters in a postelection context. Our inferences are stable and reasonable down to demographic subgroups within small geographies and even down to the county or congressional district level. They can be used to supplement exit polls, which have become increasingly problematic and are not available in all geographies. We discuss problems, limitations, and open areas of research.
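The poststratification step in multilevel regression and poststratification (MRP) is a population-weighted average of model-based estimates for demographic/geographic cells, with weights taken from known population counts (here, counts from a voter registration database). A minimal sketch under those assumptions; the cell labels and numbers are hypothetical, not from the paper:

```python
# Hedged sketch of poststratification: weight per-cell model estimates
# by the number of registered voters in each cell, then aggregate.

def poststratify(cell_estimates, cell_counts):
    """Population-weighted average of per-cell estimates."""
    total = sum(cell_counts[c] for c in cell_estimates)
    return sum(cell_estimates[c] * cell_counts[c] for c in cell_estimates) / total

# Hypothetical cells: (party registration, age group) within one county.
estimates = {("Dem", "18-44"): 0.85, ("Dem", "45+"): 0.80,
             ("Rep", "18-44"): 0.15, ("Rep", "45+"): 0.10}
counts = {("Dem", "18-44"): 30_000, ("Dem", "45+"): 40_000,
          ("Rep", "18-44"): 20_000, ("Rep", "45+"): 10_000}
print(poststratify(estimates, counts))  # 61500 / 100000 = 0.615
```

In a full MRP pipeline the `estimates` would come from a multilevel model fit to the survey data; the aggregation step itself is just this weighted average, applied within whatever geography (county, district, state) is of interest.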
Invalid responding is an important consideration in mental health assessment. Given that most assessment data are gathered from self-report methods, accurate diagnostic and clinical impressions can be compromised by various forms of response bias. In this chapter, we review the ways in which evaluations of psychopathology, neurocognitive symptoms, and medical/somatic presentations can be compromised due to noncredible responding and invalidating test-taking approaches. We cover a variety of strategies and measures that have been developed to assess invalid responding. Further, we discuss evaluation contexts in which invalid responding is most likely to occur. We conclude with some remarks regarding cultural considerations as well as how technology can be incorporated into the assessment of response bias.
Recent years have seen a renaissance of conjoint survey designs within social science. To date, however, researchers have lacked guidance on how many attributes they can include within conjoint profiles before survey satisficing leads to unacceptable declines in response quality. This paper addresses that question using pre-registered, two-stage experiments examining choices among hypothetical candidates for US Senate or hotel rooms. In each experiment, we use the first stage to identify attributes which are perceived to be uncorrelated with the attribute of interest, so that their effects are not masked by those of the core attributes. In the second stage, we randomly assign respondents to conjoint designs with varying numbers of those filler attributes. We report the results of these experiments implemented via Amazon's Mechanical Turk and Survey Sampling International. They demonstrate that our core quantities of interest are generally stable, with relatively modest increases in survey satisficing when respondents face large numbers of attributes.
In recent years, political and social scientists have made increasing use of conjoint survey designs to study decision-making. Here, we study a consequential question which researchers confront when implementing conjoint designs: How many choice tasks can respondents perform before survey satisficing degrades response quality? To answer the question, we run a set of experiments where respondents are asked to complete as many as 30 conjoint tasks. Experiments conducted through Amazon’s Mechanical Turk and Survey Sampling International demonstrate the surprising robustness of conjoint designs, as there are detectable but quite limited increases in survey satisficing as the number of tasks increases. Our evidence suggests that in similar study contexts researchers can assign dozens of tasks without substantial declines in response quality.
Acquiescence response bias is the tendency to agree to questionnaires irrespective of item content or direction, and is problematic for both researchers and clinicians. Further research is warranted to clarify factors relating to the confounding influence of acquiescence. Building on previous research that investigated the interaction between acquiescence, age, and secondary education, the current study has considered the role of adult higher educational achievement and acquiescence. Using the Big Five Inventory (BFI), acquiescence scores were calculated for a sample of 672 Australian adults (age M = 41.38, SD = 12.61). There was a significant inverse relationship between the variance in acquiescence scores and formal education. The greatest difference was found between the lowest education groups and the highest education groups, with the variance of the lower groups more than twice as large as the higher groups. The confounding influence of acquiescence was demonstrated using the BFI and targeted rotation to an ideal matrix, where worse model fit was found in the lower education group compared to the higher group. Implications for both researchers and clinicians are explored.
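Acquiescence indices of the kind used with the BFI are typically computed by averaging responses across pairs of oppositely keyed items: a respondent who endorses both "is talkative" and "is reserved" is agreeing regardless of content. A minimal sketch of that scoring logic (the item pairs and ratings are hypothetical, and this is not the study's exact scoring procedure):

```python
# Hedged sketch: an acquiescence index from oppositely keyed item pairs.
# On a 1-5 scale, the mean of an item and its reverse-keyed counterpart
# should sit near the midpoint (3) for a content-driven respondent;
# consistently higher means indicate acquiescent responding.

def acquiescence_index(responses, pairs):
    """Mean rating across oppositely keyed item pairs.

    responses: dict mapping item name -> rating (1-5)
    pairs: list of (item, reverse_keyed_item) tuples
    """
    pair_means = [(responses[a] + responses[b]) / 2 for a, b in pairs]
    return sum(pair_means) / len(pair_means)

# Hypothetical respondent who tends to agree regardless of item direction.
resp = {"talkative": 5, "reserved": 4, "trusting": 5, "finds_fault": 4}
pairs = [("talkative", "reserved"), ("trusting", "finds_fault")]
print(acquiescence_index(resp, pairs))  # (4.5 + 4.5) / 2 = 4.5
```

An index well above the scale midpoint, as here, flags yea-saying; the study's finding that its variance shrinks with education can then be examined by comparing the spread of such scores across education groups.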
Symptom validity testing (SVT) has become a major theme of contemporary neuropsychological research. However, many issues about the meaning and interpretation of SVT findings will require the best in research design and methods to more precisely characterize what SVT tasks measure and how SVT test findings are to be used in neuropsychological assessment. Major clinical and research issues are overviewed including the use of the “effort” term to connote validity of SVT performance, the use of cut-scores, the absence of lesion-localization studies in SVT research, neuropsychiatric status and SVT performance and the rigor of SVT research designs. Case studies that demonstrate critical issues involving SVT interpretation are presented. (JINS, 2012, 18, 1–11)
This paper examines research on three hypnotic phenomena: suggested amnesia, suggested analgesia, and “trance logic.” For each case a social-psychological interpretation of hypnotic behavior as a voluntary response strategy is compared with the traditional special-process view that “good” hypnotic subjects have lost conscious control over suggestion-induced behavior. I conclude that it is inaccurate to describe hypnotically amnesic subjects as unable to recall the material they have been instructed to forget. Although amnesics present themselves as unable to remember, they in fact retain control over retrieval processes and accommodate their recall (or lack of it) to the social demands of the test situation. Hypnotic suggestions of analgesia do not produce a dissociation of pain from phenomenal awareness. Nonhypnotic suggestions of analgesia and distractor tasks that deflect attention from the noxious stimuli are as effective as hypnotic suggestions in producing reductions in reported pain. Moreover, when appropriately motivated, subjects low in hypnotic suggestibility report pain reductions as large as those reported by highly suggestible hypnotically analgesic subjects. Finally, the data fail to support the view that a tolerance for logical incongruity (i.e., trance logic) uniquely characterizes hypnotic responding. So-called trance-logic-governed responding appears to reflect the attempts of “good” subjects to meet implicit demands to report accurately what they experience.