
Trust in motives, trust in competence: Separate factors determining the effectiveness of risk communication

Published online by Cambridge University Press:  01 January 2023

Matt Twyman*
Affiliation:
Department of Psychology, University College London
Nigel Harvey
Affiliation:
Department of Psychology, University College London
Clare Harries
Affiliation:
Department of Psychology, University College London
*Address: Nigel Harvey, Department of Psychology, University College London, Gower Street, London, WC1E 6BT, UK. Email: [email protected]

Abstract

According to Siegrist, Earle and Gutscher’s (2003) model of risk communication, the effect of advice about risk on an agent’s behavior depends on the agent’s trust in the competence of the advisor and on their trust in the motives of the advisor. Trust in competence depends on how good the advice received from the source has been in the past. Trust in motives depends on how similar the agent assesses the advisor’s values to be to their own. We show that past quality of advice and degree of similarity between advisors’ and judges’ values have separate (non-interacting) effects on two types of agent behavior: the degree of trust expressed in a source (stated trust) and the weight given to the source’s advice (revealed trust). These findings support Siegrist et al.’s model. We also found that revealed trust was affected more than stated trust by differences in advisor quality. It is not clear how this finding should be accommodated within Siegrist et al.’s (2003) model.

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors [2008] This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Throughout the social sciences, trust is recognized as an important factor that mediates many aspects of human behavior (Camerer, 2003; Fukuyama, 1995; Kramer and Tyler, 1996; Markova, 2004). Definitions of trust vary, but a widely accepted one is that it is “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” (Rousseau, Sitkin, Burt and Camerer, 1998). Thus, a person (the trustor) who depends on someone else (the trustee) expects to reduce the likelihood or size of a negative outcome in some situation: when that dependence is misplaced, the expected value of the outcome is lower. Experimental work on trust has been carried out in various contexts, including behavioral game theory (Camerer, 2003), on-line commerce (Grabner-Kräuter and Kaluscha, 2003; Riegelsberger, Sasse and McCarthy, 2005), and risk communication. The work that we report here falls into the last of these three domains.

Risk communication provides information that is fallible: it gives people advice about levels of risk associated with hazards. Reliance on advisors signals an acceptance of vulnerability based on expectations that those advisors are competent and well-meaning (when, in fact, they may not be). Such reliance provides evidence of trust in the sense encapsulated by the above definition. When people rely more on certain advisors, we can say that their behavior reveals that they have more trust in those advisors (Twyman, Harvey and Harries, 2006).

Recent research into trust indicates that it is determined by a number of factors (Mayer, Davis and Schoorman, 1995; Renn and Levine, 1991). These factors can be broadly categorized into two groups. The first concerns the competence of the trustee (ability, competence, expertise, knowledge). The second concerns the motives of the trustee (benevolence, integrity, honesty, fairness). On the basis of findings such as these, Siegrist, Earle and Gutscher (2003) and Siegrist, Gutscher and Earle (2005) developed their trust-confidence-cooperation (TCC) model. A simplified version of it is shown in Figure 1 (see Footnote 1).

Figure 1: Siegrist et al.’s (2003, 2005) model of risk communication.

According to this model, two different types of trust determine the degree to which people cooperate with their advisors. The first is trust in motives (also known simply as “trust” or “social trust”) and the second is trust in competence (known as “confidence”). The cooperative intention produced by these two types of trust results in cooperative behaviors of various types. For example, people may express trust in their advisors, they may use advice from them to form their own judgments, or they may act on the basis of their advice.

Trust in competence is determined by past history of the quality of advice produced by the source. This type of information has already been shown to affect advice-taking: people place greater weight on information received from sources who have been more accurate in the past (e.g., Fischer and Harvey, 1999; Harvey and Fischer, 1997). Trust in motives is determined by how similar the judge assesses the advisor’s values to be to their own. Siegrist et al.’s (2003, 2005) model predicts that people will take more advice from advisors whose values they judge to be more similar to their own. To date, this prediction of their two-route model has not been tested. Our aim here is to provide such a test.

In the past, researchers into advice-taking have measured trust in advice by using behavioral measures. For example, given two different pieces of advice for the value of a numerical variable (e.g., a risk level), a judgment closer to the first than to the second indicates greater influence of the first. Hence, relative proximity of judgments to advice from different sources provides a behavioral means of assessing the relative influence of those sources of advice.
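As a concrete illustration of this proximity logic, here is a minimal Python sketch. The function and formula are ours for illustration, not necessarily the measure the authors used (their definitions appear in Table 1 below):

```python
def revealed_weight(judgment: float, advice_a: float, advice_b: float) -> float:
    """Relative influence of advisor A, inferred from where the judgment
    falls relative to the two pieces of advice: 1.0 if it coincides with
    A's advice, 0.0 if it coincides with B's, 0.5 if equidistant."""
    dist_a = abs(judgment - advice_a)
    dist_b = abs(judgment - advice_b)
    if dist_a + dist_b == 0:        # both advisors gave identical advice
        return 0.5
    return dist_b / (dist_a + dist_b)

# Advice of 10 and 20 deaths per million: a judgment of 12 sits closer
# to the first advisor, so that advisor receives the larger weight.
print(revealed_weight(12, 10, 20))  # 0.8
```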

In contrast, researchers into trust have required people to make verbal or numerical estimates of their trust in different sources of information — typically by using rating scales. However, O’Neill (2002) has argued that behavioral and verbal measures of trust may not always coincide. For example, people may state that they do not trust an agent when their behavior reveals that they do. They may do this because the behavioral placement of trust relies on implicit (intuitive, nonconscious) processes that are not easily accessed by the explicit processes required for the verbal expression of trust. Indeed, there is some evidence that stated and revealed trust do dissociate under certain conditions (Twyman, Harvey and Harries, 2006). Given these results, we shall measure trust in both ways in the present study.

Siegrist et al.’s (2003, 2005) model predicts that similarity of values, intentions and goals will increase trust in motives and thereby increase the influence of an advisor on a judge. However, it does not make any prediction about the effects of physical similarity of the advisor and the judge. Nevertheless, there is good reason to expect that people will be more influenced by advisors who are the same sex as they are or who are approximately the same age as they are. This is because there is persuasive evidence that attitudes and values, particularly those relating to risk and technology, are more likely to be similar in people of the same sex and similar age (Deakin, Aitken, Robbins and Sahakian, 2004; Morris and Venkatesh, 2000; Morris, Venkatesh and Ackerman, 2005; Rosen, 2003; Siegrist et al., 2005). As a result, physical (i.e., age, sex) similarity may imply a degree of value similarity. If it does, advisors who are physically similar to judges are likely to be trusted more. Also, advisors who are similar to judges in both respects are likely to be trusted more than those who are similar to them in just one respect.

2 Experiment

To test these predictions, we required people to make risk estimates for a variety of hazards by using advice from two sources. The hazards were of four types: occupational, drug-taking, transport, and recreational. The advisors belonged to a government agency and a consumer organization appropriate to the hazard.

For half the participants, the government agency was the better advisor and the consumer organization was the worse one; for the other half, this mapping of advisor type on to advice quality was reversed. In the learning phase of the experiment, feedback about the “true” risk levels was provided to allow participants the opportunity to learn about the relative competence of their advisors. In the test phase, feedback was removed so that effects of prior learning could be examined uncontaminated by current learning. According to Siegrist et al.’s (2003, 2005) model, confidence (trust in competence) should be higher in the better advisor. Thus, participants should be more influenced by that advisor during the test phase. Furthermore, if they have insight into this, they should also say that they trust the better advisor more. Hence, at the end of the experiment, we asked them to rate the level of trust that they had in their advisors.

Before the start of the experiment, participants specified their age and sex. They also responded to a five-item values questionnaire containing statements such as “money matters more than most things”; participants were asked to agree or disagree with each statement. Information gathered at this stage was used to assign them to one of four similarity groups:

1. Government agency advisors were physically similar and had similar values to the participant; consumer organization advisors were physically dissimilar and had dissimilar values.
2. Government agency advisors were physically similar but had dissimilar values; consumer organization advisors were physically dissimilar but had similar values.
3. Government agency advisors were physically dissimilar but had similar values; consumer organization advisors were physically similar but had dissimilar values.
4. Government agency advisors were physically dissimilar and had dissimilar values; consumer organization advisors were physically similar and had similar values.

According to Siegrist et al.’s (2003, 2005) model, trust in motives should be higher in advisors whose values are similar to those of the participant. Furthermore, if physical similarity is used as a proxy for similarity of values, trust in motives should also be higher in advisors who are physically similar to the participant. Thus we would expect trust in motives to be highest in advisors who are similar to the judge both physically and in terms of values, and lowest in advisors who are dissimilar to the judge both physically and in terms of values. We would expect trust in motives of advisors who were similar to judges in one respect but dissimilar in the other to lie between these two extremes. Higher trust in motives should, like higher trust in competence, be reflected in more influence of advice from the more trusted source during the test phase and in a higher rating of trust in that source at the end of the experiment.

2.1 Method

Participants. One hundred and fifty-two students from University College London took part in the experiment. Their ages ranged from 17 to 44 years (median 19 years). One hundred and seventeen of them were female.

Design. For half the participants, government agencies were more accurate advisors; for the other half, consumer organizations were more accurate. Each of these groups of 76 participants was divided into four similarity subgroups, each comprising 19 people. We described the characteristics of these four groups above. To assign participants to these subgroups, we asked them for their age and sex and to agree or disagree with five statements: politics are important, people should do more for the environment, money matters more than most things, people should be more moral, being sociable is a good thing.

Advisors were portrayed in photographs. Participants were told that the advisor was representative of the source of advice. Next to the photograph was a short body of text describing the advisor’s values. Physically similar advisors were of the same sex and in the same age category (under 30 years or over 30 years) as the participant. Physically dissimilar advisors were of the other sex and in the other age category. Advisors with similar values to the participant were accompanied by text showing that they had responded in the same way as the participant to all five of the statements listed above. Advisors with dissimilar values were accompanied by text showing that they had responded in the opposite way to the participant to all five of the statements.

Stimulus materials. Annual risks of mortality associated with taking part in each of thirty-two hazardous activities were obtained from two sources. These activities were equally divided into recreational, occupational, transport, and drug-use domains. The eight recreational risks were those associated with scuba diving, rock climbing, canoeing, hang-gliding, fishing, playing soccer, fairground rides, and horse riding. Transport risks were traveling by car, bus or coach, rail, bicycle, water transport, foot, air, and motorbike. Occupational risks were working in mining and quarrying; construction; agriculture, forestry, hunting and freshwater fishing; manufacturing; metal-working; extractive and utility supply industries; electrical and optical device production. Drug-use risks were those associated with taking methadone, heroin, amphetamine, cocaine, ecstasy, LSD, alcohol, and tobacco.

Recreational risks were obtained from the UK government’s Department of Culture, Media and Sport and from the Institute of Leisure and Amenity Management. Transport risks were obtained from the UK government’s Department of Transport, Local Government and the Regions and from Transafe UK (Working for Transport Safety). Occupational risks were obtained from the UK government’s Department of Trade and Industry and from the Occupational Health and Safety Information Group. Drug-use risks were provided by the UK government’s Health and Safety Executive and the British Legalise Cannabis Campaign.

To produce the notional “actual” risk level for each activity, risk levels provided by the two sources were averaged. To produce advice from each of the sources, a value was picked at random from a normal distribution centered on the “actual” risk value and having a standard deviation of 5% of that value for the more accurate advisor and 20% of that value for the less accurate advisor. Thus, on average, the two sources were unbiased but differed in accuracy. Risk levels were expressed as a numerator and a denominator (e.g., 12 deaths per million).
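A rough sketch of this generating process, under the assumptions just stated (the function name, variable names, and example inputs are ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def make_trial(risk_source_a: float, risk_source_b: float):
    """Average the two published risk levels to get the notional 'actual'
    risk, then draw unbiased advice: the better advisor's noise SD is 5%
    of the actual value, the worse advisor's is 20%."""
    actual = (risk_source_a + risk_source_b) / 2
    better = rng.normal(loc=actual, scale=0.05 * actual)
    worse = rng.normal(loc=actual, scale=0.20 * actual)
    return actual, better, worse

actual, better, worse = make_trial(10.0, 14.0)   # deaths per million
print(f"actual {actual}, better advisor {better:.1f}, worse advisor {worse:.1f}")
```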

Procedure. The task was controlled by computer. For each participant, four activities from each of the four behavioral domains were selected at random for presentation during the learning phase. On each of the resulting 16 trials, participants viewed advice from the government and consumer sources, made their risk judgment, and then saw the actual risk, together with the error in their judgment. In the test phase, participants judged the annual risk of death for the remaining 16 behaviors on the basis of the advice that they received from their two advisors. However, they received no feedback after each judgment.

After the experiment, participants were asked to state how much they trusted government agencies and consumer organizations as sources of advice about risk. They did this by indicating their level of trust on seven-point visual analogue rating scales anchored on the left-hand side with “Trust completely” and on the right-hand side with “Don’t trust at all”. To increase measurement reliability, each participant completed five pairs of these scales, the first for government agencies and consumer organizations in general and the remaining four for government agencies and consumer organizations in each of the four specific risk domains (recreational, transport, occupational, drug-use). We used the mean of the five trust ratings for each type of advisor in our analyses of stated trust (see next section).

Participants also completed Earle and Cvetkovich’s (1999) six-item value similarity questionnaire for each type of advisor. This involved marking a position on seven-point scales that had the following left and right anchors: values (share my values, different values), direction (in line with me, wrong direction), goals (same goals as me, different goals), views (supports my views, opposes my views), action (acts as I would, acts against me), thought (thinks like me, thinks unlike me). Completion of this questionnaire allowed us to perform a manipulation check to ensure that the method we used to manipulate value similarity was effective (see Footnote 2).

Analysis. Relative measures of stated and revealed trust in the test phase of the experiment were calculated for each advisor. These measures, shown in Table 1, produce a value between zero and one, where a higher number indicates greater trust in the better advisor and a value of 0.5 indicates equal trust in the two advisors.

Table 1: Definitions of trust measures.

If these measures reflect trust-in-competence, they should be significantly above 0.5 because that type of trust should be higher for those advisors who have performed better in the past (as revealed via feedback during the learning phase). If these measures also reflect trust-in-motives, they should be higher for advisors whose values are similar to those of participants than for advisors whose values are different from those of participants.
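Table 1 itself is not reproduced above, but measures with the stated properties can be sketched as follows. Here stated trust is taken as the better advisor's share of the summed trust ratings, and revealed trust as the mean proximity-based weight on the better advisor across test trials; these are assumed forms consistent with the 0-1 range and the 0.5 equal-trust point, not necessarily the paper's exact definitions:

```python
import numpy as np

def relative_stated_trust(rating_better: float, rating_worse: float) -> float:
    """Better advisor's share of the two mean trust ratings (0.5 = equal)."""
    return rating_better / (rating_better + rating_worse)

def relative_revealed_trust(judgments, advice_better, advice_worse) -> float:
    """Mean proximity-based weight on the better advisor across trials."""
    j = np.asarray(judgments, dtype=float)
    a = np.asarray(advice_better, dtype=float)
    b = np.asarray(advice_worse, dtype=float)
    weights = np.abs(j - b) / (np.abs(j - a) + np.abs(j - b))
    return float(weights.mean())

print(relative_stated_trust(5.0, 3.0))                        # 0.625
print(relative_revealed_trust([12, 18], [10, 20], [20, 10]))  # 0.8
```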

2.2 Results

As a manipulation check, we extracted the similarity-of-values index from the participants’ post-experiment responses to Earle and Cvetkovich’s (1999) scale. This index had a value of 5.54 for advisors whose values we intended to be similar to those of participants and of 3.29 for advisors whose values we intended to be dissimilar to those of participants (t(151) = 14.12; p < 0.001). Thus, our method of manipulating similarity of values was effective.

In the test phase of the experiment, relative trust in the better advisor was significantly greater than 0.5 for both types of trust and both accuracy groups: government more accurate, stated trust (M = .57; s.d. = .11; t(77) = 5.77; p < 0.001), government more accurate, revealed trust (M = .61; s.d. = .19; t(77) = 5.37; p < 0.001), consumer organization more accurate, stated trust (M = .55; s.d. = .12; t(72) = 3.54; p < 0.01); consumer organization more accurate, revealed trust (M = .58; s.d. = .20; t(73) = 3.60; p < 0.01). These results show that people placed more trust in sources that had produced better advice in the past.
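Each of these comparisons is a one-sample t-test of one group's relative trust scores against the equal-trust value of 0.5. A minimal sketch with made-up scores (not the study's data):

```python
from scipy import stats

# One group's relative trust scores, one per participant (illustrative
# values only); 0.5 marks equal trust in the two advisors.
scores = [0.55, 0.62, 0.48, 0.71, 0.59, 0.53]
result = stats.ttest_1samp(scores, popmean=0.5)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```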

We performed a four-way analysis of variance on the relative trust scores from the test phase of the experiment. Trust type (stated versus revealed) was a within-participants variable. Physical relation between participant and advisors (better advisor physically similar and worse advisor physically dissimilar versus better advisor physically dissimilar and worse advisor physically similar), relation between values of participants and of advisors (better advisor’s values similar and worse advisor’s values dissimilar versus better advisor’s values dissimilar and worse advisor’s values similar), and advisor type-to-accuracy mapping (government advisor better and consumer organization advisor worse versus government advisor worse and consumer organization advisor better) were between-participant variables. There were significant main effects of three of the four variables. No interactions were significant.
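Because the within-participants factor (trust type) has only two levels, an ANOVA with this structure can be decomposed into two between-participants analyses: one on the participant-wise mean of the two trust scores (the between effects) and one on their difference (the trust-type effect and its interactions). The sketch below illustrates this decomposition on simulated stand-in data; the column names, factor codings, and values are ours, not the authors':

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in for the real data: one row per participant.
rng = np.random.default_rng(0)
n = 152
df = pd.DataFrame({
    "stated":    rng.normal(0.56, 0.11, n),
    "revealed":  rng.normal(0.60, 0.19, n),
    "phys_sim":  rng.choice(["similar", "dissimilar"], n),
    "value_sim": rng.choice(["similar", "dissimilar"], n),
    "mapping":   rng.choice(["gov_better", "consumer_better"], n),
})

# Between-participants effects: ANOVA on the mean of the two trust scores.
df["mean_trust"] = (df["stated"] + df["revealed"]) / 2
between = smf.ols("mean_trust ~ C(phys_sim) * C(value_sim) * C(mapping)",
                  data=df).fit()
print(sm.stats.anova_lm(between, typ=2))

# Within-participants effect: the intercept of a model of the
# revealed-minus-stated difference tests the trust-type main effect;
# the factor terms test its interactions with the between factors.
df["diff_trust"] = df["revealed"] - df["stated"]
within = smf.ols("diff_trust ~ C(phys_sim) * C(value_sim) * C(mapping)",
                 data=df).fit()
print(within.summary())
```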

Figure 2 shows the main effects of trust type and advisor type-to-accuracy mapping. Relative scores for revealed trust were higher: this measure of trust was better able to distinguish between good and poor advisors than stated trust (F(1,143) = 4.71; p < 0.05). (Although Figure 2 suggests that people showed higher trust in good advisors from government agencies than in equally good advisors from consumer organizations, this effect failed to attain significance.)

Figure 2: Relative measures of revealed and stated trust in the better advisor for each advisor type-to-accuracy mapping. (The ordinate scale in this and later figures ranges between 0.50, the value corresponding to equal trust in the two advisors, and 0.66, the value corresponding to twice as much trust in the government agency as in the consumer organization.)

Figure 3 shows the main effect of the relation between values of participants and advisors (F(1,143) = 9.29; p < 0.01), both for the group in which the government source was the better advisor and for the group in which the consumer organization was the better advisor. As predicted by Siegrist et al.’s (2003, 2005) model, people placed more trust in advisors who shared their values.

Figure 3: Mean values of relative trust in the better advisor when that advisor had similar values and dissimilar values to the participant. Data are averaged across revealed and stated trust and shown for each advisor type-to-accuracy mapping.

Figure 4 shows the main effect of the physical relation between participants and advisors (F(1,143) = 5.62; p < 0.025). Though this figure suggests that this effect was larger when the consumer organization was the better advisor and the government agency was the worse one, the interaction was not statistically significant (see Footnote 3).

Figure 4: Mean values of relative trust in the better advisor when that advisor was physically similar and physically dissimilar to the participants. Data are averaged across revealed and stated trust and shown for each advisor type-to-accuracy mapping.

2.3 Discussion

First we shall consider our results in the context of Siegrist et al.’s (2003, 2005) TCC model and then we shall discuss their relevance to recent debates about the role of intuition and affect in risk perception and decision making.

2.3.1 Effects of advisor accuracy and similarity

Our results show separate effects of advisor accuracy and similarity of values on both stated and revealed trust. This provides support for Siegrist et al.’s (2003, 2005) model (Figure 1). A past history of providing better advice (demonstrated via feedback provision during the learning phase of our experiment) increased trust in competence. Evidence that advisors hold similar values to judges making use of them (provided via textual confirmation in our experiment) increased trust in motives. Thus, within the terms of the model, we can say that both types of trust separately increased cooperative intentions. As a result, cooperative behavior increased: judges used rating scales to express higher levels of trust (stated trust) and they made more use of the advice provided when formulating their own judgments (revealed trust).

Trust in motives and trust in competence had quite independent effects on cooperative behavior; there was no interaction between advisor accuracy and similarity of values. This is interesting because, as Figure 1 shows, the model does allow the trust-in-motives route to have some limited influence on processing in the trust-in-competence route. Specifically, trust in motives can act to filter the performance information that determines the level of confidence (trust in competence) in the source. In other words, people may judge poor performance by an advisor less harshly when they trust that advisor’s motives. Siegrist et al. (2003, 2005) included this feature in their model because work on impression formation (De Bruin and Van Lange, 1999, 2000) has shown that morality information tends to dominate performance information. In our experiment, however, we found no evidence for this type of effect.

Previous work supporting Siegrist et al.’s (2003, 2005) two-route model has been based on questionnaire studies. For example, Earle and Cvetkovich (1999) reported two studies examining the correlation between people’s stated trust in a nuclear waste management agency and how similar a questionnaire showed their values to be to those of that agency. This correlation was 0.66 in the first study and 0.68 in the second. Within the TCC model, statements of trust are cooperative behaviors and cooperation is mediated partly by trust in motives, which, in turn, depends on an assessment of value similarity. Hence, Earle and Cvetkovich’s (1999) results are consistent with this model.

We used Earle and Cvetkovich’s (1999) similarity-of-values questionnaire to provide us with a manipulation check to confirm that our way of manipulating similarity of values had been effective. However, as we also measured stated trust, we could extract the correlation derived by Earle and Cvetkovich (1999) to determine whether we could replicate their results. We found the correlation between stated trust and the similarity-of-values index obtained from Earle and Cvetkovich’s (1999) questionnaire to be 0.25 when government agencies provided better advice and 0.30 when consumer organizations did. These correlations, while still highly significant (p < 0.01), are lower than those that Earle and Cvetkovich (1999) reported. (This difference may be related to the much wider variety of hazards that our participants judged.) Nevertheless, they still broadly confirm Earle and Cvetkovich’s (1999) results.
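A correlation of this kind can be computed directly; a short sketch with illustrative numbers (not the study's data):

```python
from scipy import stats

# Illustrative per-participant values: stated trust in an advisor and
# the similarity-of-values index from the post-experiment questionnaire.
stated_trust     = [4.2, 5.1, 3.8, 6.0, 4.9, 2.7, 5.5]
similarity_index = [3.9, 5.5, 3.1, 6.2, 4.4, 2.9, 5.8]
r, p = stats.pearsonr(stated_trust, similarity_index)
print(f"r = {r:.2f}, p = {p:.3f}")
```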

Other questionnaire studies have used separate groups of questions to measure the various constructs in the TCC model and structural equation modeling to measure the strengths of the paths between them. Broadly speaking, these studies have confirmed the structure of the model. However, a few anomalies have been reported. For example, Siegrist et al. (2003) found that the link between past performance and confidence (trust in competence) was weak and that there was a direct effect of social trust (trust in motives) on competence. Siegrist et al. (2003) suggested that these deviations from the model occurred because their respondents were insufficiently familiar with the electromagnetic field risks that were the subject of the questionnaire to use past performance of the source to assess confidence in it. As a result, they used trust in motives as a proxy for it.

Earle and Siegrist (2006, Study 1) tested this account in their questionnaire study. They used a similar approach but used one risk with which respondents were familiar and one risk with which they were not familiar. With the familiar risk, there was indeed a stronger link between past history and trust in competence. With the unfamiliar risk, however, Siegrist et al.’s (2003) pattern of results was not precisely replicated. Instead of a weak link between past performance and confidence, there was a weak link between confidence and cooperation. Despite these few anomalies in studies using unfamiliar risks, questionnaire studies generally support the TCC model.

People trusted advisors who were physically similar to them (in terms of sex and age range) more than those who were physically dissimilar from them. This is what we had expected given past research that has shown that these demographic factors predict attitudes to risk and technology. However, showing that such factors can reinforce effects of value similarity is different from showing that they can act as a proxy for it. That would require a demonstration that physical similarity increases trust in people who are given no information about their advisors’ values.

2.3.2 Revealed trust: A role for intuition?

As O’Neill (2002) emphasizes, what people say about whom they trust in answer to questions may not be reflected in their behavior. Our results are important because they are the first to provide behavioral as well as verbal evidence to support the two-route TCC model. We have shown that the extent to which judges use advice from a source is independently affected by the previous accuracy of the source and by the similarity of the judge’s values to those of the source.

We obtained the same pattern of results for stated trust: ratings of trust were also independently affected by the previous accuracy of the source and by the similarity of the rater’s values to those of the source. There was, however, a significant difference between stated and revealed trust (Figure 2). People’s behavior revealed that their relative trust in the good advisor (compared to the poor one) was greater than their ratings indicated. Dissociations such as this have been reported before (Twyman et al., 2006). There are two ways of explaining them.

One possibility is that the behavioral placement of trust relies on intuition. In other words, revealed trust reflects implicit processing. People may have some but not full insight into this implicit processing: for example, they may have some awareness that they attend to one advisor more than the other and use this to infer a difference in trust (Harries, Evans and Dennis, 2000). As stated trust relies on these imperfect explicit processes, it only partially reflects the difference in revealed trust between the two advisors.

Alternatively, stated and revealed trust may reflect the same underlying cognitive processes (be they implicit or explicit) but stated trust measures may provide a less efficient way of monitoring them. There are a number of reasons why this might be so (Harries and Harvey, 2000). Our measure of revealed trust was derived from 16 trials whereas our measure of stated trust comprised the mean of just five ratings. The latter would, therefore, have been more subject to measurement noise. Furthermore, stated trust was measured after the end of the trials in which participants placed trust in their advisors: unlike revealed trust, it therefore relied on people’s memory of the judgments that they had made. Finally, there was greater compatibility between advisors’ risk estimates and participants’ risk judgments (on which revealed trust depended) than between advisors’ risk estimates and participants’ trust ratings (on which stated trust depended).

The dissociation would have more severe consequences for Siegrist et al.’s (2003, 2005) model if the former account were true. However, its broader implications remain whichever account provides the better explanation of it. The fact that stated trust fails to accurately reflect revealed trust means that asking people to verbally assess their trust in different sources (as questionnaires do) provides a misleading way of determining how they actually place their trust in those sources.

2.3.3 Two-route models of trust: Roles for affect

Our data support a particular two-route model of trust — the one proposed by Siegrist et al. (2003, 2005) that distinguishes trust in motives from trust in competence. However, it is important to recognize that other two-route models of trust have been proposed. For example, McAllister (1995) proposed a model in which affect-based trust is distinguished from cognition-based trust, and his approach has been developed by Rousseau et al. (1998). The dichotomy between affective and cognitive processing of information has been recognized for some time (e.g., Zajonc, 1980), but its importance in modeling processes underlying judgment and decision-making has become evident relatively recently (Finucane, Alhakami, Slovic and Johnson, 2000; Finucane and Holup, 2006; Loewenstein, Weber, Hsee and Welch, 2001; Slovic, Finucane, Peters and MacGregor, 2002).

What is the relation between McAllister’s (1995) two-route model of trust and Siegrist et al.’s (2003, 2005) two-route model of trust? Within McAllister’s (1995) model, the cognitive route processes information about the knowledge, competence, reliability, and dependability of the source. It therefore bears a strong resemblance to Siegrist et al.’s (2003, 2005) trust-in-competence processing route. However, McAllister’s (1995) affect-based trust depends on emotional bonds: “People make emotional investments in trust relationships, express genuine care and concern for the welfare of partners, believe in the intrinsic value of such relationships, and believe that these sentiments are reciprocated” (McAllister, 1995, p. 26). These emotional bonds are strengthened by behavior that is personally chosen rather than role-prescribed and by actions that demonstrate interpersonal care and concern rather than enlightened self-interest. Repeated encounters demonstrating such behavior and actions are needed to develop affect-based trust.

It seems to us that McAllister’s (1995) affect-based trust is rather different from Siegrist et al.’s (2003, 2005) trust in motives. Trust in motives depends on judges’ perceptions that an agent’s values are similar to theirs. This depends on narrative information provided by the agent (Figure 1) and will be stronger if more narrative information enables these perceptions to be produced with greater confidence. However, Siegrist et al. (2003, 2005) do not argue that perception of similarity of values is primarily an emotional process of the sort described by McAllister (1995). This is probably because their concern was to develop a model of trust in risk communicators whereas McAllister (1995) and Rousseau et al. (1998) aimed to produce a model of interpersonal trust between managers and professionals in organizations. Development of interpersonal emotional bonds is less likely and less possible in the former case than in the latter one.

Keren and Schul (2006) have argued that one of the problems with two-route models is that the dichotomies in different models are often treated as aligned when there is insufficient evidence for alignment. For example, Kahneman and Frederick (2002, p. 51) align dichotomies from many different models and refer to the resulting processing systems as System 1 and System 2. Similarly, Juslin, Olsson and Olsson (2003) distinguish a system subserving explicit (verbalizable), rule-based, and analytical processing from one responsible for implicit, associative and experiential processing, thereby aligning three different dichotomies (see Footnote 4). Our inclination is to accept Keren and Schul’s (2006) caution: our data can be taken as support for Siegrist et al.’s (2003, 2005) two-route model of trust but not as support for McAllister’s (1995) two-route model of trust. For the reasons that we outlined above, the dichotomy between trust in motives and trust in competence should not be aligned with the dichotomy between affect-based trust and cognition-based trust.

Our position here does not mean that we consider Siegrist et al.’s (2003, 2005) model an affect-free zone. Values may be regarded as high-level attitudes: like attitudes (Krech and Crutchfield, 1948; Bem, 1970), they have cognitive, affective, and volitional components. Thus, deciding whether someone else shares one’s values involves an assessment of whether their feelings (as well as their thoughts) about how life should be approached are similar to one’s own. So, although the assessment itself may be a purely cognitive process, the factors it takes into account are likely to include affect.

2.3.4 Intuition and affect

It is important to recognize that Keren and Schul’s (2006) caution against aligning different two-route models also applies to models that distinguish cognitive and affective processing and those that distinguish deliberative and intuitive processing. Without evidence, affective and intuitive processes should not be conflated (and neither should cognitive and deliberative ones). In the previous two sections, we have treated them as quite separate topics, addressable via different types of data. In this respect, our approach has been conventional: historically, the psychological (as opposed to psychodynamic) literature on implicit (intuitive, unconscious) processing (e.g., Berry, 1997; French and Cleeremans, 2002) has been quite separate from that on affective processes (e.g., Ekman and Davidson, 1995; Lazarus, 1991; Mandler, 1984).

More recently, a more psychodynamic approach has been adopted. It has been assumed that affective processing is unconscious — and vice versa (e.g., Epstein, 1994; Slovic et al., 2002). The reasons for this assumption appear to be that affective and intuitive processing are both fast (whereas deliberative processing is slow) and that we have little insight into our emotional processes. For us, similar speed of mental processes is an insufficient reason for assuming that they are carried out by the same system. We also suspect that some emotional reactions take a long time to build up and that some deliberative processing, particularly for easy problems, is fast. Finally, the claim that we have little insight into our emotional processes warrants more thorough investigation.

3 Summary

Most aspects of our findings fit well with the TCC model: past advice quality and degree of similarity of values between trustors and trustees had separate effects on both stated and revealed trust. Other results, such as the differential effect of past advice quality on stated and revealed trust, can be given an interpretation that is consistent with the model. However, further research is needed to determine whether this interpretation is correct.

Footnotes

* This research was supported by Economic and Social Research Council Grant R000230114.

1 Figure 1 is a simplified version of Siegrist et al.’s (2003, 2005) model because it depicts only features of the trustee that affect cooperative intentions and behavior. The complete model also allows features of the trustor to influence cooperation. For example, it allows people to vary in their general willingness to trust others. Additionally, other factors not explicitly included in this model could influence trust. For instance, it may be affected not just by trustors’ assessments of how similar trustees’ values are to their own but also by trustors’ assessments of how similar trustees’ values are to an ideal (e.g., honest, altruistic, unbiased).

2 We also asked participants to make various other post-experiment ratings. They included bias in advice and accuracy of advice from different sources. These ratings are not reported here.

3 Survey studies (e.g., Frewer, Howard, Hedderley and Shepherd, 1999; Frewer, Scholderer and Bredahl, 2003) have shown that people say that they trust consumer organizations more than they trust government agencies. Thus, for cases in which both types of advisor are equally accurate, we might have expected greater trust in consumer organizations. However, as Figure 3 shows, this is not what we found.

4 Price and Norman (2008) provide a more general critique and analysis of two-systems theory.

References

Bem, D. J. (1970). Beliefs, Attitudes, and Human Affairs. Belmont, CA: Brooks/Cole.
Berry, D. C. (1997). How Implicit is Implicit Learning? Oxford: Oxford University Press.
Camerer, C. F. (2003). Behavioral Game Theory. New York, NY: Russell Sage Foundation.
Cooksey, R. W. (1996). Judgment Analysis: Theory, Methods, and Applications. San Diego: Academic Press.
De Bruin, E. N. M. and Van Lange, P. A. M. (1999). Impression formation and cooperative behaviour. European Journal of Social Psychology, 29, 305–328.
De Bruin, E. N. M. and Van Lange, P. A. M. (2000). What people look for in others: Influences of the perceiver and the perceived on information selection. Personality and Social Psychology Bulletin, 26, 206–219.
Deakin, J., Aitken, M., Robbins, T. and Sahakian, B. J. (2004). Risk taking during decision making in normal volunteers increases with age. Journal of the International Neuropsychological Society, 10, 590–598.
Earle, T. C. and Cvetkovich, G. (1999). Social trust and culture in risk management. In Cvetkovich, G. and Löfstedt, R. E. (Eds), Social Trust and the Management of Risk. London: Earthscan, pp. 9–21.
Earle, T. C. and Siegrist, M. (2006). Morality information, performance information, and the distinction between trust and confidence. Journal of Applied Social Psychology, 36, 383–416.
Ekman, P. and Davidson, R. J. (1995). The Nature of Emotion: Fundamental Questions. Oxford, UK: Oxford University Press.
Epstein, S. (1994). Integration of the cognitive and psychodynamic unconscious. American Psychologist, 49, 709–724.
Finucane, M. L. and Holup, J. L. (2006). Risk as value: Combining affect and analysis in risk judgments. Journal of Risk Research, 9, 141–164.
Finucane, M. L., Alhakami, A., Slovic, P. and Johnson, S. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13, 1–17.
Fischer, I. and Harvey, N. (1999). Combining forecasts: What information do judges need to outperform the simple average? International Journal of Forecasting, 15, 227–246.
French, R. M. and Cleeremans, A. (2002). Implicit Learning and Consciousness: An Empirical, Philosophical, Computational Consensus in the Making. Hove, UK: Psychology Press.
Frewer, L. J., Howard, L., Hedderley, D. and Shepherd, R. (1999). What determines trust in information about food-related risks? Underlying psychological constructs. Risk Analysis, 16, 473–486.
Frewer, L. J., Scholderer, J. and Bredahl, L. (2003). Communicating about the risks and benefits of genetically modified foods: The mediating role of trust. Risk Analysis, 23, 1117–1133.
Fukuyama, F. (1996). Trust: The Social Virtues and the Creation of Prosperity. London, UK: Penguin Books.
Grabner-Kräuter, S. and Kaluscha, E. A. (2003). Empirical research in on-line trust: A review and critical assessment. International Journal of Human-Computer Studies, 58, 783–812.
Harries, C. and Harvey, N. (2000). Taking advice, using information and knowing what you are doing. Acta Psychologica, 104, 399–416.
Harries, C., Evans, J. St. B. T. and Dennis, I. (2000). Measuring doctors’ self-insight into their treatment decisions. Applied Cognitive Psychology, 14, 455–477.
Harvey, N. and Fischer, I. (1997). Taking advice: Accepting help, improving judgment, and sharing responsibility. Organizational Behavior and Human Decision Processes, 70, 117–130.
Juslin, P., Olsson, H. and Olsson, A.-C. (2003). Exemplar effects in categorization and multiple-cue judgment. Journal of Experimental Psychology: General, 132, 133–156.
Kahneman, D. and Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In Gilovich, T., Griffin, D. and Kahneman, D. (Eds), Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press, pp. 49–81.
Keren, G. and Schul, Y. (2006). On the veracity and explanatory value of two-system models. Workshop on Intuition and Affect in Risk Perception and Decision Making, Bergen, November 2006.
Kramer, R. M. and Tyler, T. R. (1996). Trust in Organizations: Frontiers of Theory and Research. London, UK: Sage.
Krech, D. and Crutchfield, R. S. (1948). Theory and Problems of Social Psychology. New York: McGraw-Hill.
Lazarus, R. S. (1991). Emotion and Adaptation. Oxford, UK: Oxford University Press.
Loewenstein, G. F., Weber, E. U., Hsee, C. K. and Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127, 267–286.
Mandler, G. (1984). Mind and Body: Psychology of Emotion and Stress. New York: W. W. Norton and Co.
Markova, I. (Ed.) (2004). Trust and Democratic Transition in Post-communist Europe. Oxford, UK: Oxford University Press.
Mayer, R. C., Davis, J. H. and Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20, 709–734.
McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38, 24–59.
Morris, M. G. and Venkatesh, V. (2000). Age differences in technology adoption decisions: Implications for a changing workforce. Personnel Psychology, 53, 375–403.
Morris, M., Venkatesh, V. and Ackerman, P. L. (2005). Gender and age differences in employee decisions about technology: An extension to the theory of planned behavior. IEEE Transactions on Engineering Management, 52, 69–84.
O’Neill, O. (2002). A Question of Trust. Cambridge: Cambridge University Press.
Price, M. C. and Norman, E. (2008). Intuitive decisions on the fringes of consciousness: Are they conscious and does it matter? Judgment and Decision Making, 3, 28–41.
Renn, O. and Levine, D. (1991). Credibility and trust in risk communication. In Kasperson, R. E. and Stallen, P. J. M. (Eds), Communicating Risks to the Public. Dordrecht: Kluwer, pp. 175–218.
Riegelsberger, J., Sasse, A. M. and McCarthy, J. D. (2005). The mechanics of trust: A framework for research and design. International Journal of Human-Computer Studies, 62, 381–422.
Rosen, A. B. (2003). Variations in risk attitude across race, gender, and education. Medical Decision Making, 23, 511–517.
Rousseau, D. M., Sitkin, S. B., Burt, R. S. and Camerer, C. (1998). Not so different after all: A cross-discipline view of trust. Academy of Management Review, 23, 393–404.
Siegrist, M., Gutscher, H. and Earle, T. C. (2005). Perception of risk: The influence of general trust, and general confidence. Journal of Risk Research, 8, 145–156.
Siegrist, M., Earle, T. and Gutscher, H. (2003). Test of a trust and confidence model in the applied context of electromagnetic field (EMF) risks. Risk Analysis, 23, 705–716.
Slovic, P., Finucane, M. L., Peters, E. and MacGregor, D. G. (2002). The affect heuristic. In Gilovich, T., Griffin, D. and Kahneman, D. (Eds), Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge: Cambridge University Press, pp. 397–420.
Twyman, M., Harvey, N. and Harries, C. (2006). Learning to use and assess advice about risk. Forum: Qualitative Social Research, 7, Article 25.
Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35, 151–175.