It has become a staple in American politics that in just about every speech or debate, presidential candidates manage to work in a story about the struggles of Mr. and Mrs. John Smith from a swing state. Candidates talk to thousands of voters on the campaign trail. But these are the stories that they remember and choose to retell because, to them, they represent the stories of the larger population.
It is easy to understand why politicians latch on to these anecdotes. On a daily basis, teams of advisors and crowds of voters share their stories and offer their opinions on everything from taxes to foreign policy to healthcare reform. Even what they wear comes under scrutiny and often garners volumes of unsolicited feedback. How do politicians and other decision makers sift through all of these suggestions to identify the handful of opinions that are truly important and relevant to the larger population? Put bluntly, how do we know that the average American cares about Mr. and Mrs. John Smith’s stories?
Occasionally, we find ourselves in situations where we express an opinion that doesn’t perfectly represent the opinion that we actually hold. You don’t really like the sweater your aunt gave you last Christmas, but you tell her how much you love it and wear it anyway. Your boss’s jokes just aren’t that funny, but you at least let out a little chuckle. These are just some of the situations we find ourselves in when social norms contribute to our putting forth a viewpoint that isn’t entirely consistent with what we actually think.
In some cases, we’re just being polite when we pay someone a compliment, or we are simply choosing the path of least resistance. Even when we do hold strong opinions about a particular topic, we may temper what we say based on how we think someone else might react. We adjust our opinions to better conform to the social contexts in which we find ourselves.
In the previous chapter we discussed how environmental cues can affect whether or not we express any opinion at all. In this chapter, we discuss how our environment affects what opinion we express; in particular, we focus on the effects that others in our environment have on our opinion expression behavior.
In his book Crossing the Chasm, Geoffrey Moore argues that for a product to succeed in the mass market, it must cross the chasm that separates the innovative consumers from the rest of the market, the previously discussed imitators. But simply getting the approval of the innovators is not enough to penetrate the mass market. Instead, a few influential individuals among the innovator population can play a critical role in bridging the gap between the innovator population and the mass market.
Is this the only path toward success for a new product? In a study of how information is transmitted across a social network, Watts and Dodds demonstrate that it is not always the influential power of a few that leads to the diffusion of information. An alternative path for the diffusion of information exists if there is “a critical mass of easily influenced individuals” who can fuel the viral takeoff of a product, idea, or opinion.
In the world of Facebook, Twitter, and Yelp, water-cooler conversations with co-workers and backyard small talk with neighbors have moved from the physical world to the digital arena. Previous exchanges with familiar and trusted individuals have been replaced by large-scale chatter accessible to acquaintances and strangers. Discussions that once went unrecorded now leave traces that can be explored years later. The way in which we share information and opinions has changed irrevocably.
In this new landscape, organizations ranging from Fortune 500 companies to government agencies to political campaigns continuously monitor online opinions in an effort to guide their actions. Are consumers satisfied with our product? How are our policies being perceived? Do voters agree with our platform? Brand managers, marketers, and campaign managers can potentially find answers to these questions by monitoring the opinions shared through social media.
But measuring online opinion is more complex than just reading a few posted reviews. In this book, we move beyond the current practice of social media monitoring and introduce the concept of social media intelligence. While social media monitoring is an essential step in developing a social media intelligence platform, it is by nature descriptive and retrospective. That is, social media monitoring describes what has already happened. It does not prescribe or guide an organization’s next steps.
The current state of social media intelligence is one where organizations are investing in social media monitoring but drowning in social media data and metrics. In an effort to make sense of the seemingly infinite volume of data that social media produces on a daily basis, analysts are computing an equally overwhelming number of metrics. The problem is that organizations are measuring what is easy to measure with the data. Twitter data are easy to collect and volume metrics are easy to compute, so metrics like the number of Twitter mentions or the number of Twitter followers are over-emphasized. Rather than going after the low-hanging fruit, we need to shift our focus from measuring what’s easy to measure to measuring what matters. In other words, what are the metrics that will influence our strategic decision making? And our ability to define these metrics will depend on a firm understanding of opinion science.
Organizations have also struggled with integrating the intelligence gathered from social media data with other sources of data that marketing researchers have relied on for decades. Many organizations are faced with multiple research reports produced from traditional focus groups, customer surveys, in-store sales data, and social media. In several cases, the social media reports don’t align with other studies, especially when the social media metrics are not adjusted to accommodate the various biases we know exist from the opinion science research. When faced with these conflicting reports, organizations tend to favor the tried and tested offline methods over the very new and untested social media metrics. But organizations shouldn’t give up on social media intelligence quite that quickly, especially while social media tools are in their infancy. An integrated research approach that includes both the traditional offline methods and social media intelligence can be very effective, timely, and cost-efficient. The key is to track the right social media metrics and integrate social media intelligence efforts with other marketing research programs. Integration would involve the alignment of social media metrics with the offline metrics in such a way that the multiple sources of marketing intelligence complement one another. Social media intelligence can be used as an early indicator of general problem areas, and offline methods can be used to investigate further as a follow-up study.
Let’s say you just had a great dining experience at a new restaurant that opened down the street. Or you just saw the worst movie of your life. Many people with these experiences turn to social media to talk about their experiences and share their opinions (we discuss why people do this in Chapters 2–4). Some may write a lengthy review on their blog. Others may write shorter reviews and post to a review site (like Yelp or Rotten Tomatoes). And still others may choose to engage in a lengthy back-and-forth discussion about the merits and pitfalls of the experience in an online discussion forum.
Expressing opinions in this way is not new behavior. In the past, we referred to this as word-of-mouth behavior. Neighbors talking to neighbors about their new cars. Co-workers having conversations around the water cooler about a new computer or the latest events unfolding in their favorite television programs. But there are two fundamental differences between offline word-of-mouth activity and online conversations occurring in social media.
In the previous chapter, we focused on opinion formation and the various factors that influence that process. In this chapter and the next, we look at what happens after you have formed an opinion. Do you share it with others? And, if so, why do you share it and what specifically do you say?
By “opinion expression,” we refer to an individual’s decision to communicate his or her opinions to others. This decision stage serves as a filter between an individual’s underlying opinions and the sharing of those opinions. As with any filter, not everything that encounters it is going to pass through. An individual may decide not to share any opinion at all or to voice a modified (or moderated) version of his or her actual opinion. In other words, the opinion expression decision can be broken down into two components: (1) whether to share an opinion and (2) what opinion to share.
Imagine you are chatting with some new friends, co-workers, or neighbors whom you have met fairly recently. In such unfamiliar situations, most people err on the side of caution and steer the conversation to safe topics like the weather, local restaurants, new movies, or weekend plans. That way, you don’t run much risk of inadvertently offending the people whom you just met by expressing a potentially contentious opinion.
Before we jump into how organizations can build their social media intelligence capabilities, we first need to understand the science of opinions. Behind every social media comment posted is a person with an opinion. However, not everyone with an opinion chooses to share that opinion online. The opinions we see posted to social media are an outcome of a two-stage process. In the first stage, we form our opinions (opinion formation). Then, in the second stage, we share our opinions with others (opinion expression). However, we don’t share all of our opinions. Instead, the opinions that we ultimately post online are those that somehow merit sharing. In other words, the opinion expression stage can be thought of as an opinion filter that allows some of our opinions to pass through to be posted on social media while our other opinions remain unshared (see Figure 2.1).
To illustrate the distinction between opinion formation and expression, here’s an offline example that most of us are familiar with: the Monday morning quarterback. We all know one, and chances are we know more than one. The Monday morning quarterback has well-developed opinions about Sunday’s football games and wants to share those opinions with anyone willing to listen: Which plays should have been run. Which players should have been substituted. Which calls were blown by the officials and should have been challenged.
In the world of Facebook, Twitter and Yelp, water-cooler conversations with co-workers and backyard small talk with neighbors have moved from the physical world to the digital arena. In this new landscape, organizations ranging from Fortune 500 companies to government agencies to political campaigns continuously monitor online opinions in an effort to guide their actions. Are consumers satisfied with our product? How are our policies perceived? Do voters agree with our platform? Measuring online opinion is more complex than just reading a few posted reviews. Social media is replete with noise and chatter that can contaminate monitoring efforts. By knowing what shapes online opinions, organizations can better uncover the valuable insights hidden in the social media chatter and better inform strategy. This book can help anyone facing the challenge of making sense of social media data to move beyond the current practice of social media monitoring to a more comprehensive use of social media intelligence.
The truly world-wide reach of the Web has brought with it a new realisation of the enormous importance of usability and user interface design. In the last ten years, much has become understood about what works in search interfaces from a usability perspective, and what does not. Researchers and practitioners have developed a wide range of innovative interface ideas, but only the most broadly acceptable make their way into major web search engines. This book summarizes these developments, presenting the state of the art of search interface design, both in academic research and in deployment in commercial systems. Many books describe the algorithms behind search engines and information retrieval systems, but the unique focus of this book is specifically on the user interface. It will be welcomed by industry professionals who design systems that use search interfaces as well as graduate students and academic researchers who investigate information systems.
Experiments that require the use of human participants are time consuming and costly: it is important to get the process right the first time. Planning and preparation are key to success. This practical book takes the human-computer interaction researcher through the complete experimental process, from identifying a research question to designing and conducting an experiment, and then to analysing and reporting the results. The advice offered in this book draws on the author's twenty years of experience running experiments. In describing general concepts of experimental design and analysis she refers to numerous worked examples that address the very real practicalities and problems of conducting an experiment, such as managing participants, getting ethical approval, pre-empting criticism, choosing a statistical method and dealing with unexpected events.
As mentioned previously, experimental methods can be a matter of dispute: there
can be as many views of the “correct” way to run an experiment as
there are experimenters. Such disagreements are most obvious in the approach
taken to statistical analysis of data: everyone has their own favourite method,
there can be many different valid ways to analyse data, and even statisticians
do not always agree on the best approach.
This chapter is not intended to be a statistics primer: it simply describes the
statistics tests that I find most useful in analysing data and shows examples of
their application. It does not discuss any theoretical aspects of these tests or
why they “work.” Rather, it is a practical guide that will enable
an experimenter to make considerable headway with some simple analyses, and to
be able to consult a statistics text for more information with confidence.
In most cases, these tests will be sufficient for answering the type of research
questions discussed so far. Other analyses may require reference to a good
statistics book or guidance from a statistics consultant.
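The kind of simple analysis described here can be illustrated with a short sketch. The following code is our illustration, not an example from the book, and the completion times are invented; it computes a paired t statistic for a within-participant comparison of two conditions, the sort of test typically used to compare performance between two conditions when each participant experiences both.

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    """Paired t statistic for two within-participant conditions.

    Each participant contributes one score per condition; the test
    works on the per-participant differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))
    return t, n - 1  # t value and degrees of freedom

# Invented completion times (seconds) for six participants under
# two conditions of a hypothetical interaction technique.
cond_a = [12.1, 10.4, 13.0, 11.2, 12.6, 10.9]
cond_b = [10.2, 9.8, 11.5, 10.0, 11.1, 9.6]

t, df = paired_t(cond_a, cond_b)
```

The resulting t value would then be compared against a t distribution with the given degrees of freedom to obtain a p value, either from a t table in a statistics text or via a statistics package (for example, `scipy.stats.ttest_rel` performs this whole calculation).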
This book describes the process that takes a researcher from identifying a
human–computer interaction (HCI) research idea that needs to be tested,
to designing and conducting a test, and then analysing and reporting the
results. This first chapter introduces the notion of an “HCI idea”
and different approaches to testing.
Assessing the worth of an HCI idea
Imagine that you have an HCI idea, for example, a novel interaction method, a new
way of visualising data, an innovative device for moving a cursor, or a new
interactive system for building games. You can implement it, demonstrate it to a
wide range of people, and even deploy it for use – but is it a
“good” idea? Will the interaction method assist users with their
tasks? Will the visualisation make it easier to spot data trends? Will the new
device make cursor movement quicker? Will users like the new game building
system?
It is your idea, so of course you believe that it is wonderful; however, your
subjective judgement (or even the views of your friends in the research
laboratory) is not sufficient to prove its general worth. An objective
evaluation of the idea (using people not involved in the research) is required.
As Zhai (2003) says in his controversial article, “Evaluation is the
worst form of HCI research except all those other forms that have been
tried,” the true value of the idea cannot be determined simply by
“subjective opinion, authority, intimidation, fashion or fad.”
The definition of the conditions, tasks, and experimental objects is the initial
focus of the experimental design, and must be carefully related to the research
question, as described in Chapter 2. The experiment itself could be described
simply as presenting the stimuli to human participants and asking them to
perform the tasks. There are, however, still many other decisions to be made
about the experimental process, as well as additional supporting materials and
processes to be considered.
This chapter focuses on the nature of the participant experience, that is, what
each participant will do between the start and end times of the experiment
– a lot more happens than simply presenting the trials.
Allocating participants to conditions
As highlighted in Chapter 1, the key issue when running experiments is the
comparison of performance between the conditions: does one condition produce
better or worse performance than another? To determine “performance with
a condition,” human participants will need to perform tasks associated
with the HCI idea being investigated, and measurements of the overall
performance for each condition will be taken. Recall that we want to produce
data like that in Figure 2.1, which summarises performance according to each
experimental condition, with no explicit reference to tasks or experimental
objects.
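As a concrete illustration of one allocation strategy, the sketch below (our code, not the book's; participant IDs and condition names are invented) randomly allocates participants to equal-sized groups for a between-participants design, so that each condition's performance is measured on a different, randomly composed group.

```python
import random

def allocate(participants, conditions, seed=None):
    """Randomly allocate participants to equal-sized groups, one
    group per condition (a between-participants design).

    Assumes the number of participants is a multiple of the
    number of conditions.
    """
    rng = random.Random(seed)
    order = list(participants)
    rng.shuffle(order)          # randomise before splitting into groups
    size = len(order) // len(conditions)
    return {cond: order[i * size:(i + 1) * size]
            for i, cond in enumerate(conditions)}

# Twelve invented participant IDs, three invented conditions.
groups = allocate(range(1, 13), ["A", "B", "C"], seed=42)
```

Random allocation guards against systematic differences between the groups; a within-participant design would instead expose every participant to every condition, typically counterbalancing the order of conditions across participants.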
To illustrate the two approaches to factor analysis, consider a
within-participant experiment that aims to answer the research question,
“Which visual form of an image best supports visual search?” The
independent variable is the visual form of an image with three conditions: Black
and White (BW), Colour (C), and Grey-scale (GS).
Each screen presents forty items, and there is only one task – identify
the largest image. To ensure generalisability of the results, there are three
experimental objects, each using a different type of image: images of the
environment (photographs, P), paintings (photographs of paintings, PP), and
graphics (images created using a digital imaging tool, G). Error and response
time data are collected, but only error data are analysed here. Data for this
experiment (fabricated for the purposes of illustration) are shown in Table
A3.1.
The primary independent variable is visual form (BW, C, GS) because this
is directly related to the research question. A secondary independent
variable is image type (with three secondary conditions, P, PP, G).
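To make the roles of the two variables concrete, here is a minimal sketch (our illustration; the error counts are invented and are not the values from Table A3.1) that summarises error data first by the primary variable, visual form, and then by the secondary variable, image type.

```python
from collections import defaultdict
from statistics import mean

# Invented error counts: one row per (participant, visual form,
# image type) cell of the within-participant design.
rows = [
    ("p1", "BW", "P", 4), ("p1", "BW", "PP", 5), ("p1", "BW", "G", 3),
    ("p1", "C",  "P", 1), ("p1", "C",  "PP", 2), ("p1", "C",  "G", 1),
    ("p1", "GS", "P", 3), ("p1", "GS", "PP", 3), ("p1", "GS", "G", 2),
    ("p2", "BW", "P", 5), ("p2", "BW", "PP", 4), ("p2", "BW", "G", 4),
    ("p2", "C",  "P", 2), ("p2", "C",  "PP", 1), ("p2", "C",  "G", 2),
    ("p2", "GS", "P", 2), ("p2", "GS", "PP", 4), ("p2", "GS", "G", 3),
]

def mean_errors_by(rows, index):
    """Mean error count grouped by the variable at position `index`."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[index]].append(row[3])
    return {level: mean(errors) for level, errors in groups.items()}

by_visual_form = mean_errors_by(rows, 1)  # primary: collapse over image type
by_image_type = mean_errors_by(rows, 2)   # secondary: collapse over visual form
```

Summarising by the primary variable collapses over image type and answers the research question directly, while the summary by image type shows whether the results generalise across the three kinds of experimental objects.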