A range of methodologies can be used to evaluate search systems both in the laboratory and in the real world. Each has advantages and disadvantages. Methodologies range from offline test collections and simulations of people's search behavior, to living laboratories, instrumented panels, ethnography, and large-scale log analysis. Specific objectives determine the metrics and methodologies that are employed.
There are at least three important dimensions along which the available methods vary: (1) stage: the point in the design process at which the method is used (i.e., formative, while the experimental system is being designed, so that outcomes can inform design decisions, or summative, once design is complete and the system has been developed); (2) scale: the scale at which the method is employed (small, medium, or large); and (3) participants: the people involved, e.g., searcher simulations, user study subjects, internal users (company employees testing their own products, a process known as dogfooding [Harrison, 2006]), or external customers (via parallel flighting or interleaving of the output of alternative algorithms). Dumais et al. [2014] divided a subset of these methodologies into two variants: (1) observational, where people may be observed searching naturally, and (2) experimental, where the search experience may be intentionally manipulated according to an experimental design (see Table 11.1).
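Interleaving, mentioned above as one way of comparing alternative algorithms with external customers, merges the rankings produced by two systems into a single result list and attributes clicks back to the system that contributed each result. The sketch below illustrates one common variant, team-draft interleaving, in Python; the function and variable names are illustrative only and are not drawn from any particular system discussed in this chapter.

```python
import random

def team_draft_interleave(ranking_a, ranking_b):
    """Merge two ranked lists into one, remembering which ranker
    contributed ("owns") each result."""
    interleaved, team_a, team_b = [], [], []
    while True:
        remaining_a = [d for d in ranking_a if d not in interleaved]
        remaining_b = [d for d in ranking_b if d not in interleaved]
        if not remaining_a and not remaining_b:
            break
        # The ranker whose team is smaller picks next; ties are broken by a coin flip.
        a_picks = len(team_a) < len(team_b) or (
            len(team_a) == len(team_b) and random.random() < 0.5)
        if a_picks and remaining_a:
            doc, team = remaining_a[0], team_a
        elif remaining_b:
            doc, team = remaining_b[0], team_b
        else:
            doc, team = remaining_a[0], team_a
        team.append(doc)
        interleaved.append(doc)
    return interleaved, team_a, team_b

def click_credit(clicked, team_a, team_b):
    """Attribute observed clicks to the ranker that contributed each result."""
    return (sum(1 for d in clicked if d in team_a),
            sum(1 for d in clicked if d in team_b))
```

Aggregated over many queries in a live experiment, the click credit indicates which ranker searchers tend to prefer.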
The various methods provide different perspectives on search system performance. Although the methods are listed separately in Table 11.1 and discussed separately in this chapter, combining them helps search providers develop a more complete understanding of system effectiveness.
The focus in search system evaluation has been on designing experiments that are: insightful, successfully assessing the attributes on which they focus; affordable, with respect to the cost of creating and running the experiments; repeatable, so that others can build on the results; and explainable, so that they can guide subsequent improvements (Liu and Oard, 2006). The primary emphasis has been on component evaluation, which involves isolating and controlling experimental variables that could otherwise undermine the reliability of the conclusions drawn.
Current methods are insufficient for evaluating complex systems for a number of reasons (Kelly, 2009), including the inadequacy of user models and task models for capturing all types of information-seeking tasks, activities, and situations. In addition, Web content and search engine indices change constantly, making experiments difficult to repeat and compare over time.