Measuring the performance of search systems is essential to improving their effectiveness. Computing measures (or metrics, used synonymously in this chapter) lets search providers benchmark current performance and quantify the impact of any changes. Some measures target the outcomes of the search process (e.g., the relevance of the found items), while others focus on the process itself (e.g., its efficiency or the cognitive load it imposes). Although numerous measures of search system performance have been proposed, none can fully evaluate search systems from all perspectives. As search systems become more sophisticated and support a broader range of tasks, new evaluation metrics and metric combinations will be needed.
Engelbart (1962, p. 1) suggested that the increased capability attributable to augmenting human intellect would likely lead to: “more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that could not previously be solved.” Systems that offer such opportunities cannot simply be evaluated using traditional retrieval measures such as precision and recall, which consider only the relevance of the found content and how much of the relevant content is found. Instead, we need metrics that assess both the quality of the solution and the impact of the search process on people's understanding of the subject matter.
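To make concrete what these traditional measures do and do not capture, the following is a minimal sketch (an illustration, not anything defined in this chapter) that computes precision and recall for a single result list; the document identifiers are hypothetical.

```python
# Minimal sketch of the two traditional retrieval measures mentioned above,
# computed over a single result list treated as a set of document identifiers.

def precision(retrieved: set, relevant: set) -> float:
    """Fraction of the retrieved items that are relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def recall(retrieved: set, relevant: set) -> float:
    """Fraction of the relevant items that were retrieved."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

# Hypothetical example: the system returns five documents, three of which
# are among the seven documents relevant to the query.
retrieved = {"d1", "d2", "d3", "d4", "d5"}
relevant = {"d2", "d4", "d5", "d8", "d9", "d10", "d11"}
print(precision(retrieved, relevant))  # 0.6
print(recall(retrieved, relevant))     # ~0.43
```

Neither number says anything about whether the searcher solved their problem or understood the material better afterward, which is precisely the gap noted above.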
There are two groups of metrics considered in this chapter: (1) those that assess the search process in which the searcher was engaged; and (2) those that target the outcomes attained as a result of that process. For completeness, I cover some of the traditional metrics, but many of those discussed draw on research in other communities, such as psychology. Irrespective of the target for the metric computation, with enhancements in next-generation search systems, evaluation metrics (and methods, discussed in Chapter 11) need to cater to a diverse range of searchers, tasks, and interactivity.
Traditionally, the unit of retrieval evaluation is the search query. Next-generation search systems, however, emphasize supporting entire search tasks end to end, rather than considering queries independently and satisfying task-relevant information needs one query at a time.
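The contrast between the two units of evaluation can be sketched in code: per-query recall scores each query in isolation, whereas a task-level view asks how much of the task's relevant material the session as a whole recovered. The data, helper names, and the simple task-level measure below are assumptions for illustration, not metrics defined in this chapter.

```python
# Sketch contrasting per-query evaluation with task-level evaluation over a
# multi-query session. Each session entry is a (retrieved, relevant) pair of
# document-identifier sets for one query.

def per_query_recall(session):
    """Recall computed independently for each query in the session."""
    return [len(ret & rel) / len(rel) if rel else 0.0 for ret, rel in session]

def task_recall(session, task_relevant):
    """Recall of the whole session against the task's full relevant set."""
    found = set().union(*(ret & task_relevant for ret, _ in session))
    return len(found) / len(task_relevant) if task_relevant else 0.0

# Hypothetical two-query session for a task with three relevant documents.
session = [
    ({"d1", "d2"}, {"d2", "d5"}),        # query 1
    ({"d3", "d5", "d6"}, {"d2", "d5"}),  # query 2
]
task_relevant = {"d2", "d5", "d9"}
print(per_query_recall(session))            # [0.5, 0.5]
print(task_recall(session, task_relevant))  # ~0.67
```

The per-query scores are identical, but only the task-level view reveals how far the session progressed toward satisfying the overall information need.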