Book contents
- Frontmatter
- Contents
- List of contributors
- 1 Multimodal signal processing for meetings: an introduction
- 2 Data collection
- 3 Microphone arrays and beamforming
- 4 Speaker diarization
- 5 Speech recognition
- 6 Sampling techniques for audio-visual tracking and head pose estimation
- 7 Video processing and recognition
- 8 Language structure
- 9 Multimodal analysis of small-group conversational dynamics
- 10 Summarization
- 11 User requirements for meeting support technology
- 12 Meeting browsers and meeting assistants
- 13 Evaluation of meeting support technology
- 14 Conclusion and perspectives
- References
- Index
13 - Evaluation of meeting support technology
Published online by Cambridge University Press: 05 July 2012
Summary
The evaluation of meeting support technology falls broadly into three categories, which this chapter discusses in turn in terms of goals, methods, and outcomes, after a brief introduction to evaluation methodology and to work carried out prior to the AMI Consortium (Section 13.1). Evaluation efforts can be technology-centric, focused on determining how well specific systems or interfaces perform the tasks for which they were designed (Section 13.2). Evaluations can also adopt a task-centric view, defining common reference tasks such as fact finding or verification, which directly support cross-comparisons of different systems and interfaces (Section 13.3). Finally, the user-centric approach evaluates meeting support technology in its real context of use, measuring the gains in efficiency and user satisfaction that it brings (Section 13.4).
These aspects of evaluation differ from the component evaluation that accompanies each of the underlying technologies described in Chapters 3 to 10, which is typically a black-box evaluation based on reference data and distance metrics (although task-centric approaches have also been adopted for summarization evaluation, as shown in Chapter 10). Rather, the evaluation of meeting support technology is a stage in a complex software development process, for which the helix model was proposed in Chapter 11. We return to this process in the light of the evaluation work, especially on meeting browsers, at the end of this chapter (Section 13.5).
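As a purely illustrative sketch of the reference-based distance metrics mentioned above, the snippet below computes word error rate (WER) between a reference transcript and a system hypothesis, a typical black-box measure for a component such as speech recognition (Chapter 5). The metric choice and the example strings are assumptions for illustration; the chapter itself does not prescribe a specific metric.

```python
# Minimal sketch: word error rate (WER) as an example of a reference-based
# distance metric used in black-box component evaluation.
# WER = (substitutions + insertions + deletions) / number of reference words.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    ref = "the project meeting starts at ten"   # hypothetical reference
    hyp = "the meeting starts at ten am"        # hypothetical ASR output
    print(f"WER = {word_error_rate(ref, hyp):.2f}")  # 2 errors / 6 words = 0.33
```

Such metrics evaluate a component in isolation against annotated reference data; by contrast, the evaluations discussed in this chapter assess complete meeting support systems with tasks and users.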
Approaches to evaluation: methods, experiments, campaigns
The evaluation of meeting browsers, as pieces of software, should be related (at least in theory) to a precise view of the specifications they are intended to meet.
- Type: Chapter
- Information: Multimodal Signal Processing: Human Interactions in Meetings, pp. 218-231. Cambridge University Press, 2012.