Assessing user simulation for dialog systems using human judges and automatic evaluation measures
Published online by Cambridge University Press: 01 February 2011
Abstract
While different user simulations are built to assist dialog system development, there is an increasing need to assess the quality of these user simulations quickly and reliably. Previous studies have proposed several automatic evaluation measures for this purpose. However, the validity of these evaluation measures has not been fully proven. We present an assessment study in which human judgments of user simulation quality are collected as the gold standard to validate automatic evaluation measures. We show that a ranking model can be built using the automatic measures to predict the rankings of the simulations in the same order as the human judgments. We further show that the ranking model can be improved by using a simple feature that utilizes time-series analysis.
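The abstract does not specify the ranking algorithm or the automatic measures used; as a minimal illustrative sketch, the following Python snippet scores hypothetical user simulations with a linear combination of automatic measures, ranks them, and compares the predicted ranking against a human-judged ranking with Spearman rank correlation. All names, measures, weights, and data are hypothetical placeholders, not the authors' method.

```python
# Minimal sketch (not the authors' implementation): rank user simulations
# by automatic evaluation scores and compare against human rankings.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical automatic measures for four user simulations
# (rows = simulations, columns = measures such as dialog length, task success).
auto_measures = np.array([
    [0.72, 0.55],
    [0.64, 0.61],
    [0.81, 0.48],
    [0.55, 0.70],
])

# Hypothetical human-judged ranking of the same simulations (1 = best).
human_ranking = np.array([2, 3, 1, 4])

# A simple linear combination of measures serves as the predicted quality score;
# the weights here are illustrative, not learned as in the paper.
weights = np.array([0.6, 0.4])
scores = auto_measures @ weights
predicted_ranking = scores.argsort()[::-1].argsort() + 1  # 1 = best

# Rank correlation between the predicted and human rankings.
rho, _ = spearmanr(predicted_ranking, human_ranking)
print(f"Predicted ranking: {predicted_ranking}, Spearman rho = {rho:.2f}")
```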
Type: Articles
Copyright © Cambridge University Press 2011