This paper is a case study of user involvement in the requirements specification for project
ISLE: Interactive Spoken Language Education. Developers of Spoken Language Dialogue
Systems (SLDSs) should involve users from the outset, particularly if the aim is to develop novel
solutions for a generic target application area or market. As well as target end-users, SLDS
developers should identify and consult ‘meta-level’ domain experts with expertise in human-to-human
dialogue in the target domain. In our case, English language teachers and publishers
provided generic knowledge of learners' dialogue preferences; other applications have analogous
domain language experts. These domain language experts can help to pin down a
domain-specific sublanguage which fits the constraints of current speech recognition technology:
linguistically-naive end-users may expect unconstrained conversational English, but in
practice, dialogue interactions have to be constrained in vocabulary and syntax. User consultation
also highlighted a need to consider how to integrate speech input and output with
other modes of interaction and processing; in our case the input speech signal is processed
by a speech recogniser and by stress and mispronunciation detectors, and output responses are text
and graphics as well as speech. This suggests a need to revisit the definition of ‘dialogue’:
other SLDS developers should also consider the merits of multimodality as an adjunct to
pure spoken language dialogue, particularly given that current systems are not capable of
accurately handling unconstrained English.