Part III - Machine Synthesis of Social Signals
Introduction
Recent years have seen a growing interest in the development of affective interfaces. At the heart of these systems is the ability to capture social signals and analyse them in a way that meets the requirements and characteristics of the application. There has been a concerted effort to devise a principled approach that could benefit from the interdisciplinary nature of the affective computing endeavour. One common strategy has been to seek conceptual models of emotion that could mediate between the social signals to be captured and the knowledge content of the application.

Early systems relied largely on the intrinsic, natural properties of emotional communication, with, for instance, an emphasis on facial expressions on both the output and the input side, together with some theoretical foundation for the acceptance of computers as interaction partners (Reeves & Nass, 1996). Without downplaying the importance of these systems to the development of the field, or the practical interest of the applications they enabled (Prendinger & Ishizuka, 2005), it soon became apparent that not all interactions could be modelled after interhuman communication, particularly when considering interaction with more complex applications. This complexity arises at two principal levels: the interaction with complex data, knowledge, or task elements, and the nature of the emotions themselves, which depart from universal or primitive emotions towards more sophisticated ones. On the other hand, part of the problem rests with the various simplifications that were necessary to get early prototypes off the ground. A good introduction to the full complexity of an affective response can be found in Sander and Scherer (2014), in particular its description of the various levels and components to be considered. These have not always been transposed into affective computing systems; however, as is often the case, the original descriptive frameworks may not be computational enough to support a direct and complete transposition.
Dimensional models of emotion have been enthusiastically adopted in affective interfaces, both for their representational flexibility and for the possibility of mapping input modalities onto their dimensions, yielding a consistent representation of input.
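To make this idea concrete, the minimal sketch below (in Python) represents an emotional state as a point in a two-dimensional valence-arousal space, maps features from two hypothetical modalities onto those dimensions, and fuses the results by simple averaging. The feature names, weights, and fusion strategy are illustrative assumptions rather than elements of any particular system discussed in this part.

```python
# Hypothetical sketch of a dimensional (valence-arousal) representation.
# Feature names and weights are illustrative assumptions only.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class EmotionPoint:
    """An emotional state as coordinates in a valence-arousal space, each in [-1, 1]."""
    valence: float
    arousal: float


def map_modality(features: Dict[str, float],
                 weights: Dict[str, Tuple[float, float]]) -> EmotionPoint:
    """Linearly project one modality's features onto the two dimensions."""
    valence = sum(w_v * features.get(name, 0.0) for name, (w_v, _) in weights.items())
    arousal = sum(w_a * features.get(name, 0.0) for name, (_, w_a) in weights.items())
    # Clamp to the conventional [-1, 1] range of the dimensional model.
    clamp = lambda x: max(-1.0, min(1.0, x))
    return EmotionPoint(clamp(valence), clamp(arousal))


def fuse(points: List[EmotionPoint]) -> EmotionPoint:
    """Fuse modalities by averaging their coordinates (one simple strategy among many)."""
    n = len(points)
    return EmotionPoint(sum(p.valence for p in points) / n,
                        sum(p.arousal for p in points) / n)


if __name__ == "__main__":
    # Hypothetical per-feature weights on (valence, arousal) for each modality.
    face_weights = {"smile_intensity": (0.8, 0.2), "brow_lowering": (-0.6, 0.4)}
    voice_weights = {"pitch_variation": (0.1, 0.7), "speech_rate": (0.0, 0.5)}

    face = map_modality({"smile_intensity": 0.9, "brow_lowering": 0.1}, face_weights)
    voice = map_modality({"pitch_variation": 0.6, "speech_rate": 0.4}, voice_weights)
    print(fuse([face, voice]))
```

Because each modality is reduced to the same coordinate space, fusion and downstream reasoning can remain agnostic of where the signal came from; this is the consistency of representation that motivates the dimensional approach.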