The study of multimodality in communication has attracted the attention of researchers studying online multimodal environments such as virtual worlds. 3D virtual worlds in particular have drawn the interest of educators and academics because of their multiplicity of verbal channels, typically text and voice, and their 3D graphical interface, which allows non-verbal modes to be studied. This study offers a multilayered transcription method, the Multi-Modal MUVE Method or 3M Method (Palomeque, 2016; Pujolà & Palomeque, 2010), to account for the different modes present in the 3D virtual world of Second Life. The method works at two levels: the macro level and the micro level. The macro level is a bird’s-eye view representation of the whole session that fits on a single page, enabling the researcher to grasp the essence of the class and to identify sequences of interest for analysis. The micro level consists of three transcripts that account for the different communication modes as well as the interface activity that occurs in the virtual world of Second Life. This paper reviews the challenges of multimodal analysis in virtual worlds and shows how the multimodal data were analyzed and interpreted using a multilayered multimodal method of analysis (3M transcription). Examples are provided to show how participants in the virtual world of Second Life used different modes of communication to create meaning or to avoid communication breakdowns.