
Pedagogical Approaches in Music and Audio Education for Deaf and Hard-of-Hearing Students

Published online by Cambridge University Press:  09 October 2024

Lee Cheng
Affiliation: Anglia Ruskin University, Cambridge
Email: [email protected]
Iain McGregor
Affiliation: Edinburgh Napier University, Edinburgh
Email: [email protected]

Abstract

Learning about music, sound or audio can present significant challenges for individuals who are deaf and hard of hearing (DHH). Given the advancements in technology and the increasing emphasis on equality, diversity and inclusion (EDI) in education, this article proposes pedagogical approaches aimed at facilitating the learning process for DHH students in the areas of music and audio production. These approaches encompass sound visualisation, haptic feedback, automated transcription, tactics in non-linear editing and digital signal processing. Importantly, these approaches do not necessitate advanced technical skills or substantial additional resources, thus lowering the barriers that DHH students face in music and audio production. Furthermore, these strategies would enable content creation and editing by individuals with DHH, who may previously have been excluded from participating in music and audio production. Recommendations are provided for the implementation of these approaches in diverse educational settings to promote the integration of EDI in music and audio education.

Type: Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. INTRODUCTION

Recent advancements in information and communication technology have led to the widespread integration of technology in workplace environments, along with a growing emphasis on fostering equality, diversity and inclusion (EDI). Collaborative efforts involving creative technologists, communication specialists and practitioners from various disciplines have sought to merge these two aspects through the development of assistive technologies and adaptive systems. By drawing upon insights from diverse fields such as neuroscience, psychology and artificial intelligence to facilitate inclusive design (Lim, Giacomin and Nickpour 2021; Nketia, Amso and Brito 2021; Skowronek, Gilberti, Petro, Sancomb, Maddern and Jankovic 2022), these innovations help promote EDI values and enhance individuals’ quality of life by empowering them to become more active contributors to society. Among these endeavours, there have been technological advancements that address the needs of audio perception and music enjoyment for individuals with varying degrees of hearing loss. The most common ones are hearing aids and cochlear implants, enabling deaf and hard-of-hearing (DHH) children to understand speech, attend mainstream schools and utilise spoken language as their primary means of communication (Zanin and Rance 2016). There are also endeavours to prototype non-audio assistive technologies, such as speech-to-text applications (Shezi and Ade-Ibijola 2020) and haptic-assistive technologies (Fletcher 2021), for assisting DHH individuals in sound creation and music performance (Trivedi, Alqasemi and Dubey 2019). These devices facilitate the translation of musical notes into alternative sensory inputs and vice versa, with the ultimate goal of making music accessible and meaningful to everyone.

The history of music education for DHH individuals predates the advent of any modern technology. As early as 1802, around the time Beethoven completed his second symphony, a French otologist and deaf educator used various musical instruments to develop the auditory discrimination of students with DHH (Hummel 1971; Solomon 1980). Notable scholarly contributions to the field of music education for DHH learners were made by William Wolcott Turner and David Ely Bartlett (Turner and Bartlett 1848), pioneers who advocated for music education and documented methods for teaching music to individuals with DHH (Darrow and Heller 1985). A significant milestone in the field of education for individuals with DHH came with the introduction of the Total Communication (TC) approach in the 1970s by Holcomb (1970). This educational philosophy emphasises the use of all communication modalities based on the specific needs of an individual child at a particular stage of development (Moores 2012), paving the way for more adaptive pedagogical approaches and ensuring that DHH students can better engage with and benefit from music learning (Knapp 1980).

Existing literature has proposed various pedagogical approaches that help DHH students engage in music appreciation (Chao-Fernandez, Román-García and Chao-Fernandez 2017), develop instrumental skills (Hash 2003), participate in ensemble activities and learn music as an academic subject in school (Fawkes and Ratnanather 2009). These innovative pedagogical approaches and music practices continue to evolve, offering learning opportunities for individuals with varying levels of hearing loss and nurturing talented musicians who excel beyond their hearing (in)abilities (Cleall 1983; Glennie, Gilman and Kim 2019).

While efforts have been made to promote the integration of EDI in music education, these initiatives have primarily focused on conventional musicianship training within the classical music tradition. To our knowledge, there is limited documentation of attempts to develop competencies in contemporary music and audio production, which encompass areas such as sound design and theory, digital signal processing, audio recording, editing, mixing, and practical skills related to the use of digital audio workstations (DAWs). Such knowledge and skills are closely linked to the use of computer technology and are essential for the career development of musicians in this digital age. To advocate for the integration of EDI in electronic music practices and shed light on the implementation of critical pedagogies in music education, this article presents pedagogical approaches developed by the authors to facilitate the learning process of students with DHH in music and audio production. Based on the authors’ teaching experiences and scholarly efforts, and on supporting literature that aligns with the proposed pedagogy, these approaches encompass sound visualisation, haptic feedback, automated transcription, tactics in non-linear editing, and digital signal processing. Importantly, these approaches do not require technical expertise and can be implemented without the need for additional resources. Suggestions for the adoption of these approaches in the context of electroacoustic music and other educational settings will also be provided.

2. CONCEPTUAL FRAMEWORK

The pedagogical approaches presented in this article are underpinned by the social model of disability. According to this model, DHH is viewed as an interaction ‘between impairment and the surrounding social world, rather than being an individual medical problem’ (Emens 2012: 214). Restrictions faced by individuals with DHH in their social interactions are a result of institutional forms of exclusion and cultural attitudes deeply rooted in society, which fails to provide equitable social and structural support meeting their specific needs (Terzi 2004). In contrast to the dominant medical model, which considers disability as a problem to be remedied to conform with normative standards (Paley 2002), the social model perceives disability as a societal issue rather than solely an individual problem (Cameron 2009). The social model also differs from the cultural model of disability, with the latter advocating for a ‘similar and different’ view that recognises and celebrates disability as part of our lives (Devlieger, Rusch and Pfeiffer 2003). Apart from these models of disability, there are scholarly efforts and perspectives from DHH communities advocating that deafness should not be considered a form of disability (Lane 2002; Harvey 2013). A spectrum of alternative terms, including ‘marginal’, ‘bicultural’ and ‘handicapped’, is used to describe the identity of individuals with DHH (Stumer, Hickson and Worrall 1996; Chapman and Dammeyer 2017).

Historically informed performance practice in music necessitates physical capabilities and sensorimotor skills to manipulate musical instruments, which can pose challenges for individuals with certain forms of functional diversity (Howe 2016). Moreover, established instrumental designs and rigid school music curricula have limited flexibility to accommodate specific needs (Challis 2018; Rizzo 2022), further exacerbating the inequality in music and audio education for DHH learners. As a result, these learners may lack the ability to conform to socially constructed aesthetic values in certain musical cultures and contexts (Lubet and Hofmann 2006).

Meeting the needs of DHH students in the music classroom or teaching studio may require substantial changes in both the social and the physical environment (Siebers 2008). In addition to modifying the physical space, it is essential to address the social dynamics between students and teachers to counteract the perpetuation of socially constructed injustices. Lubet (2009) argues that music teachers should actively engage in political actions against systemic ableism in various educational contexts. Bell (2017) suggests exploring alternative approaches in the design of music teaching activities. Rather than adhering strictly to historically informed instrumental practices, music education can be reimagined to incorporate digital musical instruments (DMIs) and hacking activities (Bell 2015; Bell, Bonin, Pethrick, Antwi-Nsiah and Matterson 2020), which offer increased accessibility for all learners. This sentiment is echoed by Landy (2012), who advocates for music education to embrace and celebrate the diversity of musical expressions.

3. RELATED WORKS

Music instruction for individuals with different levels and types of hearing loss continues to face practical challenges, largely due to misconceptions surrounding the ability of individuals with DHH to hear and appreciate music (Darrow 1985). Earlier studies have highlighted that instrumental instructors often exhibited hesitancy in actively recruiting DHH students, primarily stemming from a lack of familiarity with the musical capabilities of deaf learners (Darrow and Gfeller 1991), or concerns that these musicians may have a negative impact on ensemble performance quality (Sheldon 1997). Music educators have often perceived individuals with hearing loss, alongside other behaviourally or emotionally disadvantaged students, as the most challenging and exceptional populations within mainstream classroom music education (Gfeller, Darrow and Hedden 1990).

Darrow (1993) discovered that individuals with hearing loss derive enjoyment from singing or signing songs, listening to music, and engaging in movement and dance with music, similar to their hearing counterparts. She suggested that individuals with hearing loss often exhibit a stronger sense of rhythm than pitch-related abilities, and are more proficient in discriminating lower frequency ranges compared with higher ranges (Darrow 2007). While DHH students perceive and communicate music differently from others, they can still find pleasure in music (Chen-Hafteck and Schraer-Joiner 2011; Vaisberg, Martindale, Folkeard and Benedict 2019). Research has shown that individuals with DHH are more capable of recognising rhythm and tone duration than pitch and melody, while their ability to differentiate between timbres is generally diminished (McDermott 2004; Drennan and Rubinstein 2008; Looi, McDermott, McKay and Hickson 2008). Vaisberg, Beaulac, Glista, Macpherson and Scollie (2021) investigated the preferred frequency-gain shaping for speech and music perception in individuals who use hearing aids. Their study found that low-frequency gain was significantly increased relative to the prescription for speech and music stimuli, and that gain adjustments varied among listeners. The findings also revealed that music preferences were driven by changes in perceived fullness and sharpness, while speech preferences were driven by changes in perceived intelligibility and loudness. The study suggested that prescribed amplification to optimise speech intelligibility and alternative amplification for music perception should be recommended for most listeners. This implies that a prescribed and personalised audio workstation could be a viable approach to facilitating the learning process of DHH students in music and audio production.

Current practices often incorporate other sensory inputs, such as visual and tactile cues, to address the challenges faced by individuals with DHH (Trivedi et al. 2019; Deja, Torre, Lee, Ciriaco and Eroles 2020; Hopkins, Maté-Cid, Fulford, Seiffert and Ginsborg 2021). These approaches have shown effectiveness in musicianship training across different levels of hearing loss. However, the competencies required for music producers and sound designers extend beyond the conventional musicianship framework and encompass skills such as design thinking, computer literacy and studio practices (Alsop and Berry 2009). This indicates that teaching and learning approaches in music production and sound design can differ significantly from those in general music education (Hug and Kemper 2014). While some existing practices in music education can be adopted, the strategies proposed in this article focus on sound processing and audio editing within DAWs and other related hardware and software to enhance the learning and workflow of music production and sound design.

4. SOUND VISUALISATION

Sound visualisation has proven to be a valuable assistive tool for audio recording and editing, particularly in optimising volume and timbre (Lima, Dos Santos and Meiguins 2021). It also serves as a useful tool to enhance situational awareness for people with DHH (Azar, Saleh and Al-Alaoui 2015), compensating for absent or reduced auditory perception. To facilitate the workflow of DHH audio engineers, guidance can be prepared by other team members providing information about the desired levels, relative distances between sources and the microphone, reflective surfaces and background noise sources prior to recording. This allows a DHH audio engineer to choose appropriate recording devices and estimate input levels and gain without relying on real-time auditory monitoring. Background noise levels can be visually displayed on a meter to ensure they remain below the level of the desired sound sources. If audio metering is not available in the hardware device being used, there are numerous assistive mobile apps that can serve as substitutes. Many of these apps are free and can be connected to the audio source via line or auxiliary (Aux) audio output, with display format configurations typically available without much effort. A test recording is always recommended, so that the recorded output can be visually assessed to identify any issues such as a high noise floor or excessive reverberation. Impulsive sounds are recommended for testing purposes, particularly indoors, to identify and minimise any undesired eigentones using equalisation or downward expansion techniques during post-production processing and sound-effect editing. Visual cues can also assist in identifying suitable edit points and in confirming the source, intensity and nature of interactions depicted in filmed content. When using an external microphone, the best setup can be achieved by maintaining a hand’s distance between the microphone and the sound source. This ensures off-axis delivery and helps prevent plosives, which can be tested by pronouncing the letter ‘P’ in front of the microphone.
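By way of illustration, the following minimal Python sketch shows how such a check on a test recording might be scripted. It is not drawn from the article: the soundfile and numpy libraries, the file name and the assumption that the first second of the take is silence are all illustrative.

```python
# Minimal sketch: numeric check of a test recording (peak level, approximate
# noise floor and clipping). Assumes a WAV file readable by the soundfile library.
import numpy as np
import soundfile as sf

def check_test_recording(path, noise_seconds=1.0):
    data, rate = sf.read(path, always_2d=True)
    mono = data.mean(axis=1)                      # collapse channels for metering

    peak = np.max(np.abs(mono))
    peak_dbfs = 20 * np.log10(peak) if peak > 0 else float("-inf")

    # Treat the first second (assumed silent lead-in) as background noise.
    noise = mono[: int(noise_seconds * rate)]
    noise_rms = np.sqrt(np.mean(noise ** 2)) if noise.size else 0.0
    noise_dbfs = 20 * np.log10(noise_rms) if noise_rms > 0 else float("-inf")

    clipped = np.mean(np.abs(mono) >= 0.999)      # fraction of samples at full scale

    print(f"Peak level:  {peak_dbfs:6.1f} dBFS")
    print(f"Noise floor: {noise_dbfs:6.1f} dBFS (first {noise_seconds:.1f} s)")
    print(f"Clipped samples: {clipped:.2%}")

check_test_recording("test_take.wav")             # hypothetical file name
```

In practice, the reported figures can be read alongside the meter display to confirm that the noise floor sits well below the desired sources and that no clipping has occurred.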

In different stages of production, various types of sound visualisation can provide valuable information. The peak programme meter (PPM) is commonly used to measure volume levels and identify instances of clipping. Waveforms, displayed in the sound editing software, illustrate the dynamic range of audio, aiding in determining the appropriate amount of compression to apply. Spectrograms, although less commonly used in everyday audio engineering, can be helpful in identifying unwanted sounds and enhancing specific resonances.
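These displays can also be generated outside a DAW for closer inspection. The sketch below is illustrative only, assuming the soundfile, scipy and matplotlib packages and a hypothetical file name; it renders a waveform for judging dynamic range together with a spectrogram for locating unwanted resonances.

```python
# Minimal sketch: waveform and spectrogram views of a recorded take.
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf
from scipy import signal

data, rate = sf.read("take01.wav")                # hypothetical file name
if data.ndim > 1:
    data = data.mean(axis=1)                      # mono mixdown for display

fig, (ax_wave, ax_spec) = plt.subplots(2, 1, sharex=True, figsize=(10, 6))

t = np.arange(len(data)) / rate
ax_wave.plot(t, data, linewidth=0.5)              # waveform: dynamic range, edit points
ax_wave.set_ylabel("Amplitude")

f, tt, sxx = signal.spectrogram(data, fs=rate, nperseg=2048)
ax_spec.pcolormesh(tt, f, 10 * np.log10(sxx + 1e-12), shading="gouraud")
ax_spec.set_ylabel("Frequency (Hz)")              # spectrogram: unwanted resonances
ax_spec.set_xlabel("Time (s)")

plt.tight_layout()
plt.show()
```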

Visual matching of content can also be employed to ensure consistency with other takes or reference audio files. Some software offers visual analysis of audio clips, providing measurements such as root mean square (RMS), peak level and loudness units relative to full scale (LUFS). This information is useful for ensuring consistent volume levels among different sources, particularly when working with voices of different genders and age groups, as the spectral content may vary significantly, making it challenging to compare certain voices by ear.
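Where such analysis is not built into the software at hand, the measurements can be approximated with a short script. The following sketch assumes the soundfile and pyloudnorm packages and two hypothetical dialogue takes; it reports peak, RMS and integrated loudness so that different voices can be matched numerically rather than by ear.

```python
# Minimal sketch: peak, RMS and integrated loudness (LUFS) of dialogue takes.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

def measure(path):
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                      # ITU-R BS.1770 loudness meter
    return {
        "peak_dbfs": 20 * np.log10(np.max(np.abs(data))),
        "rms_dbfs": 20 * np.log10(np.sqrt(np.mean(data ** 2))),
        "lufs": meter.integrated_loudness(data),
    }

for name in ("narrator_female.wav", "narrator_male.wav"):   # hypothetical takes
    print(name, measure(name))
```

The integrated loudness figures, in particular, allow takes from different speakers to be brought to a common delivery target without relying on auditory judgement.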

5. HAPTIC FEEDBACK

Haptic feedback refers to the use of mechanical vibrations to provide alternative sensory feedback. An early example of this approach can be found in the rumoured story of Beethoven, who modified his piano by removing its legs to allow for the perception of sound through tactile vibrations felt at floor level (Wallace 2018). Technological advancements have since made haptic feedback more accessible, and numerous affordable haptic devices are now available that can translate sound into vibrations. These devices can be particularly useful in sound editing and mixing, as they enhance the perception of timing and support judgements of the similarity and difference between signals in a temporal manner akin to sound itself (Beattie, Frier, Georgiou, Long and Ablart 2020). The field of haptic technology is continuously evolving, and the use of ultrasound for haptics has opened up new possibilities, including the potential for depth perception (Morales, Marzo, Freeman, Frier and Georgiou 2021).

In addition to dedicated haptic devices, haptic feedback can also be obtained from the vibrations of loudspeaker cones. Touching the loudspeakers is not uncommon in the audio production process, and it allows DHH audio engineers to perceive the relative timing and intensity of different signals, particularly for extended bass content. Furthermore, with the growing trend of science, technology, engineering, art and mathematics (STEAM) education and hacking, individuals with DHH have the opportunity to develop their own do-it-yourself (DIY) haptic feedback devices (Andreotti and Frans 2019; Bell et al. 2020). This DIY approach empowers individuals to explore and customise haptic solutions according to their specific needs and preferences.
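A DIY workflow of this kind might begin in software before any hardware is built. The sketch below is an illustration under stated assumptions rather than a description of any published device: it uses the soundfile, numpy and scipy packages to extract the low-frequency amplitude envelope of a hypothetical mix and quantise it to 8-bit intensity values that a vibration-motor driver could replay.

```python
# Minimal DIY sketch: bass amplitude envelope mapped to vibration intensities.
import numpy as np
import soundfile as sf
from scipy import signal

data, rate = sf.read("mix.wav")                   # hypothetical file name
if data.ndim > 1:
    data = data.mean(axis=1)

# Keep only the bass content (below ~150 Hz), where tactile perception is strongest.
sos = signal.butter(4, 150, btype="lowpass", fs=rate, output="sos")
bass = signal.sosfilt(sos, data)

# RMS amplitude envelope at roughly 100 frames per second.
hop = rate // 100
env = np.array([np.sqrt(np.mean(bass[i:i + hop] ** 2))
                for i in range(0, len(bass), hop)])
env = env / (env.max() + 1e-12)

# Quantise to 0-255 duty-cycle values for a (hypothetical) vibration-motor driver.
pwm = (env * 255).astype(np.uint8)
pwm.tofile("haptic_envelope.u8")                  # illustrative output file
```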

6. NON-LINEAR EDITING

Non-linear editing involves the manipulation and arrangement of audio clips or tracks in a non-sequential or non-linear manner, providing flexibility in editing and manipulating audio recordings beyond on-site and real-time production processes. This approach allows for the integration of assistive tools to facilitate the workflow of audio engineers and sound designers. Such tools include context-aware editing platforms for text-based speech (Morrison, Rencker, Jin, Bryan, Caceres and Pardo 2021), algorithms for sound source recognition (Yang 2021), and neural network-based plug-ins for automatic compression (Singh, Bromham, Sheng and Fazekas 2021). However, these assistive tools may not fully capture the underlying emotions conveyed in the text or the applied sound effects in the soundtracks. To address this issue, editors can utilise mirrors to confirm the mouth shapes of voice actors and interpret the emotional intent of the dialogue through lip reading. In cases where non-linear editing involves voiceover recordings with separate sound and motion, editors can keep track of word start and stop times and sentence completion to facilitate editing and ensure natural speech pacing. Additionally, visual references can be employed to verify the timing of audio effects. By capturing video simultaneously with sound-effect recording, the audiovisual material can be imported into the video editing software, allowing for concurrent viewing of visual content alongside sound. The visual reference provides valuable information for the sound designer regarding the timing and source of sound effects, and can be hidden in the final export.

While many of the aforementioned approaches are applicable to individuals with different types of DHH, there are specific strategies that can enhance the workflow for those who are hard of hearing but not completely deaf. These individuals may have significant hearing loss but may still be able to understand speech through auditory processes and/or special adaptations (Heward 2006). In non-linear and non-destructive editing, for instance, temporary pitch shifting can be employed to lower the audio content into an audible range for a sound editor with upper frequency hearing loss. Once the editing process is complete, the audio content can be restored to its original frequency ranges. Destructive editing procedures can be logged in the pitch-shifted copy and then applied to the original audio content to avoid degradation.
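A minimal sketch of this temporary pitch-shifting tactic is given below, assuming the librosa and soundfile libraries; the shift amount, file names and edit log are illustrative. The original file is left untouched, and edits made on the shifted working copy are logged so that they can be applied to the original afterwards.

```python
# Minimal sketch: pitch-shifted working copy for an editor with upper-frequency loss.
import librosa
import soundfile as sf

y, sr = librosa.load("dialogue_original.wav", sr=None)       # hypothetical source

# Shift down one octave so upper-frequency content falls into the editor's audible range.
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=-12)
sf.write("dialogue_editcopy.wav", y_shifted, sr)

# Edits are logged against the shifted copy (times in seconds) and later applied to
# the untouched original, avoiding degradation from a round-trip pitch shift.
edit_log = [("cut", 12.40, 12.93), ("fade_out", 58.10, 60.00)]   # illustrative log
```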

7. AUTOMATED TRANSCRIPTION

After the non-linear editing process, the clarity of the dialogue can be reaffirmed through automated transcription. Any discrepancies between the script and the transcribed text can then be identified for further editing. However, cultural and environmental factors can affect the accuracy of transcription. For example, English dialogue that sounds correct locally in a non-English-speaking country may not be accurately transcribed by a particular transcription platform. Additionally, the choice of non-English languages may be limited in those transcription services. These issues are coupled with others, such as the differences between speech and writing and the impact of background sounds on the accuracy of transcription.
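As an illustration, the sketch below assumes the open-source Whisper speech-to-text package and Python's difflib module, together with hypothetical file names; it transcribes a dialogue take and diffs the result against the script so that unclear lines surface as textual discrepancies.

```python
# Minimal sketch: compare an automated transcript against the script.
import difflib
import whisper

model = whisper.load_model("base")
result = model.transcribe("dialogue_take.wav")    # hypothetical dialogue take
transcript = result["text"].strip().lower().split()

with open("script.txt") as fh:                    # hypothetical script file
    script = fh.read().strip().lower().split()

# Words that differ between the script and what the recogniser heard are likely
# candidates for a retake or for closer visual/haptic inspection.
for line in difflib.unified_diff(script, transcript, lineterm=""):
    print(line)
```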

Many online video sharing and social media platforms, as well as certain video editing software, offer the feature of automatic captioning for user-uploaded videos. Numerous studies in the existing literature have explored the application and effectiveness of automatic captioning in areas such as language learning and deaf culture (e.g. Smith, Crocker and Allman 2017; Perez 2022). DHH sound editors can utilise auto-captioning services to identify poorly pronounced dialogue and take follow-up actions. This process is particularly valuable in editing dialogue in animation and games, where retakes of the voice-overs are common.

In addition to speech-to-text captioning, automated transcription is also available for certain musical materials. Established techniques in music information retrieval (MIR) enable accurate transcription of recorded music into organised and structured musical data, such as MIDI and XML files (Benetos, Dixon, Duan and Ewert 2019). These technologies have the capability to separate music clips into distinct components such as instrumental tracks (Kumar, Biswas and Roy 2020), extract melodies sourced from different timbres (Hernandez-Olivan, Zay Pinilla, Hernandez-Lopez and Beltran 2021), and detect the genre of a particular piece of music (Estolas, Malimban, Nicasio, Rivera, Pablo and Takahashi 2020). Many of these applications are commercially available as standalone software or DAW plug-ins (Benetos et al. 2019). Parameters of these musical elements can then be identified through visual or haptic feedback, enabling DHH sound engineers to edit music based on their technical knowledge. In some cases, it may even be possible to make changes to the musical content of a recorded performance. Operating systems, such as those from Apple and Microsoft, are gradually incorporating sound analysis and peripheral functions into their integrated development environments (IDEs), which will lead to the availability of more accessible technologies beyond their current applications, primarily in the industrial and healthcare sectors.
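A hedged illustration of how such data might be extracted with freely available MIR tools is given below, using librosa's pYIN pitch tracker and onset detector; it is a sketch rather than a full transcription system, and the audio file name is hypothetical.

```python
# Minimal sketch: note timings and fundamental frequencies as inspectable data.
import numpy as np
import librosa

y, sr = librosa.load("melody.wav", sr=None)       # hypothetical recording

# Frame-level fundamental frequency estimates (NaN where unvoiced).
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Onset times in seconds, usable as edit markers in a DAW.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")

times = librosa.times_like(f0, sr=sr)
for t, hz in zip(times, f0):
    if not np.isnan(hz):
        print(f"{t:6.2f} s  {librosa.hz_to_note(hz)}")
print("Onsets (s):", np.round(onsets, 2))
```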

8. DIGITAL SIGNAL PROCESSING

Noise reduction can pose challenges, as sensitivity to noise varies among individuals with different levels of hearing loss (Heinonen-Guzejev et al. 2011). DHH audio engineers and sound designers can employ various techniques to address this issue. They can record sample audio clips of background ambience beforehand and compare them with the recording of the actual take to identify the noise floor and the spectral balance of typical sound sources through sound visualisation tools. Compression and downward expansion can be applied to optimise levels and reduce noise with visual reference to waveform displays. Similarly, sound equalisation can be optimised by comparing spectrograms. De-essing can be achieved using a similar approach to the aforementioned noise reduction techniques. Pronunciation of sibilant letters such as ‘F’, ‘X’ and ‘S’, as well as other soft consonants, can lead to a significant increase in levels that requires correction. These peaks can be identified through waveform displays and by identifying candidate words using subtitles.
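The comparison of ambience and take can likewise be scripted. The sketch below is illustrative only, assuming the soundfile, scipy and matplotlib packages and hypothetical file names; it estimates the noise floor from the ambience clip, suggests a downward-expander threshold a few decibels above it, and overlays the average spectra of both recordings as a basis for equalisation decisions.

```python
# Minimal sketch: compare a pre-recorded ambience clip with the actual take.
import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt
from scipy import signal

def avg_spectrum(path):
    x, rate = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)
    f, pxx = signal.welch(x, fs=rate, nperseg=4096)          # average spectrum
    rms_dbfs = 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    return f, 10 * np.log10(pxx + 1e-12), rms_dbfs

f_amb, amb_db, amb_rms = avg_spectrum("room_ambience.wav")   # hypothetical files
f_take, take_db, take_rms = avg_spectrum("actual_take.wav")

# A margin of ~6 dB above the noise floor is an illustrative starting point only.
print(f"Ambience RMS: {amb_rms:.1f} dBFS; "
      f"suggested expander threshold around {amb_rms + 6:.1f} dBFS")

plt.semilogx(f_amb, amb_db, label="ambience")
plt.semilogx(f_take, take_db, label="take")
plt.xlabel("Frequency (Hz)")
plt.ylabel("Level (dB)")
plt.legend()
plt.show()
```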

While mastering is often outsourced to specialists, smaller studios and projects may involve audio engineers handling it themselves. In this case, DHH audio engineers may face challenges, as their hearing conditions can affect their precision in critical listening. However, advancements in audio technology have made digital mastering more accessible and affordable compared with analogue mastering. The inclusion of digital signal processing has led to the development of sophisticated algorithms and software tools that automate various mastering processes (Birtchnell 2018), including parametric equalisation and compression/expansion. These parameters can be optimised by reviewing and benchmarking the final mix against reference material, such as broadcast programmes and commercially available music products, using spectrograms, waveform displays and metering.
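Such benchmarking can also be reduced to numbers. The following sketch, which assumes the soundfile, scipy and pyloudnorm packages and hypothetical file names, profiles a final mix and a reference track by integrated loudness and broad spectral balance, providing a numeric stand-in for critical listening.

```python
# Minimal sketch: benchmark a final mix against a reference track.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln
from scipy import signal

def profile(path):
    x, rate = sf.read(path)
    mono = x.mean(axis=1) if x.ndim > 1 else x
    lufs = pyln.Meter(rate).integrated_loudness(x)            # ITU-R BS.1770
    f, pxx = signal.welch(mono, fs=rate, nperseg=8192)
    total = pxx.sum()
    bands = {"low": f < 250, "mid": (f >= 250) & (f < 4000), "high": f >= 4000}
    balance = {name: round(float(pxx[mask].sum() / total), 3)
               for name, mask in bands.items()}
    return lufs, balance

for name in ("final_mix.wav", "reference_track.wav"):         # hypothetical files
    lufs, balance = profile(name)
    print(f"{name}: {lufs:.1f} LUFS, band energy share {balance}")
```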

9. IMPLICATIONS FOR EDUCATORS

In the previous sections, we proposed various approaches to facilitate the learning process of DHH students in audio production and sound design, including sound visualisation, haptic feedback, automated transcription, tactics in non-linear editing and digital signal processing. These approaches are not overly demanding in terms of technical skills or resources and can be effectively integrated into different educational contexts and curricula. While there have been efforts to promote EDI in the fields of electroacoustic music and sonic art, it is important to also consider the specific needs of DHH students within the constraints of the mainstream music curriculum in school learning environments. The increasing integration of technology in school music education presents opportunities for the design and implementation of more accessible and inclusive pedagogical approaches (Ruthmann and Mantie 2018), as proposed in this article.

Additionally, it is beneficial to select music that incorporates varied rhythms and tone durations, as these elements are particularly accessible to DHH students (McDermott 2004; Drennan and Rubinstein 2008; Looi et al. 2008), which can help enhance inclusiveness in the school music teaching and learning environment. Where feasible, music with more pronounced drum and bass elements can be an option for students with upper frequency hearing loss. The sequence of skills taught in the lesson content can also be optimised to better accommodate DHH students. For example, techniques such as pitch shifting and band filtering can be taught prior to other DAW skills, which facilitates a smoother learning progression for those students and nurtures their aesthetic awareness and creativity (Daniel 2020).

In higher education contexts, such as undergraduate and postgraduate degree programmes in sound design and music production, alternative teaching and learning arrangements can be made for students with different levels of hearing loss. These include the development of self-directed learning resources that incorporate the aforementioned approaches, as well as tailored practical tasks and assessment methods for DHH students. Detailed rubrics can be created in consultation with students and programme leaders, and advice can be sought from the university’s equality and diversity officers, if necessary. A successful example can be found in the Accessible Podcasting programme ratified by the Canadian Hard of Hearing Association and offered by Seneca Polytechnic, where the second author serves as an advisory committee member.

In addition to the pedagogical approaches outlined in this article, the successful integration of DHH students into music and audio education can be further facilitated through teacher training and professional development in special needs education (Wolf and Younie 2019). It is essential to provide instructional support in the classroom environment and ensure that music objectives are clearly defined, with appropriate resources available to support DHH students (Darrow and Gfeller 1991). The development of an inclusive audio education and music curriculum is crucial to enhancing the effectiveness and appropriateness of learning content for students with hearing loss and other special learning needs, thus promoting a broader integration of EDI (Gouge 1990). These aspects are vital in creating an inclusive and supportive learning environment that caters to the diverse needs of all students, including those with hearing loss.

10. CONCLUDING REMARKS

This article contributes to the current body of knowledge by challenging established norms in mainstream music and audio education, presenting pedagogical approaches for inclusive teaching and learning practices. These approaches aim to make music and audio education more accessible and inclusive for DHH students without requiring significant additional resources. With modified teaching practices, DHH students can learn to become sound engineers, producing audio and musical content for their hearing counterparts, who form the majority of consumers in the music industry and media sector. They can also create content that is meaningful to their own community, a contribution that may not be achieved by hearing sound engineers. Moreover, some of these approaches can benefit other students by providing alternative perspectives and ensuring the quality of their work. This fosters mutual understanding between students with and without hearing loss, thereby promoting a more inclusive sonic environment.

In the broader context of electroacoustic music studies, the proposed pedagogical approaches have the potential to empower DHH individuals to advance their sonic practices and promote the wider integration of EDI. This integration should encompass the inclusion and representation of marginalised groups across all dimensions of identity, including individuals with DHH, who may often have been left out of the equation when it comes to music and sound (Gouge 1990). By embracing EDI, the fields of electroacoustic music and sonic art can flourish as diverse and vibrant spaces that value and celebrate the contributions of all individuals, ensuring that functional diversity is not overlooked in the realm of music and sound.

References

Alsop, R. and Berry, M. 2009. Sound Design Skills: Exploring a Blended Learning Environment for Developing Practical and Conceptual Skills. Proceedings of the Media Art Scoping Study Symposium. Perth, Western Australia: Curtin University of Technology, 14–24.
Andreotti, E. and Frans, R. 2019. The Connection between Physics, Engineering and Music as an Example of STEAM Education. Physics Education 54(4): 045016. https://doi.org/10.1088/1361-6552/ab246a
Azar, J., Saleh, H. A. and Al-Alaoui, M. A. 2015. Sound Visualization for the Hearing Impaired. International Journal of Emerging Technology in Learning 2(1). https://doi.org/10.3991/ijet.v2i1.84
Beattie, D., Frier, W., Georgiou, O., Long, B. and Ablart, D. 2020. Incorporating the Perception of Visual Roughness into the Design of Mid-air Haptic Textures. Proceedings of the ACM Symposium on Applied Perception. New York: ACM. https://doi.org/10.1145/3385955.3407927
Bell, A. P. 2015. Can We Afford These Affordances? GarageBand and the Double-edged Sword of the Digital Audio Workstation. Action, Criticism, and Theory for Music Education 14(1): 44–65.
Bell, A. P. 2017. (dis)Ability and Music Education: Paralympian Patrick Anderson and the Experience of Disability in Music. Action, Criticism, and Theory for Music Education 16(3): 108–28.
Bell, A. P., Bonin, D., Pethrick, H., Antwi-Nsiah, A. and Matterson, B. 2020. Hacking, Disability, and Music Education. International Journal of Music Education 38(4): 657–72. https://doi.org/10.1177/0255761420930428
Benetos, E., Dixon, S., Duan, Z. and Ewert, S. 2019. Automatic Music Transcription: An Overview. IEEE Signal Processing Magazine 36(1): 20–30. http://doi.org/10.1109/MSP.2018.2869928
Birtchnell, T. 2018. Listening without Ears: Artificial Intelligence in Audio Mastering. Big Data and Society 5(2). https://doi.org/10.1177/2053951718808553
Cameron, C. 2009. Tragic but Brave or Just Crips with Chips? Songs and Their Lyrics in the Disability Arts Movement in Britain. Popular Music 28(3): 381–96. https://doi.org/10.1017/S0261143009990122
Challis, B. 2018. Interfaces for Music. In Norman, K. L. and Kirakowski, J. (eds.) The Wiley Handbook of Human Computer Interaction. Hoboken, NJ: John Wiley & Sons, 579–98. https://doi.org/10.1002/9781118976005.ch25
Chao-Fernandez, R., Román-García, S. and Chao-Fernandez, A. 2017. Online Interactive Storytelling as a Strategy for Learning Music and for Integrating Pupils with Hearing Disorders into Early Childhood Education (ECE). Procedia – Social and Behavioral Sciences 237: 17–22. https://doi.org/10.1016/j.sbspro.2017.02.005
Chapman, M. and Dammeyer, J. 2017. The Significance of Deaf Identity for Psychological Well-Being. Journal of Deaf Studies and Deaf Education 22(2): 187–94. https://doi.org/10.1093/deafed/enw073
Chen-Hafteck, L. and Schraer-Joiner, L. 2011. The Engagement in Musical Activities of Young Children with Varied Hearing Abilities. Music Education Research 13(1): 93–106. https://doi.org/10.1080/14613808.2011.553279
Cleall, C. 1983. Notes on a Young Deaf Musician. Psychology of Music 11(2): 101–2. https://doi.org/10.1177/0305735683112007
Daniel, W. 2020. Blurred Lines: Practical and Theoretical Implications of a DAW-based Pedagogy. Journal of Music, Technology and Education 13(1): 79–94. https://doi.org/10.1386/jmte_00017_1
Darrow, A.-A. 1985. Music for the Deaf. Music Educators Journal 71(6): 33–5. https://doi.org/10.2307/3396472
Darrow, A.-A. 1993. The Role of Music in Deaf Culture: Implications for Music Educators. Journal of Research in Music Education 41(2): 91–110. https://doi.org/10.2307/3345402
Darrow, A.-A. 2007. Teaching Students with Hearing Loss. Journal of General Music Education 20(2): 27–30. https://doi.org/10.1177/1048371307020002010
Darrow, A.-A. and Gfeller, K. 1991. A Study of Public School Music Programs Mainstreaming Hearing Impaired Students. Journal of Music Therapy 28(1): 23–9. https://doi.org/10.1093/jmt/28.1.23
Darrow, A.-A. and Heller, G. N. 1985. Early Advocates of Music Education for the Hearing Impaired: William Wolcott Turner and David Ely Bartlett. Journal of Research in Music Education 33(4): 269–79. https://doi.org/10.2307/3345253
Deja, J. A., Torre, A. D., Lee, H. J., Ciriaco, J. F. and Eroles, C. M. 2020. ViTune: A Visualizer Tool to Allow the Deaf and Hard of Hearing to See Music with their Eyes. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. Honolulu, HI: ACM. https://doi.org/10.1145/3334480.3383046
Devlieger, P., Rusch, F. and Pfeiffer, D. 2003. Rethinking Disability as Same and Different! Towards a Cultural Model of Disability. In Devlieger, P., Rusch, F. and Pfeiffer, D. (eds.) Rethinking Disability: The Emergence of New Definitions, Concepts and Communities. Leuven, Belgium: Garant, 9–16.
Drennan, W. R. and Rubinstein, J. T. 2008. Music Perception in Cochlear Implant Users and its Relationship with Psychophysical Capabilities. Journal of Rehabilitation Research and Development 45(5): 779–90. https://doi.org/10.1682/jrrd.2007.08.0118
Emens, E. F. 2012. Disabling Attitudes: U.S. Disability Law and the ADA Amendments Act. The American Journal of Comparative Law 60(1): 205–34. https://doi.org/10.5131/AJCL.2011.0020
Estolas, E. A. L., Malimban, A. F. V., Nicasio, J. T., Rivera, J. S., Pablo, M. F. D. S. and Takahashi, T. L. 2020. Automatic Beatmap Generating Rhythm Game Using Music Information Retrieval with Machine Learning for Genre Detection. Proceedings of the IEEE 12th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment, and Management. Manila, Philippines: IEEE. https://doi.org/10.1109/HNICEM51456.2020.9400133
Fawkes, W. G. and Ratnanather, J. T. 2009. Music at the Mary Hare Grammar School for the Deaf from 1975 to 1988. Visions of Research in Music Education 14(1): 1–17.
Fletcher, M. D. 2021. Using Haptic Stimulation to Enhance Auditory Perception in Hearing-Impaired Listeners. Expert Review of Medical Devices 18(1): 63–74. https://doi.org/10.1080/17434440.2021.1863782
Gfeller, K., Darrow, A.-A. and Hedden, S. K. 1990. Perceived Effectiveness of Mainstreaming in Iowa and Kansas Schools. Journal of Research in Music Education 38(2): 90–101. https://doi.org/10.2307/3344929
Glennie, E., Gilman, S. L. and Kim, Y. 2019. Is There Disabled Music? Music and the Body from Dame Evelyn Glennie’s Perspective. In Kim, Y. and Gilman, S. (eds.) The Oxford Handbook of Music and the Body. New York: Oxford University Press, 318–30. https://doi.org/10.1093/oxfordhb/9780190636234.013.25
Gouge, P. 1990. Music and Profoundly Deaf Students. British Journal of Music Education 7(3): 279–81. https://doi.org/10.1017/S0265051700007890
Harvey, E. R. 2013. Deafness: A Disability or a Difference. Health Law and Policy Brief 2(1): 42–57.
Hash, P. 2003. Teaching Instrumental Music to Deaf and Hard of Hearing Students. Research and Issues in Music Education 1(1): 5.
Heinonen-Guzejev, M., Jauhiainen, T., Vuorinen, H., Viljanen, A., Rantanen, T., Koskenvuo, M., et al. 2011. Noise Sensitivity and Hearing Disability. Noise and Health 13(50): 51–8. https://doi.org/10.4103/1463-1741.74000
Hernandez-Olivan, C., Zay Pinilla, I., Hernandez-Lopez, C. and Beltran, J. R. 2021. A Comparison of Deep Learning Methods for Timbre Analysis in Polyphonic Automatic Music Transcription. Electronics 10: 810. https://doi.org/10.3390/electronics10070810
Heward, W. L. 2006. Exceptional Children: An Introduction to Special Education (8th ed.). Upper Saddle River, NJ: Prentice Hall.
Holcomb, R. 1970. The Total Approach. Proceedings of the International Conference on Education of the Deaf. Stockholm, Sweden, 104–7.
Hopkins, C., Maté-Cid, S., Fulford, R., Seiffert, G. and Ginsborg, J. 2021. Perception and Learning of Relative Pitch by Musicians Using the Vibrotactile Mode. Musicae Scientiae 27(1): 3–26. https://doi.org/10.1177/10298649211015278
Howe, B. 2016. Disabling Music Performance. In Howe, B., Jensen-Moulton, S., Lerner, N. and Straus, J. (eds.) The Oxford Handbook of Music and Disability Studies. New York: Oxford University Press, 191–209. https://doi.org/10.1093/oxfordhb/9780199331444.013.30
Hug, D. and Kemper, M. 2014. From Foley to Function: A Pedagogical Approach to Sound Design for Novel Interactions. Journal of Sonic Studies 6(1).
Hummel, C. J. M. 1971. The Value of Music in Teaching Deaf Students. Volta Review 73: 224–49.
Knapp, R. A. 1980. A Choir for Total Communication. Music Educators Journal 66(6): 54–5. https://doi.org/10.2307/3395810
Kumar, R., Biswas, A. and Roy, P. 2020. Melody Extraction from Music: A Comprehensive Study. In Johri, P., Verma, J. K. and Paul, S. (eds.) Applications of Machine Learning. Singapore: Springer, 141–55. https://doi.org/10.1007/978-981-15-3357-0_10
Landy, L. 2012. Discovered whilst Entering a New Millennium: A Technological Revolution That Will Radically Influence both Music Making and Music Education. Journal of Music, Technology and Education 4(2): 181–8. https://doi.org/10.1386/jmte.4.2-3.181_1
Lane, H. 2002. Do Deaf People Have a Disability? Sign Language Studies 2(4): 356–79. https://doi.org/10.1353/sls.2002.0019
Lim, Y., Giacomin, J. and Nickpour, F. 2021. What is Psychosocially Inclusive Design? A Definition with Constructs. The Design Journal 24(1): 5–28. https://doi.org/10.1080/14606925.2020.1849964
Lima, H. B., Dos Santos, C. G. R. and Meiguins, B. S. 2021. A Survey of Music Visualization Techniques. ACM Computing Surveys 54(7): 143. https://doi.org/10.1145/3461835
Looi, V., McDermott, H. J., McKay, C. and Hickson, L. 2008. Music Perception of Cochlear Implant Users Compared with That of Hearing Aid Users. Ear and Hearing 29(3): 421–34. https://doi.org/10.1097/aud.0b013e31816a0d0b
Lubet, A. 2009. Disability, Music Education and the Epistemology of Interdisciplinarity. International Journal of Qualitative Studies in Education 22(1): 119–32. https://doi.org/10.1080/09518390802581935
Lubet, A. and Hofmann, I. 2006. Classical Music, Disability, and Film: A Pedagogical Script. Disability Studies Quarterly 26(1). https://doi.org/10.18061/dsq.v26i1.657
McDermott, H. J. 2004. Music Perception with Cochlear Implants: A Review. Trends in Amplification 8(2): 49–82. https://doi.org/10.1177/108471380400800203
Moores, D. F. 2012. The History of Language and Communication Issues in Deaf Education. In Marschark, M. and Spencer, P. E. (eds.) The Oxford Handbook of Deaf Studies, Language, and Education (Volume 2). New York: Oxford University Press, 18–30. https://doi.org/10.1093/oxfordhb/9780195390032.013.0002
Morales, F., Marzo, A., Freeman, E., Frier, W. and Georgiou, O. 2021. UltraPower: Powering Tangible and Wearable Devices with Focused Ultrasound. Proceedings of the 15th International Conference on Tangible, Embedded and Embodied Interaction. Salzburg, Austria: ACM. https://doi.org/10.1145/3430524.3440620
Morrison, M., Rencker, L., Jin, Z., Bryan, N. J., Caceres, J. and Pardo, B. 2021. Context-aware Prosody Correction for Text-based Speech Editing. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. Toronto, Canada: IEEE. https://doi.org/10.1109/ICASSP39728.2021.9414633
Nketia, J., Amso, D. and Brito, N. H. 2021. Towards a More Inclusive and Equitable Developmental Cognitive Neuroscience. Developmental Cognitive Neuroscience 52: 101014. https://doi.org/10.1016/j.dcn.2021.101014
Paley, J. 2002. The Cartesian Melodrama in Nursing. Nursing Philosophy 3(3): 189–92. https://doi.org/10.1046/j.1466-769X.2002.00113.x
Perez, M. M. 2022. Second or Foreign Language Learning through Watching Audio-Visual Input and the Role of On-screen Text. Language Teaching 55(2): 163–92. https://doi.org/10.1017/S0261444821000501
Rizzo, A. L. (ed.) 2022. Teaching a Musical Instrument to Pupils with Special Educational Needs: Inclusion in the Italian School Model. Milano, Italy: FrancoAngeli.
Ruthmann, A. S. and Mantie, R. (eds.) 2018. The Oxford Handbook of Technology and Music Education. New York: Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199372133.001.0001
Sheldon, D. A. 1997. The Illinois School for the Deaf Band: A Historical Perspective. Journal of Research in Music Education 45: 580–600. https://doi.org/10.2307/3345424
Shezi, M. and Ade-Ibijola, A. 2020. Deaf Chat: A Speech-to-text Communication Aid for Hearing Deficiency. Advances in Science, Technology and Engineering Systems Journal 5(5): 826–33. https://doi.org/10.25046/aj0505100
Siebers, T. 2008. Disability Theory. Ann Arbor, MI: University of Michigan Press. https://doi.org/10.3998/mpub.309723
Singh, S., Bromham, G., Sheng, D. and Fazekas, G. 2021. Intelligent Control Method for the Dynamic Range Compressor: A User Study. Journal of the Audio Engineering Society 69(7–8): 576–85. https://doi.org/10.17743/jaes.2021.0028
Skowronek, M., Gilberti, R. M., Petro, M., Sancomb, C., Maddern, S. and Jankovic, J. 2022. Inclusive STEAM Education in Diverse Disciplines of Sustainable Energy and AI. Energy and AI 7: 100124. https://doi.org/10.1016/j.egyai.2021.100124
Smith, C., Crocker, S. and Allman, T. 2017. Reading between the Lines: Accessing Information via YouTube’s Automatic Captioning. Online Learning 21(1): 115–31. https://doi.org/10.24059/olj.v21i1.823
Solomon, A. L. 1980. Music in Special Education before 1930: Hearing and Speech Development. Journal of Research in Music Education 45: 580–600. https://doi.org/10.2307/3345033
Stumer, J., Hickson, L. and Worrall, L. 1996. Hearing Impairment, Disability and Handicap in Elderly People Living in Residential Care and in the Community. Disability and Rehabilitation 18(2): 76–82. https://doi.org/10.3109/09638289609166021
Terzi, L. 2004. The Social Model of Disability: A Philosophical Critique. Journal of Applied Philosophy 21(4): 141–57. https://doi.org/10.1111/j.0264-3758.2004.00269.x
Trivedi, U., Alqasemi, R. and Dubey, R. 2019. Wearable Musical Haptic Sleeves for People with Hearing Impairment. Proceedings of the 12th ACM International Conference on Pervasive Technologies Related to Assistive Environments. Rhodes, Greece: ACM, 146–51. https://doi.org/10.1145/3316782.3316796
Turner, W. W. and Bartlett, D. E. 1848. Music among the Deaf and Dumb. American Annals of the Deaf and Dumb 2: 1–6.
Vaisberg, J. M., Martindale, A. T., Folkeard, P. and Benedict, C. 2019. A Qualitative Study of the Effects of Hearing Loss and Hearing Aid Use on Music Perception in Performing Musicians. Journal of the American Academy of Audiology 30(10): 856–70. https://doi.org/10.3766/jaaa.17019
Vaisberg, J. M., Beaulac, S., Glista, D., Macpherson, E. A. and Scollie, S. D. 2021. Perceived Sound Quality Dimensions Influencing Frequency-Gain Shaping Preferences for Hearing Aid-Amplified Speech and Music. Trends in Hearing 25. https://doi.org/10.1177/2331216521989900
Wallace, R. 2018. Hearing Beethoven: A Story of Musical Loss and Discovery. Chicago, IL: University of Chicago Press.
Wolf, M. and Younie, S. 2019. The Development of an Inclusive Model to Construct Teacher’s Professional Knowledge: Pedagogic Content Knowledge for Sound-based Music as a New Subject Area. Organised Sound 24(3): 274–88. https://doi.org/10.1017/S1355771819000347
Yang, M. 2021. Recognition of Sound Source Components in Soundscape Based on Deep Learning. The Journal of the Acoustical Society of America 149: A71. https://doi.org/10.1121/10.0004549
Zanin, J. and Rance, G. 2016. Functional Hearing in the Classroom: Assistive Listening Devices for Students with Hearing Impairment in a Mainstream School Setting. International Journal of Audiology 55(12): 723–9. https://doi.org/10.1080/14992027.2016.1225991