
Reframing Sound Shapes in Spectromorphological Composition: Notating perspectival space through spherical, Euclidean and Cartesian-coordinate systems

Published online by Cambridge University Press:  24 November 2023

Tiernan Cross*
Affiliation:
The University of Sydney, Sydney, Australia

Abstract

This paper examines Smalley’s preliminary taxonomy of the sound shape and the subsequent application of graphical notation in electroacoustic music. It will demonstrate ways in which spatial categorisations of the morphological sound shape have remained relatively untouched in academia, despite a codependency of frequency, space and time. Theoretical examples and existing visualisations of the sound shape will be considered as a starting point, to determine why the holistic visualisation of space is warranted. A notational system addressing the codependency between spatial and spectral sound shapes will be presented, with reference to its context in Cartesian-coordinate sound environments. This method of electroacoustic notation will incorporate the visualisation of Smalley’s categorisation of spatial sound shapes and ideas of spatial gesture, texture and distribution within Smalley’s composed and listening spaces. This visualisation and notation of composed and listening spaces will demonstrate that audio technologies are imperative drivers in the future analysis and understanding of the sound shape. It will measure the modulation of spatial sound shape properties across Cartesian (height, width, depth) and spherical (azimuth and altitude) dimensions over linear temporality, to better represent the complete form of Smalley’s sound shape. This spatial notation will aid the rounded visualisation of Smalley’s morphology, motion, texture, gesture, structure and form. Use of this notational framework will illustrate ways in which a new tool to score electroacoustic sound shapes can inform new practices in computer music composition.

Type
Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. INTRODUCTION

The visual relationship between spectral and spatial notation is a fundamental component of spectromorphology. Theoretically they are fused (Smalley 1986, 1996, 1997, 2014; Lotis 2003, 2011; Holbrook 2019), but visually they are often treated separately. While spectral properties of the sound shape are representative of modern notational influence (Blackburn 2009), space is often notated using less linear techniques (Dunkelman 1995, cited in Couprie 2011; Harley 1996, 1997; Berquin 2011; Kim-Boyle 2019), or in a way that strays from Smalley’s visual sound shape (Boulez in Planel and Merlier 2011; Justel 2011; Anderson 2015). More specifically, the framework of the visual sound shape has not been re-evaluated to incorporate the precise placement of sound in spherical and Cartesian-coordinate settings, which is becoming ever more relevant as technology extends the granularity of control composers can exercise in immersive acoustic settings. Without a system for accurately notating the coordinates of sound in these spatial settings, the notation of space will remain loose and general.

Smalley considered space imperative in his categorisations. He used the term spatiomorphology to highlight the properties of space in their own right, maintaining that spatial properties were fundamental to sound shapes and ensuring that these properties constituted their own categorisations of sonic experience (1997: 122). Using sound shapes to demonstrate that spatial perception is an intrinsic part of spectromorphology, he argued that spectromorphology only existed through the medium of space (122). As such, Smalley established a terminology that could address the articulation of sound through space and temporality (89–92). These categorisations established a common vocabulary to describe the relationships between spectra- and spatio-structure, and between growth and behaviour within the temporal flux of electroacoustic music. It is this integrated relationship between space and frequency that has not yet been visually represented in score.

In recent years, researchers have brought the notion of Cartesian spaces to the forefront of discussion (El Raheb, Stergiou, Katifori and Ioannidis 2020; Oomen, Holleman and De Klerk 2016), though further work needs to be done in understanding how to score or notate the spatial properties of coordinate environments. Blackburn (2011) argued that the visualisation of space should utilise a “three-dimensional canvas” to present a more accurate picture of spatial sound organisation. Despite this, no visualisation currently exists that articulates the actuality or degree of such height, width or depth in Cartesian environments, with studies straying from the notion of Smalley’s sound shape visualisation (Ham 2017; Merlier 2018; Bacon 2022; Carinola and Geoffroy 2022), pivoting towards spatial performance (Gorne 2015) or placing an emphasis on static speaker placement rather than the notation of sound across space itself (Couprie 2005, cited in Couprie 2011; Duchenne 1991; Berezan 2011; Justel 2011; Ellberger 2016). As such, the aim of this article is to create an accurate representation of both spectro- and spatiomorphological sound shapes in one cohesive notational system.

Technologies have become increasingly able to accurately map the omnidirectional placement of sound in spatial settings (Berquin 2011; Duchenne 2011; Oomen, Holleman and De Klerk 2016; El Raheb et al. 2020), justifying the demand for a notational system that accurately scores these spaces. Programs such as 4DSOUND, IRCAM’s SISMO, SPARTA and ZKM’s Zirkonium demonstrate how modern computing applications can aid the visualisation of the spatial sound-field. Visually, these programs present real-time implementations of linear and parametric spatial audio reproduction and processing methods, including the visualisation of x and y planes to depict the height and width of sound (McCormack and Politis 2019). The next challenge is to convert spatial information into a time-based, readable notation to match the dominant time-frequency analysis of spectral sound shapes.

2. SPECTROMORPHOLOGY AND SPATIOMORPHOLOGY

Nyström (2011) has suggested that acousmatic music is on a trajectory of spatial exploration through discourses of spectromorphology and the space-form. The alignment of spatial propagations to the sound shape has been largely neglected and the visual relationship between spectral and spatial distribution remains inconsistent. Spatiomorphology is not simply a by-product of spectromorphology; the two are intertwined. Spectra cannot exist without space, and spatial aspects must be included to notate spectromorphological processes accurately.

This visual relationship between spectra and space is disjointed and fails to represent the taxonomies they are meant to cooperatively symbolise. In sound shape notation, the spatiomorphological elements of sound are largely ignored to make way for the linear, frequential sound shape (see Blackburn 2011). This may well be the consequence of space not being as linear or as pragmatic as the standardised Western score (Mountain and Dahan 2020), alongside the fact that space is harder to quantify without technological intervention. Currently, visualisations of the sound shape bear resemblance to modern technological representations of spectrality, such as the spectrogram. A main limitation of the spectrogram is its inability to capture any spatial detail for analytical or notational purposes. Space itself carries notation into an additional dimension that is often seen as harder to capture, articulate, measure or quantify. Spatial properties of electroacoustic music are consistently scored in non-linear or abstract scenarios (Harley 1996, 1997; Kim-Boyle 2019), with Cartesian environments only really being approached holistically as interactive spaces (El Raheb et al. 2020). Newly developed technologies, such as the 4DSOUND system (Oomen et al. 2016) and SPARTA (McCormack and Politis 2019), help bridge the gap in visualising space and demonstrate that sound shapes can now be deconstructed algorithmically, both spatially and spectrally. Despite these technological advancements, visualisations of Smalley’s sound shape have not been formulated in a way that ties the measurements of frequency and space together in Cartesian sound environments. As Smalley’s categorisation intertwines with space, spatial texture and spatiomorphology, it is imperative to think about how to incorporate space as a medium of score alongside spectral properties. The next challenge is to link the foundations of these studies together, to formulate a notational system that articulates space whilst also including the notion of Smalley’s sound shape visualisation.

3. PERSPECTIVAL SPACE

To notate space, a central point is needed to incorporate the spatial properties of morphological sound objects. Though this paper draws its examples from realised multichannel environments, it can be assumed that the point of origin could be utilised in future studies for binaural settings, to notate the perceived psychoacoustic locality of sound objects through the use of headphones. In Cartesian-coordinate sound systems this central point is created by physical locality (Cross 2019). In stereophonic or headphone-related spaces it could be actualised through the depiction of panoramic space, treating the stereophonic properties of left (L) (-90°) and right (R) (+90°) as the minima and maxima of value. Whatever the space composers choose to use, it must be measured or compartmentalised in a way that aligns to the visual parameter, value and scale of frequency. Furthermore, in order to represent the parameters of three-dimensional sound shapes in these omnidirectional Cartesian coordinate environments, all three perpendicular axes (x-axis, y-axis and z-axis) need to be considered. Though the initial spectromorphological rhetoric lends itself to the perspective of the listener, Smalley placed the development of localisation and spatial setting in the hands of the composer (1997: 122). The following proposal of the spatial sound shape will approach spectro- and spatiomorphology using Cartesian coordinates as a simplified starting point for composers and listeners alike. Using Cartesian coordinates and a single point of origin (O) to notate the space itself avoids any misinterpretation of spatial positioning, spatial movement and scale.
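As a minimal sketch of this point-of-origin approach (the class and function names below are hypothetical illustrations, not drawn from any existing tool), a sound object’s locality can be stored as coordinates relative to O, with the stereophonic case treated as a one-dimensional reduction in which the pan angle between -90° (L) and +90° (R) is mapped onto the horizontal axis:

```python
from dataclasses import dataclass

@dataclass
class SpatialPoint:
    """Position of a sound object relative to the point of origin O = (0, 0, 0).

    x: width (left/right), y: height (down/up), z: depth (back/front),
    each expressed on a normalised scale of -1.0 to +1.0.
    """
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def from_stereo_pan(pan_degrees: float) -> SpatialPoint:
    """Reduce a stereophonic pan position to the horizontal axis.

    -90 degrees (hard left) maps to x = -1.0, +90 degrees (hard right)
    maps to x = +1.0; height and depth remain at the origin.
    """
    pan_degrees = max(-90.0, min(90.0, pan_degrees))
    return SpatialPoint(x=pan_degrees / 90.0)

print(from_stereo_pan(-45.0))   # SpatialPoint(x=-0.5, y=0.0, z=0.0)
```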

4. SCORING SPECTROMORPHOLOGY

To linearise the measurement of spatial sound shapes alongside spectral properties in graphical notation, it is critical to understand which common elements of the frequential sound shape can be scaled to spatial parameters. When looking at morphological sound shapes and their place in graphical scores, they are depicted linearly. This common approach to spectromorphological notation and its subsequent analysis is reliant on a two-axis representation that neglects the distribution of space, bounding temporality (x) to a frequential or dynamic scale (h), as demonstrated in Figure 1 (Hirst 2011). Fischman’s notation of Crosstalk (in Smalley 1997: 127) depicts the common representation of the morphological sound shape, in which time and frequency are bonded together. These visuals do not stray far from early categorisations of the sound shape, in which frequential distribution flows through a left-to-right temporal approach (Hirst 2011; Blackburn 2011). This frequential sound shape resembles the linear notation of Western classical traditions, with noticeable influence from spectrographic and technologically aided depictions.

Figure 1. Two-plane system frequency × time graphical score demonstrating a left-to-right approach (Fischman in Smalley 1997: 127).

Fischman’s Crosstalk is a clear example of this two-plane system, in which time and frequency are bonded in a similar format to the spectrogram, with no consideration of space. This theme is constant, with the frequency/pitch × temporality/time visualisation appearing in various representations of modern electroacoustic sound shapes. This arrangement of spectral shapes indicates a heavy influence of technological aid. More specifically, these visualisations of morphology as both an analytical and a compositional tool (Patton 2007; Thoresen 2007; Nyström 2011; Maestri 2018) align the sound shape to the familiar visual cues of spectrographic technologies (Mountain and Dahan 2020: 127). This can be seen in Blackburn’s (2011) visual sound shapes, which form a preliminary foundation of the visual sound shape. Blackburn (2011) follows this spectra-centric approach, with a horizontal system of time and a vertical approach to predominantly frequency-based forms. Blackburn’s (2006) visual semblance of the sound shape aligns to the dominant structure of the spectral or frequential plane, with little consideration of the distribution of space.

By dissecting Blackburn’s visual sound shape, we are left with a simple formula for time-frequency analysis. Both parameters, x (time) and h (frequency), are governed by their respective International System of Units (SI) measures. In electroacoustic settings, both of these units are controlled by the composer as they determine the frequential distribution (Hz) of a sound over time (t).

Given that spectral space is defined as the available range of frequency, it can be anticipated that the value of frequency (h) is dependent on the value or range of notes, or of periodic vibration, presented via a top (high frequency) to bottom (low frequency) approach. While the frequencies captured in spectrograms vary across the board, the one constant is scale, by way of an upper and lower limit. Typically, spectrograms hold a frequency scale of 0 to 20,000 hertz (Hz). This specific value limit will be used throughout this paper to highlight frequential range. Spectromorphology is very much dependent on how frequencies exist over time and form a conglomerate sound structure (sound shape) (Lotis 2003: 260). As such, all spatial components or geometries should be scored comparably, though according to the localisation of the sound shape instead of its frequency. To successfully notate space, the value of frequency in the x (time) × h (frequency) equation needs to be replaced by spatial properties as scalable values. Given that all music by nature is temporal (Mountain and Dahan 2020), the notation of space should exhibit some aspect of space being tied to time or temporality. By replacing the plane of frequency (h) with the spatial planes of width, height and depth, a three-dimensional representation of space can be measured.
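To make the idea of shared scaling concrete, the following sketch (hypothetical helper names, assuming the 0–20,000 Hz limit used in this paper and a ±1.0 spatial range about O) normalises both frequency and spatial coordinates onto the same 0–1 vertical scale, so that a spectral stave and a spatial stave can be plotted against the same time axis:

```python
FREQ_MIN_HZ = 0.0
FREQ_MAX_HZ = 20_000.0   # upper limit used throughout this paper
SPACE_MIN = -1.0         # e.g. hard left / bottom / rear relative to O
SPACE_MAX = +1.0         # e.g. hard right / top / front relative to O

def normalise(value: float, lo: float, hi: float) -> float:
    """Map a value from [lo, hi] onto the shared 0-1 stave scale."""
    return (min(max(value, lo), hi) - lo) / (hi - lo)

def stave_height_for_frequency(hz: float) -> float:
    return normalise(hz, FREQ_MIN_HZ, FREQ_MAX_HZ)

def stave_height_for_space(coordinate: float) -> float:
    return normalise(coordinate, SPACE_MIN, SPACE_MAX)

# A 440 Hz event placed slightly left of the origin lands at directly
# comparable heights on the spectral and horizontal staves.
print(stave_height_for_frequency(440.0))  # 0.022
print(stave_height_for_space(-0.25))      # 0.375
```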

5. SCORING SPATIOMORPHOLOGY

One challenge that remains unanswered by previous research is how to notate the value of spatial information in a way that is quantifiable, similar to the measurement of time (seconds) and frequency (hertz) in currently established notational models (see Figure 1). This notational framework is justified by its ability to score spatio- and spectromorphology with the same notation and measurement models, a method that is missing from previous notations of space and frequency. Using Euclidean or Cartesian geometry to facilitate the work of the composer, we can not only specify the precise positioning of any sound object in three-dimensional space, but also quantify its geometric size, placement and value against that of a spectral shape.

By utilising the model of time-frequency analysis for time-space visualisation, this section will propose a way to notate spatial aspects of electroacoustic music in line with the standardised linear depiction of frequential sound shapes. By administering a point of origin (O), which is usually the central point of spherical and Cartesian spaces (see Figure 3), we can determine the perspectival space of the composer through frontal orientation, standardise the central localisation values of the compositional space and match these values to those of the notational score. A listener’s own listening space will depend on their coordinate placement within these acoustic settings and will not necessarily be the same as the compositional space unless they are placed at the point of origin. This point can align to the origin (O) of both the spherical and Cartesian sound systems. By choosing a common point (the origin) as the intersection of all axes, we are able to treat the compositional space with the same localisation values as the notational space, allowing for precise notation and therefore avoiding loose analysis. This origin point is vital to the precision in scoring space that has been missing from previous research. Using these spaces allows for the streamlined analysis of notational values, compositional methods and consequential perceptual effects on the listener from the same data sets. These data sets will be integral to the analysis of perceptual experiences brought about by scoring spatial sound shapes.
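One way to illustrate how the spherical and Cartesian systems can share the origin O is a simple conversion between azimuth, elevation and distance and x, y, z coordinates. This is a sketch under assumed conventions (a right-handed frame with frontal orientation along the y-axis; the function names are hypothetical), not a prescription of any particular system’s geometry:

```python
import math

def spherical_to_cartesian(azimuth_deg, elevation_deg, distance):
    """Convert a spherical position about O to Cartesian coordinates.

    Assumed convention: the composer/listener at O faces along +y;
    azimuth is measured clockwise from the front (+90 = hard right),
    elevation upwards from the horizontal plane.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.sin(az)   # width  (left/right)
    y = distance * math.cos(el) * math.cos(az)   # depth  (back/front)
    z = distance * math.sin(el)                  # height (down/up)
    return x, y, z

def cartesian_to_spherical(x, y, z):
    """Inverse conversion, returning (azimuth_deg, elevation_deg, distance)."""
    distance = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, y))
    elevation = math.degrees(math.asin(z / distance)) if distance else 0.0
    return azimuth, elevation, distance

# A source 30 degrees to the right, 10 degrees above the horizon, 2 m from O.
print(spherical_to_cartesian(30.0, 10.0, 2.0))
```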

5.1. Horizontal axis

To notate the horizontal plane of space, spatial values must be determined. In Figure 2, the value of the horizontal axis (x) has been limited to θ = ±100. A previous instance of this can be seen in Nyström’s (2011) depiction of panoramic space distribution (Smalley 2007: 55). Replacing the plane of frequency (h) with (x) creates the plottable dimensions of the horizontal plane. When composing for spherical or Cartesian coordinate systems in this manner, the composer can plot sound shapes that mimic a combination of movements across horizontal, median and frontal planes, ultimately granting full control of the intricate placement of sound in space.

Figure 2. The standard distribution of horizontal space.

Figure 3 depicts the spatial visualisation of emergence, providing clarification on the visualisation of width. The figure depicts the horizontal distribution of the sound shape in a spatial sense, with Point 1 signifying an onset value of 0. At Point 1, the score indicates that this portion of the sound shape should bear a thin width, much like that of monophonic playback. The notation of Point 2 is much wider, indicating a gradual increase in horizontal distribution towards points a and b, which represent the furthest possible values from the point of origin (O). Following the sound shape in the linear fashion of a standardised score, we can see that the resulting sound gradually increases in horizontal occupancy to fill the horizontal spatial plane.

Figure 3. Visualisation of emergence on the azimuth plane, indicating sizable spatial growth.

5.2. Median plane

5.2.1. Analogical roots

To score height in Cartesian environments, a clear separation must be drawn between spatial elevation and misconceptions of frequential height. The notion that frequency is tied to spatial perception is analogical and not rooted in fact. Higher frequencies are often misattributed with spatial elevation (Smalley 1997: 122); in these instances, higher pitches are perceived as spatially higher, and lower pitches as rooted. In actuality, the frequency of a sound has no bearing on that sound’s spatial location (ibid.) and should be scored as such. In order to notate accurate localisation in topological, geometrical and physical spaces, the composer must break away from the analogy that frequential height impacts space, as often depicted throughout the visualisations of modern mixing systems. In a multichannel setting, elevation or locality can be distinguished by the vertical coordinate at which a sound shape appears in space.

5.2.2. Scoring elevation and depth

As with the horizontal plane (x), spatial values must be determined in order to notate the vertical plane of space (y). Once again assuming a frontal orientation, the vertical plane of the coordinate system highlighted in Figure 2 also depicts the values of c, d and the spectrum of coordinate values between them. Utilising the coordinate value system, this time on the y-axis of the vertical plane instead of the x-axis of the horizontal plane, the composer is given the notational dimensions of elevation or height. With the same set of lower and upper limits (c, O and d), the composer is able to align the vertical plane (y) to the same visual limits as both frequency (h) and the horizontal plane (x).

To score the axis of depth, the preceding coordinate value system can be used to score the e–f axis (see Figure 2). Figure 4 then illustrates the combined graphical staff representation of each spatial and spectral axis, in which the frequency, horizontal, vertical and depth axes are scaled equally alongside one another. When these staves are scored relative to one another, the accurate notation of three-dimensional space and frequency is achieved. This notational system is aligned to the form of the conventional, linear score and the established spectral sound shape. This notational framework forms the basis on which the spatial sound shape will be notated in conjunction with the frequential sound shape.

Figure 4. A notational score consisting of frequency, width and height as equal components of the sound shape.
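A minimal data-structure sketch of this combined staff follows (the names are hypothetical and not drawn from any existing notation package): each stave is a list of time-stamped breakpoints, and the score simply keeps the staves aligned on one shared time axis so they can be read, or rendered, relative to one another.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Breakpoint = Tuple[float, float]   # (time in seconds, value on that stave's scale)

@dataclass
class SpatialSpectralScore:
    """Parallel staves sharing one time axis: frequency (Hz), plus width,
    height and depth (each -1.0 to +1.0 relative to the origin O)."""
    frequency: List[Breakpoint] = field(default_factory=list)
    width: List[Breakpoint] = field(default_factory=list)
    height: List[Breakpoint] = field(default_factory=list)
    depth: List[Breakpoint] = field(default_factory=list)

    def value_at(self, stave: List[Breakpoint], t: float) -> float:
        """Linearly interpolate a stave at time t (clamped at both ends)."""
        if not stave:
            return 0.0
        if t <= stave[0][0]:
            return stave[0][1]
        for (t0, v0), (t1, v1) in zip(stave, stave[1:]):
            if t0 <= t <= t1:
                if t1 == t0:
                    return v1
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        return stave[-1][1]

score = SpatialSpectralScore(
    frequency=[(0.0, 220.0), (4.0, 880.0)],
    width=[(0.0, 0.0), (4.0, 1.0)],    # emergence: point source to full width
    height=[(0.0, -0.5), (4.0, 0.5)],
    depth=[(0.0, 0.0), (4.0, 0.0)],
)
print(score.value_at(score.width, 2.0))  # 0.5
```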

5.3. Motion

The characteristics of texture, gesture, motion and direction are imperative to the notation of space. Using Blackburn’s (2011) visual directory of sound units and of starts, middles and ends, it is apparent that the visual representations of frequency can be applicable to the spatial distribution of sound shapes. Blackburn’s visual cues are transferable to spatial notation, though each notational shape holds a different value depending on the plane of the score in which it resides. An example of this can be seen through the visualisation of ascent (Smalley 1997: 116). On the horizontal plane, the use of ascent symbolism would create new context and indicate a right-to-left motion (Blackburn 2011: 8), whereas on the median plane the same sound shape would indicate a low-to-high movement in localisation (Figure 5). This can also be seen through the visualisation of cyclic motion in Figure 6, in which frequential sound shapes would need to vary from those of spatial sound shapes to achieve their respective directional growth.

Figure 5. Ascent depicted on the differing axes of elevation and azimuth. Both sound shapes result in differing motions. The azimuth axis presents a right-to-left motion spatially, whilst the elevation axis presents a bottom-to-top motion spatially.

Figure 6. Cyclic motion’s various representations (2π spatially). Each axis demonstrates an example of directional growth.

Blackburn’s (2011: 7) application of starts, middles and ends is feasible in terms of notating, or mapping, a spatial phrase or spatial scene. Her visualisation of structural functions, morphological strings and composites can be used interchangeably on spatial cues to notate spatiomorphology. Repurposing Blackburn’s (2011) sound shapes for spatiomorphological notation allows the embellishment of spatial growth and motion processes, formulating new visual meaning. Figure 7 demonstrates how glide (Blackburn 2011: 12) can be achieved spatially, distributed over a vertically localised space, while Figure 8 demonstrates how dissipation no longer correlates to frequential matter, taking on new meaning by way of spatial location. Using this method, the spatial sound shape can be notated within the same framework as the spectra and used in compositional settings to develop complex spatial scenes.

Figure 7. Smalley’s glide distributed over vertical localised space.

Figure 8. Dissipation distributed over vertical localised space.
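How a single notational shape can be reassigned across axes might be sketched as one envelope function whose meaning changes with the stave it is assigned to. This is a hypothetical illustration: a plain linear ramp stands in for the ascent symbol discussed above, and the axis names are assumptions of this paper’s notation rather than an existing convention.

```python
def ascent(progress: float) -> float:
    """A normalised ascent shape: 0.0 at the start of the gesture, 1.0 at the end."""
    return max(0.0, min(1.0, progress))

def apply_to_axis(axis: str, progress: float) -> float:
    """Map the same shape onto different spatial axes.

    On the azimuth stave the ramp reads as motion from right (+1) to left (-1);
    on the elevation stave it reads as motion from bottom (-1) to top (+1).
    """
    value = ascent(progress)
    if axis == "azimuth":
        return 1.0 - 2.0 * value      # right-to-left
    if axis == "elevation":
        return 2.0 * value - 1.0      # bottom-to-top
    raise ValueError(f"unknown axis: {axis}")

for p in (0.0, 0.5, 1.0):
    print(p, apply_to_axis("azimuth", p), apply_to_axis("elevation", p))
```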

5.4. Composed and listening spaces

One component of spectromorphology that has previously been neglected in sound shape visualisation is the ability to score, notate or analyse composed and listening spaces accurately. Contiguous spaces, non-contiguous spaces and the manifestations of spatial perspective, or parameters, through time have remained absent from systems of sound shape notation (Blackburn 2011). Without a way to measure or notate spatiomorphology, the composer forfeits the opportunity to measure contiguous space, otherwise known as the properties revealed to the listener through spectromorphological events across space itself (Figure 9). Without these measurements, any reference point or dimension of the sound space is lost. This also removes any reference point for the size, location or depth of any sound that is to be perceived by the listener. As a result, the composer loses the ability to control, or measure, composed and listening spaces in compositional and analytical instances. This means that the composer is only able to visualise a portion of a sound shape’s articulated properties.

Figure 9. Contiguous space depicting the parameters of spatial distribution.

This is overcome by scoring the spatial properties of the sound shape. An example of this can be seen in Figure 10, which demonstrates how intimate space can be applied to either elevation or azimuth. By grouping sound objects ‘closer’ to the listener, or giving them the sensation of appearing close to the listener, the composer is able to create spatial intimacy. Alternatively, sounds that appear further away, or on the boundary of the listener’s audible perception, would sit outside the intimate space. Similarly, breadth and depth can be notated: the sound shape’s size illustrates proportions of breadth, while the distance of a sound shape from the point of origin (O) on the z-axis indicates depth. Both orientation (multidirectional; frontal) and image definition can be visualised using this notation system, depicting the motion, growth and behaviour of sound in a spatial sense. Using this method, composers can begin to accurately define the spatial parameters of unfolding sound shapes and their respective spaces. Through this, contiguous space is defined, while trajectorial drama, textural definition and localisation can all be recorded. Figure 10 also demonstrates how non-contiguous spaces can be clearly articulated, as sounds are scattered and clustered across regions, highlighting spatial gaps between sound shapes.

Figure 10. Intimate space.
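As a hedged sketch of how such proximity could be quantified from the same notation (the radius threshold and function names are assumptions for illustration, not values given by Smalley or this paper), the distance of a notated sound shape from the origin can be used to flag whether it falls inside an intimate region of the listening space:

```python
import math

def distance_from_origin(x: float, y: float, z: float) -> float:
    return math.sqrt(x * x + y * y + z * z)

def is_intimate(x: float, y: float, z: float, radius: float = 0.3) -> bool:
    """Treat sound shapes within an assumed radius of O as 'intimate'."""
    return distance_from_origin(x, y, z) <= radius

print(is_intimate(0.1, 0.0, 0.2))   # True: grouped close to the listener at O
print(is_intimate(0.9, 0.4, -0.8))  # False: outside the intimate region
```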

5.5. Grouping

Through the notation of spatial properties, Smalley’s motion and growth processes are accentuated. By combining all three axes of the sound shape (frequency, width, height), composers are able to create what I have termed groupings. Groupings further articulate and heighten the parameters of motion and growth processes. This can be seen through the grouping of spatial and spectral sound shapes to embellish the ‘sense of directed motion’ achieved through bi/multidirectional motions. Figure 11 depicts the motions of dilation (‘becoming wider or larger’) and contraction (‘becoming smaller’) (Smalley 1997: 116). The composer is able to notate an embellished sense of dilation (or contraction) by grouping each axis of notation together, with all spatial and spectral qualities morphing in unison to accentuate the proposed motion of widening. The opposite effect of contraction can be achieved through the unison narrowing of the sound shape on each axis. Similarly, all cyclic/centric motions (Smalley 1997: 116) can be notated by mapping out variances to the revolution (or full rotation) of one or more axes. Reciprocal (Figure 12) and unidirectional motions can also be achieved individually, or on multiple grouped axes.

Figure 11. Grouping frequency, width and height together allows for the embellishment of certain morphological processes, such as growth.

Figure 12. Reciprocal space.
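A sketch of grouping as described above (hypothetical names; a simple linear scaling stands in for the dilation envelope): one factor is applied in unison to the spectral bandwidth and to each spatial extent, so that dilation or contraction reads across all grouped axes at once.

```python
from dataclasses import dataclass

@dataclass
class SoundShapeExtent:
    """Extent of a sound shape on each grouped axis at one instant."""
    bandwidth_hz: float   # spectral spread
    width: float          # horizontal extent, 0.0-2.0 about O
    height: float         # vertical extent, 0.0-2.0 about O

def grouped_dilation(base: SoundShapeExtent, factor: float) -> SoundShapeExtent:
    """Dilate (factor > 1) or contract (factor < 1) all grouped axes in unison."""
    return SoundShapeExtent(
        bandwidth_hz=base.bandwidth_hz * factor,
        width=min(2.0, base.width * factor),
        height=min(2.0, base.height * factor),
    )

shape = SoundShapeExtent(bandwidth_hz=500.0, width=0.4, height=0.4)
print(grouped_dilation(shape, 2.0))   # widened in frequency and space together
print(grouped_dilation(shape, 0.5))   # contracted in unison
```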

6. IMPLEMENTATION

Preliminary implementation of this notational method can be seen in Schema (Cross 2019), a spatial electroacoustic work in which a conjunction of frequential and spatial sound shapes was used as a notational tool. This composition was developed using the 4DSOUND system (Oomen et al. 2016), in which a point of origin (O) allowed for the notation of the Cartesian coordinate system. The composition was diffused across the 48.9 configuration of MONOM’s Cartesian coordinate multichannel sound system (Figure 13). While developing this notational system, it became clear that spatial sound shapes could be aligned to the typeset of frequential sound shapes and implemented through technological aid. This notation of three-dimensional arrangement allows for the maximum utilisation of Cartesian architecture to formulate the controlled spatialisation of unconventional combinations of sound. The spatial properties and resulting listening intricacies of Schema are an example of ways in which to utilise composed and listening spaces with less restraint, by means of technology. Figure 14 illustrates how the visual relationship between spectral and spatial notation is consistent in scale. The result is a more holistic approach to sound shape notation. This method of rounded notation has revealed how interchangeable the categorisations of motion, growth and texture are across the planes of space, frequency and time. By combining frequency, width and height within the same temporal space, Schema accurately notates the sound shape in a holistic manner that is reflective of both perceptual instinct and precise technological measurement.

Figure 13. Cartesian co-ordinate sound system speaker placement.

Figure 14. Schema (Cross 2019) score utilising spectral and spatial sound shapes.

7. TECHNOLOGICAL ASSISTANCE

Given that spectromorphological visualisation has leveraged technological advancement (i.e., the spectrogram), it is important to note ways in which technological advancement might aid the visualisation of the spatial sound shape. In the same way that this paper highlights how interchangeable generic sound shapes are, it becomes important to capture how the granular placement of textons is just as interchangeable through technological means.

Early conceptualisations of the spatial sound shape came with their own sets of technological challenges, pushing any visual representation or notation of spatial sound shapes into the background. Historically, spectromorphology has largely been intended as a descriptive or analytical tool and not a tool for composition, with Smalley at one stage urging researchers to ‘ignore technology’ (Smalley 1997). Ignoring technology seems restrictive to the capability of electroacoustic music, causing compositional practice to become separated from the sound shape of perceptual analysis. Today, recent technological advancements enable the measurement and assessment of the inner properties of the spatial sound shape in a similar manner to that of frequential spectrograms. The application of vector-based amplitude panning (VBAP), multiple-direction amplitude panning and higher-order Ambisonics has allowed composers to delve deeper into the spatialisation of sound, and inherently sound shapes (Cross 2018; Holbrook 2019; Zotter and Frank 2019). Spatial audio and acoustic environments have allowed for the treatment of ‘space as an instrument with as much a role as previously dominating harmonic constructs’ in computer music composition (Cross 2018). Applications such as SPARTA (McCormack and Politis 2019) have since made such technological challenges redundant.
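For readers unfamiliar with how such panning methods position a source between loudspeakers, the following is a minimal sketch of two-dimensional pairwise VBAP in the general manner of Pulkki’s formulation, not the implementation used by SPARTA or 4DSOUND: gains for a loudspeaker pair are obtained by expressing the panning direction in the basis of the two loudspeaker direction vectors and normalising.

```python
import math

def unit_vector(azimuth_deg: float):
    """2D unit vector for an azimuth measured from the front towards the right."""
    a = math.radians(azimuth_deg)
    return (math.sin(a), math.cos(a))

def vbap_pair_gains(source_az: float, spk1_az: float, spk2_az: float):
    """Gains for a loudspeaker pair placing a source at source_az (2D VBAP)."""
    p = unit_vector(source_az)
    l1, l2 = unit_vector(spk1_az), unit_vector(spk2_az)
    det = l1[0] * l2[1] - l1[1] * l2[0]           # invert the 2x2 speaker matrix
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.sqrt(g1 * g1 + g2 * g2)           # constant-power normalisation
    return g1 / norm, g2 / norm

# A source midway between loudspeakers at -30 and +30 degrees gets equal gains.
print(vbap_pair_gains(0.0, -30.0, 30.0))
```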

Much like the early traits of spectromorphological sound shapes that have lent themselves to the visual aids of the spectrogram and sonograph (Blackburn 2006), composers can begin to rely on technological applications such as 4DSOUND and SPARTA as a crutch to understand spatiomorphological sound shape visualisation. From here, it is important to consider what this technological advancement means for spatiomorphology in compositional, analytical and perceptual settings. With technology that now allows composers to manipulate and measure the inner spatial properties of the sound shape, next steps in this field of study would benefit from recording the results of perceptual exposure to such manipulated sonic experience. Future research into the precise granular placement and augmentation of texture, gesture and motion would be beneficial in allowing composers to better understand the perceptual response to intricately placed electroacoustic filaments.

It is clear that technology is pushing the intricacy of space-form archetypes further. Technology allows electroacoustic composers to delve deeper into the structural fabric of sound, break it down and reconstruct it. As frequential visualisation has surged due to the technological aid of spectrographic information, it becomes important to consider ways in which technology might allow composers to access the inner spatial properties of the sound shape. Electroacoustic composers and researchers must creatively embrace new technologies to craft new sonic experiences and compositional practices. Technological capability must be welcomed in theoretical settings in order to craft deeper analysis and understanding of the spectromorphological sound shape in electroacoustic composition and resulting perceptual experiences.

8. CONCLUSION

This paper examined Smalley’s preliminary taxonomy of the sound shape and the challenges in linking time, space and frequency in the graphical notation of electroacoustic music. A method to adapt Smalley’s sound shapes for the notation of sound in three-dimensional sound environments was presented. The challenge of spatial accuracy arose, with previous research struggling to capture the precise notational or geometric value of sound objects. The use of a point of origin (O) seemingly resolved this problem in Cartesian environments, allowing the composer to plot sound shapes representative of morphological space alongside time and frequency simultaneously. This coordinate value of space allows spatial information to be measured similarly to the scales of time (seconds) and frequency (hertz). Notationally, the use of groupings has demonstrated ways in which characteristics of sound shapes can be distributed interchangeably between spectral, spatial and temporal nodes. This paper has demonstrated how Cartesian space can be notated as a starting point for discussion. The next challenge is understanding how to notate perspectival, circumspace or egocentric space around the listener. Future work will address ways in which the bounds of this paper’s notational model could be tweaked to score spherical revolution around the composer or listener. Similarly, future work will investigate how proximate, distal and lateral spaces can be notated in relation to the depth and localisation of sound in reference to the listener’s specific vantage point.

Looking forward, it is imperative to consider how the measurement of spatial sound properties can be harnessed to embellish sonic filaments and create further subcategories of spatial sound manipulation. A stronger emphasis on the compositional possibilities and their subsequent analyses is needed to truly understand the parameters of modern sound shapes. Future research will combine this notational method with the precision of technological and spatially driven sound environments. This research will include the measurement of the preceding visualisation techniques in compositional settings, and the measurement of consequential listener experiences. I anticipate that this will forge more complex morphological composition and analysis in the field of electroacoustic computer music.

REFERENCES

Anderson, E. L. 2015. Space as a Carrier of Materials, Meaning, and Metaphor in Karlheinz Stockhausen’s Music-Theatre Composition Sirius. Bielefeld: transcript Verlag.
Bacon, B. 2022. Rethinking the Notation Design Space. In T. Vincent, J. Bell and C. de Paiva Santana (eds.) Proceedings of the International Conference on Technologies for Music Notation and Representation – TENOR’2022. Marseille, France.
Berezan, D. 2011. Mantis: Festival, Community, Sound Diffusion System and Research. In Gorne, A. V. (ed.) Lien: L’espace du Son III. Ohain, Belgium: Musiques et Recherches, 39–44.
Berquin, L. 2011. Sismo: Solution intégrée pour la spatialisation de sources multiphoniques orientée multipoint. In Gorne, A. V. (ed.) Lien: L’espace du Son III. Ohain, Belgium: Musiques et Recherches, 45–54.
Blackburn, M. 2006. Manuella Blackburn’s Valley Flow Analysis. http://orema.dmu.ac.uk/analyses/manuella-blackburns-valley-flow-analysis (accessed 5 May 2022).
Blackburn, M. 2009. Composing from Spectromorphological Vocabulary: Proposed Application, Pedagogy and Metadata. Electroacoustic Music Society Conference 2009: Heritage and Future. Buenos Aires.
Blackburn, M. 2011. The Visual Sound Shapes of Spectromorphology: An Illustrative Guide to Composition. Organised Sound 16(1): 5–13.
Carinola, R. and Geoffroy, V. 2022. On Notational Spaces in Interactive Music. In T. Vincent, J. Bell and C. de Paiva Santana (eds.) Proceedings of the International Conference on Technologies for Music Notation and Representation – TENOR’2022. Marseille, France.
Couprie, P. 2011. Représenter l’espace? In Gorne, A. V. (ed.) Lien: L’espace du Son III. Ohain, Belgium: Musiques et Recherches, 20–8.
Cross, T. 2018. Paradigm Shifts in the Technological Spatialization of Music. In Alexandra, S. and de Dios Cuartas, M. J. (eds.) Los nuevos métodos de producción y difusión musical de la era post-digital. Seville: Ediciones Egregius, 85–98.
Cross, T. 2019. Schema: Towards a Post-Biological Composer 2019. Spatial Sound Institute, MONOM and CTM 2019.
Duchenne, J.-M. 1991. Habiter l’espace acousmatique. In Dhomont, F. (ed.) Lien: L’espace du Son II. Ohain, Belgium: Musiques et Recherches, 84–6.
Duchenne, J.-M. 2011. De la capture à la projection multiphonique, un exemple de composition: Tournages. In Gorne, A. V. (ed.) Lien: L’espace du Son III. Ohain, Belgium: Musiques et Recherches, 43164.
El Raheb, K., Stergiou, M., Katifori, A. and Ioannidis, Y. 2020. Symbolising Space: From Notation to Movement Interaction. In R. Gottfried, G. Hajdu, J. Sello, A. Anatrini and J. MacCallum (eds.) Proceedings of the International Conference on Technologies for Music Notation and Representation – TENOR’20/21. Hamburg, Germany: Hamburg University for Music and Theater.
Ellberger, E. 2016. Taxonomy and Notation of Spatialization. In R. Hoadley, C. Nash and D. Fober (eds.) Proceedings of the International Conference on Technologies for Music Notation and Representation – TENOR’16. Cambridge: Anglia Ruskin University.
Gorne, A. V. 2015. Space, Sound, and Acousmatic Music: The Heart of the Research. In Martha, B. and Ralph, P. (eds.) Kompositionen für hörbaren Raum/Compositions for Audible Space. Bielefeld: transcript Verlag, 205–20.
Ham, J. J. 2017. An Architectural Approach to 3D Spatial Drum Notation. In H. L. Palma, M. Solomon, E. Tucci and C. Lage (eds.) Proceedings of the International Conference on Technologies for Music Notation and Representation – TENOR’17. Coruna, Spain: Universidade da Coruna.
Harley, J. 1996. Iannis Xenakis: La Légende d’Eer and Aïs, Gendy3, Taurhiphanie, Thalleïn. Computer Music Journal 20(2): 124–7.
Harley, M. 1997. An American in Space: Henry Brant’s ‘Spatial Music’. American Music 15(1): 70–92.
Hirst, D. 2011. From Sound Shapes to Space-Form: Investigating the Relationships between Smalley’s Writings and Works. Organised Sound 16(1): 42–53.
Holbrook, U. A. S. 2019. Sound Objects and Spatial Morphologies. Organised Sound 24(1): 20–9.
Justel, E. 2011. Vers une syntaxe de l’espace. In Gorne, A. V. (ed.) Lien: L’espace du Son III. Ohain, Belgium: Musiques et Recherches, 111–31.
Kim-Boyle, D. 2019. 3D Notations and the Immersive Score. Leonardo Music Journal 29: 39–41.
Lotis, T. 2003. The Creation and Projection of Ambiophonic and Geometrical Sonic Spaces with Reference to Denis Smalley’s Base Metals. Organised Sound 8(3): 257–67.
Lotis, T. 2011. The Perception of Illusory and Non-identical Spaces in Acousmatic Music. In Gorne, A. V. (ed.) Lien: L’espace du Son III. Ohain, Belgium: Musiques et Recherches, 63–70.
Maestri, E. 2018. A Spectro-Gestural-Morphological Analysis of a Musical-Tactile Score. Tokyo: Springer Japan.
McCormack, L. and Politis, A. 2019. SPARTA & COMPASS: Real-time Implementations of Linear and Parametric Spatial Audio Reproduction and Processing Methods. Audio Engineering Society Conference 2019: Immersive and Interactive Audio. York: Audio Engineering Society.
Merlier, B. 2018. Space Notation in Electroacoustic Music: From Gestures to Signs. In S. Bhagwati and J. Bresson (eds.) Proceedings of the International Conference on Technologies for Music Notation and Representation – TENOR’18. Concordia University, Montreal, Canada.
Mountain, R. and Dahan, K. 2020. Editorial: Time in Electroacoustic Music. Organised Sound 25(2): 127–9.
Nyström, E. 2011. Textons and the Propagation of Space in Acousmatic Music. Organised Sound 16(1): 14–26.
Oomen, P. H., Holleman, P. and De Klerk, L. 2016. 4DSOUND: A New Approach to Spatial Sound Reproduction and Synthesis. Spatial Sound Institute, Living Architecture Systems Group White Papers 2016. https://papers.cumincad.org/data/works/att/lasg_whitepapers_2016_238.pdf (accessed 4 October 2023).
Patton, K. 2007. Morphological Notation for Interactive Electroacoustic Music. Organised Sound 12(2): 123–8.
Planel, H. and Merlier, B. 2011. Thélème contemporain et l’espace du son dans la musique électroacoustique: la 5ème dimension du son musical. In Gorne, A. V. (ed.) Lien: L’espace du Son III. Ohain, Belgium: Musiques et Recherches, 29–38.
Smalley, D. 1986. Spectro-Morphology and Structuring Processes. In Emmerson, S. (ed.) The Language of Electroacoustic Music. London: Palgrave Macmillan, 61–98.
Smalley, D. 1996. The Listening Imagination: Listening in the Electroacoustic Era. Contemporary Music Review 13(2): 77–107.
Smalley, D. 1997. Spectromorphology: Explaining Sound Shapes. Organised Sound 2(2): 107–26.
Smalley, D. 2007. Space-Form and the Acousmatic Image. Organised Sound 12(2): 35–58.
Thoresen, L. 2007. Spectromorphological Analysis of Sound Objects: An Adaptation of Pierre Schaeffer’s Typomorphology. Organised Sound 12(2): 129–41.
Zotter, F. and Frank, M. 2019. Ambisonics: A Practical 3D Audio Theory for Recording, Studio Production, Sound Reinforcement, and Virtual Reality. Cham, Switzerland: Springer International.

VIDEOGRAPHY

Smalley, D. 2014. Spatiality in Acousmatic Music. CIRMMT Distinguished Lectures in the Science and Technology of Music. YouTube. www.youtube.com/watch?v=_G68Q4gkOMc (accessed 4 October 2023).