
Towards Deconstructivist Music: Reconstruction paradoxes, neural networks, concatenative synthesis and automated orchestration in the creative process

Published online by Cambridge University Press:  01 August 2023

Philon Nguyen*
Affiliation:
Department of Music, Faculty of Fine Arts, Concordia University, Montreal, Quebec, Canada
Eldad Tsabary
Affiliation:
Department of Music, Faculty of Fine Arts, Concordia University, Montreal, Quebec, Canada
*Corresponding author email: [email protected]

Abstract

Since the 1980s, deconstruction has become a popular approach to designing architecture. In music, however, the term has been absorbed less readily by the related literature, with a few exceptions. In this article, ideological groundings for deconstructivism in music are introduced through the concepts of enchaînement and reconstruction paradoxes. Similar to the Banach–Tarski paradox in mathematics, reconstruction paradoxes occur when reconstructing the parts of a whole no longer yields the same properties as the whole. In music, a reconstruction paradox occurs when a piece constructed from tonal segments no longer yields a perceived tonality. Deconstruction in architecture relies heavily on computer-aided design (CAD) to realise complex ideas. Similarly in music, computer-aided composition (CAC) techniques such as neural networks, concatenative synthesis and automated orchestration are used. In this article, we discuss such tools in the context of the new aesthetics advocated here: deconstructivist music.

Type
Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. INTRODUCTION

The role of the composer as an artist who has been bestowed the Atlantean burden of choosing notes and sounds (Murail Reference Murail2005) has led to many formalisations and theories about music. From stochastic and spectral music to neo-Riemannian and transformational theories, different schools of thought have proposed different solutions and techniques, to the benefit of the novice composer who can now choose among different paths. Notes can be picked randomly from some scale (a sieve in the sense of Xenakis or a partials reconstruction in the sense of Murail), a series (in the sense of the serial school), a process (in the sense of process music) or by an algorithm (in the sense of algorithmic and generative music). This paper describes the deconstructivist aesthetics that emerges from music-generation techniques developed in new music, notably concatenative synthesis, neural networks, random walks, transformational techniques and their variants. It discusses various instances of reconstruction paradoxes, in which parts do not reconstruct the whole. We then discuss three case studies: first, random walks in neo-Riemannian spaces learned from a corpus; second, automated orchestration of piano rolls based on distance spaces learned from a corpus; and finally, the use of recursive neural networks (RNN) to generate polyphonic multichannel sketches (i.e., orchestral) based on a learned corpus and steering input data. This article is an exploratory walk in the lands of (this so-called) deconstructivist music, not an exhaustive survey of the field. Other papers have served this purpose (Roads Reference Roads1985; Edwards Reference Edwards2011; Briot, Hadjeres and Pachet Reference Briot, Hadjeres and Pachet2020; Ji, Luo and Yang Reference Ji, Luo and Yang2020).

2. BACKGROUND AND RELATED WORK

2.1. Deconstruction in architecture

Deconstruction in philosophy, a late-structuralist offshoot developed by French philosopher Jacques Derrida, has had a long legacy in the arts since as early as the 1980s (Wigley Reference Wigley1993; Vitale Reference Vitale2019). In the fine arts, architecture is perhaps the field in which deconstruction has been most successfully applied as a technique with palpable and visible aesthetic results, as exemplified by the works of architects Daniel Libeskind, Zaha Hadid, Peter Eisenman, Bernard Tschumi, Rem Koolhaas, Frank Gehry, Coop Himmelblau or Morphosis, among many others (Figure 1). About destruction and architecture, in a harrowing vision of a lost Europe (in his work Danube), Claudio Magris writes:

It is comforting that travel should have an architecture, and that it is possible to contribute a few stones to it, although the traveller is less like one who constructs landscapes for that is a sedentary task than like one who destroys them … But even destruction is a form of architecture, a deconstruction that follows certain rules and calculations, an art of disassembling and reassembling, or of creating another and different order. (Magris Reference Magris2001)

Figure 1. (left) Vladimir Tatlin, Project for a Monument to the Third International, 1919 (The Museum of Modern Art/Licensed by SCALA/Art Resource, NY); (centre) Constructive Theatrical Set by Iakov Chernikhov (1889–1951) (figure reproduced from his book The Construction of Architectural and Machine Forms (Chernikhov Reference Chernikhov1931)); (right) Micromegas, drawings (1979) by Daniel Libeskind (Image Courtesy © Daniel Libeskind). The literature indicates that the term ‘deconstructivist architecture’ (inspired by early Russian constructivism and the avant-garde of the twentieth century) was first popularised in the MOMA exposition catalogue Deconstructivist Architecture (Johnson and Wigley Reference Johnson and Wigley1988).

In architecture and the visual arts, the deconstructivist movement of the 1980s and 1990s found its inspiration partly in the Russian Constructivism of the early twentieth century (Johnson and Wigley Reference Johnson and Wigley1988; Wigley Reference Wigley1993) and partly in challenging new designs coming from computer-generated art (Boden Reference Boden2009). The patterns used for the windows of Libeskind’s Jewish Museum in Berlin are reminiscent of some of Malevich’s paintings, as are the random geometries early personal computers could generate.

We can describe the deconstructivist works of the 2000s and forward as part of a post-deconstructivist or late deconstructivist era in architecture, where, inspired by the early deconstruction of the 1980s, form has been effectively liberated from the straight line. A myriad of designs has been imagined and built following this initial drive. Computer-aided design and physical models in the design of complex shapes have been instrumental in this artistic movement by allowing greater technical possibilities and greater control over the rendered product. Generative art (e.g., using machine learning) has been a trend in computer graphics and visual arts since at least the 1960s (Boden Reference Boden2009; Audry Reference Audry2021).

2.2. Deconstruction in music

Deconstruction has also been explicitly applied to the theory of music and its meanings (philosophical and sociological). Rose Subotnik seems to have coined the term in music, using critical theory to provide a new interpretation of classical music (e.g., Mozart, Chopin) (Subotnik Reference Subotnik1995). Deconstructivism in architecture relates to a set of techniques, effective processes with aesthetic and ‘palpable’ results. In music, however, existing work on deconstructivist approaches is scattered among different niches of contemporary music with diverse labellings.

No deconstructivist school of thought effectively exists in music, and association with a heuristic idea of deconstruction may be more an impression left on an auditor by some work than an intentional process.

This being said, the so-called parametric music of serialism and high modernism was an early example of deconstruction (of a piece into its parametric models). The generative material is represented as a set of tweakable parameters with which a piece can be reconstructed. It was commonplace among composers of the era to believe that once the series was chosen, the rest of the piece would follow and could be reconstructed. Koblyakov’s analysis of Boulez’s Marteau sans maître, for instance, undeniably demonstrates the parametric and generative aspects of Boulez’s work (Koblyakov Reference Koblyakov1993). Also, composer Iannis Xenakis is known to have parametrised the stochastic generation of some of his most famous pieces, yielding results that still today can surprise and impress (Solomos Reference Solomos2015). Edwards (Reference Edwards2011) lists the following examples as forerunners of present-day algorithmic music: Guillaume Dufay’s (1400–74) isorhythmic motet Nuper rosarum flores; evidence of Fibonacci relationships in the music of Bach, Schubert and Bartók; Mozart’s Musikalisches Würfelspiel (1792); and the Quadrille Melodist sold by Professor J. Clinton of the Royal Conservatory of Music, London (1865) (a set of cards that allowed a pianist to generate quadrille music).

The music of Brian Ferneyhough and the visual scores of John Cage or Sylvano Bussotti are also proto-deconstructions (Ferneyhough Reference Ferneyhough1981; Attinello Reference Attinello1992; Bogue Reference Bogue2014; Hidalgo and Ipinza Reference Hidalgo and Ipinza2016). Figure 2 shows several such examples of deconstruction in music. In Ferneyhough’s work, the serial use of rhythm trees (i.e., in the sense of the Patchwork or OpenMusic software) disjuncts the time dimension; for John Cage, the deconstruction of the time domain comes from the use of indeterminacy and chance composition; for Bussotti, the deconstruction is visual (as it relates to deconstruction in the visual arts) – the straight line, as in architectural deconstruction, is literally perturbed. The work Piano Pieces for David Tudor: 1. Rhizome by Bussotti was incidentally inspired by the ‘rhizome’ concept of post-structuralist philosopher Gilles Deleuze (Reference Deleuze2013): a ‘rhizome’ (a concept borrowed from botany and dendrology) is a structure of constant splitting, such that synthesis (in a Hegelian sense) of the parts is no longer possible. Corbussen (Reference Corbussen2002) describes a work by Gerd Zacher, Die Kunst seiner Fuge (1968) – a set of ten variations of Contrapunctus I by Bach in which the composer interprets the piece in various ways – as deconstructivist. Variations are a musical form that accords well with the idea of deconstruction, or parametric music in general. Generated content and reconstructions can be obtained by slightly varying some parameters, yielding significantly different results.

Figure 2. (left) Fontana Mix by John Cage (Reference Cage1958) (© 1958 by Henmar Press Inc. Permission by C. F. Peters Corporation. All rights reserved); (centre-left) John Cage, II from Mushroom Book 1972 (image courtesy © John Cage Trust; digital image © The Museum of Modern Art/Licensed by SCALA/Art Resource, NY); (centre-right) Introduction: Rhizome – From Five Piano Pieces for David Tudor Music by Sylvano Bussotti (© 1959 Casa Ricordi Srl, a division of Universal Music Publishing Classics & Screen. International Copyright Secured. All Rights Reserved. Reprinted by permission of Hal Leonard Europe BV (Italy)); (right) Lemma-Icon-Epigram by Brian Ferneyhough (© Reference Ferneyhough1981 by Hinrichsen Edition, Peters Edition Limited. Permission by C. F. Peters Corporation. All rights reserved).

Granular synthesis – deconstructing (segmenting into ‘grains’) and reordering audio signals (often with additional parametric signal processing) – is a technique notably pioneered by Xenakis (Solomos Reference Solomos2015). The same process applied to music notation (via MIDI) is at the source of techniques such as concatenative synthesis – which could be performed on audio or MIDI (Zils and Pachet Reference Zils and Pachet2001; Schwarz Reference Schwarz2004, Reference Schwarz2005; Maestre, Ramírez, Kersten and Serra Reference Maestre, Ramírez, Kersten and Serra2009).
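Granular deconstruction of this kind can be sketched in a few lines. The following is a minimal, hypothetical illustration (fixed-length grains, uniform random reordering), not a reconstruction of any of the systems cited above, which segment by onsets or sound units and select grains by feature similarity:

```python
import numpy as np

def granulate(signal, grain_len, hop=None):
    """Cut a 1-D audio signal into fixed-length grains (a simplified
    segmentation; real systems cut at onsets or analysed units)."""
    hop = hop or grain_len
    return [signal[i:i + grain_len]
            for i in range(0, len(signal) - grain_len + 1, hop)]

def reorder(grains, rng):
    """Concatenate the grains in a random order: the most basic
    'reconstruction out of place'."""
    order = rng.permutation(len(grains))
    return np.concatenate([grains[i] for i in order])

rng = np.random.default_rng(seed=1)
signal = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # 1 s test tone
grains = granulate(signal, grain_len=2205)                   # ~50 ms grains
out = reorder(grains, rng)                                   # deconstructed signal
```

The same two-step shape (segment, then reassemble under some policy) carries over directly to MIDI-domain concatenative synthesis, where the grains are note groups rather than sample windows.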

The work on real-time generative accompaniment and concatenative synthesis (Dannenberg Reference Dannenberg1993; Lewis Reference Lewis2000; Thom Reference Thom2000; Young Reference Young, Kronland-Martinet, Ystad and Jensen2007; François Pachet, Roy and d’Inverno Reference François Pachet, Roy and d’Inverno2013; Carsault Reference Carsault2017; Nika, Déguernel, Chemla-Romeu-Santos and Vincent Reference Nika, Déguernel, Chemla-Romeu-Santos and Vincent2017) could also be described as a form of musical deconstruction. In the OMax project (Dubnov, Assayag and El-Yaniv Reference Dubnov, Assayag and El-Yaniv1998; Assayag, Bloch, Chemilliera, Cont and Dubnov Reference Assayag, Bloch, Chemilliera, Cont and Dubnov2006; Assayag and Bloch Reference Assayag and Bloch2007), segments are classified in a suffix tree using the Factor Oracle algorithm. These research systems segment musical sources and reconstruct them out of place based on different heuristics. In the DYCI2 project (Nika et al. Reference Nika, Déguernel, Chemla-Romeu-Santos and Vincent2017), the musical material is broken into memories that are triggered in real time by an instrumentalist. The work Ex Machina by saxophonist Steve Lehman and artistic director Frédéric Maurin is a hallmark of the use of DYCI2. Interestingly, most of these systems have parameters designed to improve the continuity of the reconstructed pieces, or their contextual/genre readability. These parameters can, however, be pushed to their extremities to yield completely deconstructed, disjunct results. Also worth noting are Umberto Eco’s (Reference Eco1965) work on open works and indeterminacy as pioneered by John Cage and the New York School (Iddon and Thomas Reference Iddon and Thomas2020).

3. RECONSTRUCTION PARADOXES

A fundamental concept in the construction of tonal music, often overlooked in favour of concepts such as tonality, harmony, counterpoint and timbre, is enchaînement. Simplistically speaking, enchaînement is to music what the straight line is to architecture. The related concept of voice leading and its perception have been extensively studied in the cognitive sciences (Huron Reference Huron2016). Enchaînement is thus the set of rules defining what comes before and what comes after a given musical event (e.g., a certain chord, a dissonance, an incomplete melodic pattern): the actualisation of harmony and tonality in a temporal structure. In neo-Riemannian theory, or transformational theory, some chords occur with higher probability after some other chords based on their position in some neo-Riemannian embedding space. In Hollywoodian music, this syntax has been made evident (Lehman Reference Lehman2018). It is our opinion that enchaînement conventions define musical order in at least as pervasive a manner as tonality or timbre.
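The neo-Riemannian intuition that each chord has preferred near neighbours can be made concrete with the classic P, L and R transformations on triads. The sketch below (triads as (root pitch-class, quality) pairs; the random walk is our illustrative addition, not a published algorithm) shows how chains of such moves model enchaînement probabilistically:

```python
import random

def P(root, quality):
    """Parallel: C major <-> C minor."""
    return (root, 'min' if quality == 'maj' else 'maj')

def R(root, quality):
    """Relative: C major <-> A minor."""
    return ((root + 9) % 12, 'min') if quality == 'maj' else ((root + 3) % 12, 'maj')

def L(root, quality):
    """Leading-tone exchange: C major <-> E minor."""
    return ((root + 4) % 12, 'min') if quality == 'maj' else ((root + 8) % 12, 'maj')

def walk(start, steps, rng):
    """Toy enchainement model: each step applies a random neo-Riemannian
    move, so successive chords are always Tonnetz neighbours."""
    chord, path = start, [start]
    for _ in range(steps):
        chord = rng.choice([P, R, L])(*chord)
        path.append(chord)
    return path

progression = walk((0, 'maj'), 8, random.Random(0))  # start from C major
```

Each transformation is an involution (applying it twice returns the original triad), which is what makes these spaces natural terrains for reversible walks.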

In the piece Morphogenesis (2022) (Figure 3), we used as source material Beethoven’s Symphony no. 9 (4th movement), or Bee9, and a recursive neural network to reconstruct the material using Boulez’s Notations for orchestra as a steering material. The reconstructed pieces were composed of reordered (tonal) segments from Bee9, and they sound atonal.

Figure 3. Sketch for the work Morphogenesis (2022): reconstruction of Bee9 using a segmentation window of length on average μ = 2.0512s steered using Boulez’s Notations for orchestra. The result is reminiscent of the pointillism of early post-Webernian serial music.

In a sense, this relates to the Banach–Tarski paradox where reconstruction from the parts does not yield the whole. Sets in complex spaces (a 3D ball or a musical piece) may not always follow expected behaviours. The following quote gives a description of the Banach–Tarski paradox (de Rauglaudre Reference de Rauglaudre2017: 37):

[The] Banach-Tarski Paradox states that a ball in 3D space is equidecomposable with twice itself, i.e. we can break a ball into an infinite number of pieces, and with these pieces, build two balls having the same size as the initial ball. This strange result is actually a Theorem which was proven in 1924 by Stefan Banach and Alfred Tarski using the Axiom of Choice.

In music, we could call this a reconstruction paradox. In the same way, the reverse transformation or rotations in post-romantic and serial music could be seen as generative techniques (e.g., Rachmaninov’s Variations on a Theme by Paganini). Breaking the enchaînement conditioning of musical sources effectively generates new sonorities and aesthetics. We will discuss the details of these compositional processes in later sections.

Figure 4 demonstrates a reconstruction paradox using an example from Rachmaninov’s Piano Concerto no. 2 (hereafter Rach2). In the study sketch for the piece Morphogenesis (2022) (Figure 3), a neural network was used to learn associations between segments of Rach2 and their MIDI representations. The neural network was then used to transform segments from Boulez’s Notations for orchestra into multitrack polyphonic MIDI signals (i.e., orchestral segments). Judging from the atonality of the resulting sequences, the perceived sense of tonality is not the consequence of choosing notes from a tonal scale but rather an artefact of tonal enchaînement patterns (i.e., tonally syntactic chord progressions, dissonance resolution conventions and so on).

Figure 4. (a) An excerpt of the piano part (MIDI layout) of the first movement of Rachmaninov’s Piano Concerto no. 2 (bb. 38–42); (b) a reconstruction of the piano part of Rachmaninov’s Piano Concerto No. 2 steered using Boulez’s Notations for orchestra. Multitrack polyphonic MIDI files are reconstructed using neural networks (for simplicity, only the piano parts are shown here). Syncopations and disjunctions are introduced in the reconstruction. While (a) is clearly tonal (post-romantic), (b) is clearly modern: a reconstruction paradox occurs when a reconstruction from the parts of a whole no longer yields the same properties as the whole.

4. PIECES, SLABS, WEDGES, CASES AND STUDIES

4.1. Learning generative transformations in neo-Riemannian spaces

At least two trends in algorithmic music and composition can be distinguished in the massive grove of new music techniques and technologies: on one hand, generative techniques, such as OMax, DYCI2, Somax (Assayag et al. Reference Assayag, Bloch, Chemilliera, Cont and Dubnov2006); and on the other hand, a perpetual search for understanding inner structures of musical composition through neo-Riemannian and transformational theories (some examples include Lewin Reference Lewin1987; Lerdahl Reference Lerdahl1996; Tymoczko Reference Tymoczko2011; Cohn Reference Cohn2012). Concepts from transformational theories, such as chordal distance and spatial embeddings, have been used in generative techniques to steer the generation of music from pure randomness to some sense of coherent aesthetics. The ability to choose notes among the infinitude of possible assemblies of frequencies has been at the core of contemporary music research since Xenakis’s exploration into stochasticity and his development of sieve theory (Xenakis Reference Xenakis1992; Solomos Reference Solomos2015), Messiaen’s quest for modes of limited transposition (Messiaen Reference Messiaen2000), and Cage’s chance music, to the usage of partials and microtonality generated from sound analyses in the spectral school (Murail Reference Murail2004, Reference Murail2005). The quest for theories of compositional choice is foundational to the design of algorithms and processes of generative music.

Figure 5 shows the embeddings-into-nearest-neighbour graphs of segments obtained from distance matrices using the Spiral Array distance (Chew Reference Chew2014) and computed for different works (i.e., Rach2 and Schoenberg’s Klavierstücke, op.11, no. 3). Different pieces yield different structures (which is reminiscent of the work on generative meshing in the new architectural geometry field). These graphs and structures can be used as terrain for a random walk steered by the concavities and convexities of the terrain. We start from an input chord and the algorithm chooses some optimal path based on the data it has learned from the different works. Reproducing the genre or style of the piece is not the primary goal of such methods. Following the concept of reconstruction paradoxes, we can say that the reconstituted wholes rarely display all the properties of the original works. Figure 6 shows examples of such reconstructions. Figure 7 shows a piece generated from a random walk in a space learned from ‘O Fortuna’ from Carmina Burana by Carl Orff. The random walks generated on a single channel were then orchestrated using Orchidea, a software developed at IRCAM (Maresz Reference Maresz2013).

Figure 5. (left) A generative transformation learning algorithm is learned from transformation matrices (i.e., similarity distance matrices); (centre) two-dimensional Euclidean space embedding using the multidimensional scaling (MDS) algorithm; (right) the network community graphs for (a) segments generated from Schoenberg’s Klavierstücke, op. 11, no. 3 and (b) segments generated from Rach2.

Figure 6. (a) A random walk based on Schoenberg’s Klavierstücke, op. 11, no. 3; (b) a random walk based on a piano reduction of Rach2. Both random walks were steered by chordal distance measures (e.g., chromagrams, Estrada distance, Costère distance and Chew distance).

Figure 7. Excerpt from the cantata for choir and piano The Fall of Rome, 1. The Last Harangue (2022). The source material used for training the model was ‘O Fortuna’ from Carl Orff’s Carmina Burana. From the model, a random walk was generated and the result was orchestrated using the software Orchidea developed at IRCAM (Maresz Reference Maresz2013).

The generated random walks sometimes preserve some characteristics of the original corpus (in ‘O Fortuna’, the grandiloquence could still be perceived); however, reconstruction paradoxes (especially in the perception of tonality and metric rhythm) still occur in the generated artefacts. The random walk algorithms are described in Algorithms 1 and 2.
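A minimal sketch of the ideas behind Algorithms 1 and 2 follows. It is a simplification under stated assumptions: segments are bare pitch-class sets, the distance is a plain Euclidean chroma distance standing in for the Spiral Array and other measures used in the article, and the walk chooses uniformly among each segment's k nearest neighbours:

```python
import numpy as np

def chroma(pcs):
    """12-dimensional binary chroma vector for a pitch-class set
    (an Algorithm 1-style feature)."""
    v = np.zeros(12)
    v[list(pcs)] = 1.0
    return v

def distance_matrix(segments):
    """Pairwise Euclidean distances between segment chroma vectors:
    the learned 'transformation matrix' of the terrain."""
    feats = np.array([chroma(s) for s in segments])
    diff = feats[:, None, :] - feats[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def random_walk(D, start, steps, k, rng):
    """Walk the k-nearest-neighbour graph induced by D (a simplified
    Algorithm 2): at each step, jump to one of the k closest segments."""
    path = [start]
    for _ in range(steps):
        nn = np.argsort(D[path[-1]])[1:k + 1]  # index 0 is the segment itself
        path.append(int(rng.choice(nn)))
    return path

segments = [{0, 4, 7}, {9, 0, 4}, {2, 5, 9}, {7, 11, 2}, {0, 3, 7}]  # toy corpus
D = distance_matrix(segments)
path = random_walk(D, start=0, steps=8, k=2, rng=np.random.default_rng(0))
```

Steering, in this framing, amounts to biasing the neighbour choice (e.g., weighting by distance to a target chord) rather than choosing uniformly.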

4.2. Distance-based automated orchestration

Automated orchestration has offered fewer candidates for review than algorithmic composition. Orchidea, the latest creation of the Orch* series developed at IRCAM by Carmine-Emanuele Cella, uses a customisable database of sounds to output an optimal instrumentation using genetic algorithms (Maresz Reference Maresz2013). IRCAM has also developed another piece of software, Orchestral Piano by Leopold Crestel (Crestel and Esling Reference Crestel and Esling2017), in which orchestration rules are learned from real orchestrations of piano reductions. The machine-learning algorithms then learn association predictions from the piano reduction and orchestration data using variants of Restricted Boltzmann Machines (RBM, cRBM and FGcRBM). Finally, Handelman and Sigler (Reference Handelman and Sigler2012) proposed an automated orchestration method based on the concept of z-chains.

We demonstrate a technique based on some chordal distance measure (e.g., chromagrams, Estrada distance, Costère distance and Chew distance) that has a lightweight learning phase (i.e., updating a distance matrix) and can therefore be used in real time with minimal footprint (storing a transformation matrix, which in the case of subsets of 12 notes is 4096 × 4096). In a first, learning phase, an existing MIDI orchestration of some work A is segmented and tagged by its pitch-class content (i.e., the multichannel MIDI signal is collapsed, and duplicate pcs across all octaves are deleted). In a second, generation phase, MIDI from some work B is used to generate sketches: each segment of work B is matched to some segment in work A, and a reconstruction of A, steered by B, is generated.
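The two phases can be sketched as follows, under the simplifying assumptions that segments are plain lists of MIDI note numbers and that the distance is Euclidean over the collapsed 12-bit chroma vectors (the 4096 = 2^12 possible pc-sets mentioned above):

```python
import numpy as np

def pc_bits(notes):
    """Collapse MIDI note numbers across all channels and octaves to a
    12-bit pitch-class vector (duplicate pcs are deleted by construction)."""
    v = np.zeros(12)
    for n in notes:
        v[n % 12] = 1.0
    return v

def match(work_a, work_b):
    """For each segment of work B, return the index of the closest segment
    of work A (a simplified generation phase: A's orchestrated segments are
    then emitted in B's order, i.e., a reconstruction of A steered by B)."""
    A = np.array([pc_bits(s) for s in work_a])
    B = np.array([pc_bits(s) for s in work_b])
    D = np.linalg.norm(B[:, None, :] - A[None, :, :], axis=-1)
    return D.argmin(axis=1)

a = [[60, 64, 67], [62, 65, 69], [59, 62, 67]]  # hypothetical A segments (orchestrated)
b = [[48, 52, 55], [74, 77, 81]]                # hypothetical B segments (steering)
idx = match(a, b)                               # B segments mapped onto A segments
```

Because matching is by pc-set only, octave and instrumentation differences between A and B are deliberately ignored: the orchestral texture comes entirely from A, the harmonic trajectory from B.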

This process is explained in Algorithms 1 and 3 (in Figure 8). Algorithm 1 is shared between the random walk algorithm and the distance-based automated orchestration algorithm, as the generation of a transformation matrix uses the same techniques. This process can also handle timbre in the distance-based automated orchestration algorithm. Timbre, here, is extracted from the information on the channels/instruments provided in the MIDI file. Creating a distinct transformation matrix for the channel information, in addition to the note information (pitch-class sets, or harmony), is possible; so is joining both transformation matrices using a weighting strategy.

Figure 8. Algorithms 1–4 discussed in this article.

4.3. Recursive neural networks for steered orchestral concatenative synthesis

Neural networks in music synthesis have been studied extensively since the idea was first put forth (Lewis and Todd Reference Lewis, Todd, Touretzky, Hinton and Sejnowski1988; Lewis Reference Lewis1998). More recently, RNN-, ConvNet-, GAN-, VAE- and concatenative synthesis-based techniques in music synthesis have been experimented with, notably in IRCAM’s automated accompaniment libraries (e.g., OMax, DYCI2, Somax).

Here we demonstrate the use of a robust and well-tested technique from speech recognition/classification (the so-called spoken digit classification problem) using RNNs and apply it to music synthesis. As usual, neural networks are sensitive to input data and training parameters.

We generated three sets of input data using Beethoven’s Symphony no. 9 (4th movement), or Bee9: one with an average segment length of μ = 0.137405s (a short segment length, corresponding to a segment generated at each onset of the MIDI file, modulo a precision parameter handling notes that do not hit exactly the same onset); one with an average of μ = 1.04207s (a coarse segment length in which segments could contain many onsets – motifs); and a coarser segmentation with an average of μ = 2.05122s.
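The onset-based segmentation with a precision parameter can be sketched as below. This is a hypothetical reading of the procedure (events as (onset, pitch) pairs; onsets within `precision` seconds of a segment's first onset are merged into it), not the actual tooling used for the piece:

```python
def segment_by_onset(notes, precision=0.01):
    """Group (onset_seconds, midi_pitch) events into segments: a new
    segment starts at each onset, and onsets within `precision` seconds
    of the segment's first onset are treated as simultaneous."""
    segments = []
    for onset, pitch in sorted(notes):
        if segments and abs(onset - segments[-1][0]) <= precision:
            segments[-1][1].append(pitch)   # same chord, slightly offset onset
        else:
            segments.append((onset, [pitch]))
    return segments

# Toy input: a slightly smeared C-major dyad, a lone G, a smeared octave dyad.
notes = [(0.000, 60), (0.004, 64), (0.5, 67), (1.001, 72), (1.002, 76)]
segs = segment_by_onset(notes)
```

Coarser segmentations (the μ = 1.04207s and μ = 2.05122s datasets) would instead merge runs of such onset segments into larger motif-length windows.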

Different datasets used to train the RNN yielded different properties in the generated music. Shorter segments yielded less diversity in the generated segments, while coarser segments yielded more diversity and increased the recognisability of the input corpus. Reconstruction paradoxes are nonetheless clear in the generated material, even with the longer segments of the Bee9 example. This is illustrated in Figure 9.

Figure 9. (a) Histogram showing the index of the chosen segments (i.e., such an index is the label of some set of notes/channels/onsets/durations with duplicate notes deleted) and the number of such indices returned by a recursive neural network trained on a fine-grained segmentation (the average segment size was μ = 0.1374s) – the fine-grained segmentation corresponded to a segmentation where each new onset generated a segment; (b) histogram for a coarse segmentation (μ = 1.0420s); (c) histogram for a coarser segmentation (μ = 2.0512s). Coarse segmentations could contain multiple onsets where a given segment contained a continuous succession of segments from the source material (i.e., Bee9), effectively increasing continuity. Finer segmentations yielded less diversity in the segments chosen and more repetition. Coarser segmentations yielded more diversity with the coarsest segmentation producing a Gaussian mixture. These neural networks were trained for the piece Morphogenesis (2022).

5. ALGORITHMS

Algorithm 1 is identical for the random walk and distance-based orchestration algorithms, so a single implementation can serve both. When dealing with orchestration, the MIDI files used are multichannel: the feature extraction and embedding algorithm (Algorithm 1) learns the pitch-class space and collapses the channel information to keep only the pc-set data (note that this technique can be extended by learning the timbral embedding space and storing the MIDI channel information on another segment).

Algorithms 2 and 3 assume that a transformation matrix was learned using Algorithm 1. In the case of Algorithm 2, a single channel segment is returned. In the case of Algorithm 3, multichannel information is returned.

Algorithm 4 assumes a trained RNN of the type described in Figure 10. The RNN effectively classifies orchestral audio segments to their MIDI polyphonic multichannel counterparts (additional technicalities are necessary to handle MIDI files with more than 16 channels).

Figure 10. Example of architecture from the Wolfram Language and Mathematica software for a recursive neural network trained on audio segments and their corresponding MIDI segments. Audio inputs are encoded into MFCC coefficients. A segment number is decoded. The segment number corresponds to the MIDI data of the segment.

6. CONCLUSION

The study of deconstruction in music promises to be a testbed for many new works. Reconstruction paradoxes are real consequences of many generative and algorithmic music techniques that are being studied in new music (e.g., neural networks, random walks, concatenative synthesis, variational autoencoders, machine learning). When comparing Figures 4a (Rach2 non-deconstructed) and 4b (Rach2 deconstructed), we can see that the excerpt generated by AI has an embedded deconstructivist ‘logic’ to it: the traditional patterns of enchaînement do not seem to hold. The AI deconstructivist creations may share some traits and parameters with their component parts and the original pieces these parts were extracted from (mostly in the microtemporal domain); but they demonstrate unique aesthetic features, distinguishable by the absence of traditional enchaînement conventions – resulting from the dismantling of their original syntactic tonal/metric contexts. As we continue to research and create in the area of deconstructivist AI music, other paradoxes, phenomena and epiphanies are bound to be discovered, leading to a growing body of works and theoretical base.

Acknowledgements

We wish to thank the following for providing permission to reproduce artworks and scores in our paper: Edition Peters for the work Fontana Mix (1958) by John Cage; the John Cage Trust, Edition Peters and MOMA/Scala/Art Resource for the works II from Mushroom Book (1972) by John Cage; Casa Ricordi for the work Introduction: Rhizome – From Five Piano Pieces for David Tudor (1959); Edition Peters for the work Lemma-Icon-Epigram (1981) by Brian Ferneyhough; MOMA/Scala/Art Resource for the picture of the Project for a Monument to the Third International (1919) by Vladimir Tatlin; and Studio Libeskind for the drawing Micromegas (1979).

REFERENCES

Assayag, G. and Bloch, G. 2007. Navigating the Oracle: A Heuristic Approach. Proceedings of ICMC’07, International Computer Music Association, Copenhagen, 405–12.
Assayag, G., Bloch, G., Chemillier, M., Cont, A. and Dubnov, S. 2006. OMax Brothers: A Dynamic Topology of Agents for Improvization Learning. AMCMM ’06: Proceedings of the 1st ACM Workshop on Audio and Music Computing Multimedia, New York, 125–32.
Attinello, P. 1992. Signifying Chaos: A Semiotic Analysis of Sylvano Bussotti’s Siciliano. Repercussions 1(2): 84–110.
Audry, S. 2021. Art in the Age of Machine Learning. Cambridge, MA: MIT Press.
Boden, M. 2009. What is Generative Art? Digital Creativity 20(1/2): 21–46.
Bogue, R. 2014. Scoring the Rhizome: Bussotti’s Musical Diagram. Deleuze Studies 8(4): 470–90.
Briot, J.-P., Hadjeres, G. and Pachet, F. 2020. Deep Learning Techniques for Music Generation. New York: Springer.
Cage, J. 1958. Fontana Mix. London: Peters Edition.
Carsault, T. 2017. Automatic Chord Extraction and Musical Structure Prediction through Semi-Supervised Learning: Application to Human-Computer Improvisation. Master’s diss., ATIAM, Université Pierre-et-Marie Curie.
Chernikhov, I. 1931. The Construction of Architectural and Machine Forms (Konstruktsiya Arhitekturnyih i Mashinnyih Form). Leningrad: Izdanie Leningradskogo obschestva arhitektorov.
Chew, E. 2014. Mathematical and Computational Modeling of Tonality: Theory and Applications. Berlin: Springer.
Cohn, R. 2012. Audacious Euphony: Chromatic Harmony and the Triad’s Second Nature. Oxford: Oxford University Press.
Corbussen, M. 2002. Deconstruction in Music: The Jacques Derrida – Gerd Zacher Encounter. PhD diss., Erasmus University Rotterdam.
Crestel, L. and Esling, P. 2017. Live Orchestral Piano: A System for Real-Time Orchestral Music Generation. 14th Sound and Music Computing Conference 2017, Espoo, Finland.
Dannenberg, R. B. 1993. Software Support for Interactive Multimedia Performance. Journal of New Music Research 22(3): 213–28.
Deleuze, G. 2013. Mille plateaux: Capitalisme et schizophrénie, 2. Collection Critique. Paris: Les Éditions de Minuit.
Dubnov, S., Assayag, G. and El-Yaniv, R. 1998. Universal Classification Applied to Musical Sequences. Proceedings of the International Computer Music Conference, Michigan.
Eco, U. 1965. L’oeuvre ouverte. Paris: Éditions du Seuil.
Edwards, M. 2011. Algorithmic Composition: Computational Thinking in Music. Communications of the ACM 54(7): 58–67.
Ferneyhough, B. 1981. Lemma-Icon-Epigram. London: Peters Edition.
Handelman, E. and Sigler, A. 2012. Automatic Orchestration for Automatic Composition. AIIDE Workshop, AAAI Technical Report WS-12-16.
Hidalgo, A. and Ipinza, C. 2016. Cartografiar lo intangible, El Sonido. Ponencia VI Congreso Internacional de Expresión Gráfica, Córdoba, Argentina.
Huron, D. 2016. Voice Leading: The Science behind a Musical Art. Cambridge, MA: MIT Press.
Iddon, M. and Thomas, P. 2020. John Cage’s Concert for Piano and Orchestra. Oxford: Oxford University Press.
Ji, S., Luo, J. and Yang, X. 2020. A Comprehensive Survey on Deep Music Generation: Multi-level Representations, Algorithms, Evaluations, and Future Directions. arXiv:2011.06801 [cs.SD].
Johnson, P. and Wigley, M. 1988. Deconstructivist Architecture. New York: Museum of Modern Art.
Koblyakov, L. 1993. Pierre Boulez: A World of Harmony. Abingdon: Routledge.
Lehman, F. 2018. Hollywood Harmony: Musical Wonder and the Sound of Cinema. Oxford: Oxford University Press.
Lerdahl, F. 1996. A Generative Theory of Tonal Music. Cambridge, MA: MIT Press.
Lewin, D. 1987. Generalized Musical Intervals and Transformations. New Haven, CT: Yale University Press.
Lewis, G. E. 2000. Too Many Notes: Computers, Complexity and Culture in Voyager. Leonardo Music Journal 10: 33–9.
Lewis, J. 1988. Creation by Refinement: A Creativity Paradigm for Gradient Descent Learning Networks. IEEE 1988 International Conference on Neural Networks, New York.
Lewis, J. and Todd, P. M. 1988. A Sequential Network Design for Musical Application. In Touretzky, D., Hinton, G. and Sejnowski, T. (eds.) Proceedings of the Connectionist Models Summer School, 76–84.
Maestre, E., Ramírez, R., Kersten, S. and Serra, X. 2009. Expressive Concatenative Synthesis by Reusing Samples from Real Performance Recordings. Computer Music Journal 33(4): 23–42.
Magris, C. 2001. Danube. New York: Vintage.
Maresz, Y. 2013. On Computer-Assisted Orchestration. Contemporary Music Review 32(1): 99–109.
Messiaen, O. 2000. Technique de mon langage musical. Paris: Alphonse Leduc.
Murail, T. 2004. Modèles et artifices. Strasbourg: Presses Universitaires de Strasbourg.
Murail, T. 2005. Spectra and Sprites. Contemporary Music Review 24(2/3): 137–47.
Nika, J., Déguernel, K., Chemla-Romeu-Santos, A. and Vincent, E. 2017. DYCI2 Agents: Merging the Free, Reactive, and Scenario-Based Music Generation Paradigms. International Computer Music Conference Proceedings, Shanghai, China.
Pachet, F., Roy, P., Moreira, J. and d’Inverno, M. 2013. Reflexive Loopers for Solo Musical Improvisation. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2205–8.
de Rauglaudre, D. 2017. Formal Proof of Banach–Tarski Paradox. ASDD-AlmaDL 10(1): 37–49.
Roads, C. 1985. Research in Music and Artificial Intelligence. ACM Computing Surveys 17(2): 163–90.
Schwarz, D. 2004. Data-Driven Concatenative Sound Synthesis. PhD diss., Université de Paris 6 – Pierre et Marie Curie.
Schwarz, D. 2005. Current Research in Concatenative Sound Synthesis. Proceedings of the International Computer Music Conference (ICMC), Barcelona.
Solomos, M. 2015. Iannis Xenakis, la musique électroacoustique: The Electroacoustic Music. Paris: Éditions L’Harmattan.
Subotnik, R. R. 1995. Deconstructive Variations: Music and Reason in Western Society. Minneapolis, MN: University of Minnesota Press.
Thom, B. 2000. BoB: An Interactive Improvisational Music Companion. Proceedings of the Fourth International Conference on Autonomous Agents, AGENTS ’00, Barcelona, 309–16.
Tymoczko, D. 2011. A Geometry of Music. Oxford: Oxford University Press.
Vitale, F. 2019. The Last Fortress of Metaphysics: Jacques Derrida and the Deconstruction of Architecture. Albany, NY: State University of New York Press.
Wigley, M. 1993. The Architecture of Deconstruction: Derrida’s Haunt. Cambridge, MA: MIT Press.
Xenakis, I. 1992. Formalized Music: Thought and Mathematics in Composition. Hillsdale, NY: Pendragon Press.
Young, M. 2007. NN Music: Improvising with a ‘Living’ Computer. In Kronland-Martinet, R., Ystad, S. and Jensen, K. (eds.) Computer Music Modeling and Retrieval: Sense of Sounds. Berlin: Springer, 337–50.
Zils, A. and Pachet, F. 2001. Musical Mosaicing. Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-01), Limerick, Ireland.
Figure 1. (left) Vladimir Tatlin, Project for a Monument to the Third International, 1919 (The Museum of Modern Art/Licensed by SCALA/Art Resource, NY); (centre) Constructive Theatrical Set by Iakov Chernikhov (1889–1951) (figure reproduced from his book The Construction of Architectural and Machine Forms (Chernikhov 1931)); (right) Micromegas, drawings (1979) by Daniel Libeskind (Image Courtesy © Daniel Libeskind). The literature indicates that the term ‘deconstructivist architecture’ (inspired by early Russian constructivism and the avant-garde of the twentieth century) was first popularised in the MOMA exhibition catalogue Deconstructivist Architecture (Johnson and Wigley 1988).

Figure 2. (left) Fontana Mix by John Cage (1958) (© 1958 by Henmar Press Inc. Permission by C. F. Peters Corporation. All rights reserved); (centre-left) John Cage, II from Mushroom Book, 1972 (image courtesy © John Cage Trust; digital image © The Museum of Modern Art/Licensed by SCALA/Art Resource, NY); (centre-right) Introduction: Rhizome – From Five Piano Pieces for David Tudor by Sylvano Bussotti (© 1959 Casa Ricordi Srl, a division of Universal Music Publishing Classics & Screen. International Copyright Secured. All Rights Reserved. Reprinted by permission of Hal Leonard Europe BV (Italy)); (right) Lemma-Icon-Epigram by Brian Ferneyhough (© 1981 by Hinrichsen Edition, Peters Edition Limited. Permission by C. F. Peters Corporation. All rights reserved).

Figure 3. Sketch for the work Morphogenesis (2022): reconstruction of Bee9 using a segmentation window of average length μ = 2.0512s, steered using Boulez’s Notations for orchestra. The result is reminiscent of the pointillism of early post-Webernian serial music.
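The steering procedure behind this reconstruction can be illustrated schematically. In the Python sketch below everything is hypothetical: random vectors stand in for the segment descriptors of the source (Bee9) and the steering piece (Notations), and a greedy nearest-segment match stands in for the actual selection method; the real system operates on audio/MIDI material segmented as described in the caption.

```python
import numpy as np

# Toy steered concatenation: for each frame of the steering piece, pick the
# source segment whose (fabricated) feature vector is closest to it.
rng = np.random.default_rng(1)

source_feats = rng.random((50, 12))   # features of 50 source segments (stand-in)
target_feats = rng.random((20, 12))   # features of the steering piece (stand-in)

def steer(source, target):
    """For each target frame, greedily choose the closest source segment."""
    out = []
    for t in target:
        d = np.linalg.norm(source - t, axis=1)
        out.append(int(np.argmin(d)))
    return out

sequence = steer(source_feats, target_feats)
print(sequence)  # indices of source segments to concatenate, in target order
```

The output sequence is then rendered by concatenating the chosen source segments in order, which is why the result carries the contour of the steering piece but the material of the source.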

Figure 4. (a) An excerpt of the piano part (MIDI layout) of the first movement of Rachmaninov’s Piano Concerto No. 2 (bb. 38–42); (b) a reconstruction of the piano part of Rachmaninov’s Piano Concerto No. 2, steered using Boulez’s Notations for orchestra. Multitrack polyphonic MIDI files are reconstructed using neural networks (for simplicity, only the piano parts are shown here). Syncopations and disjunctions are introduced in the reconstruction. While (a) is clearly tonal (post-romantic), (b) is clearly modern: a reconstruction paradox occurs when a reconstruction from the parts of a whole no longer yields the same properties as the whole.
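The reconstruction paradox stated at the end of this caption can be demonstrated with a toy pitch-class count: if every segment is drawn from a different major key, each segment is locally tonal, yet the reconstructed whole has a perfectly flat pitch-class profile, so no single global key emerges. The Python sketch below uses idealised scale segments, not the actual Rach2 material.

```python
from collections import Counter

# Each 'segment' is locally tonal: all of its pitch classes fit one major
# scale.  The segments are drawn from all 12 keys around the circle of
# fifths, so the reconstructed whole covers every pitch class equally.
MAJOR = [0, 2, 4, 5, 7, 9, 11]          # major-scale pitch classes, relative to tonic

def segment_in_key(tonic):
    """A short 'tonal' segment: the scale degrees of one major key."""
    return [(tonic + step) % 12 for step in MAJOR]

tonics = [(7 * k) % 12 for k in range(12)]    # all 12 major keys, ordered by fifths
whole = [pc for t in tonics for pc in segment_in_key(t)]

hist = Counter(whole)
print(sorted(hist.values()))   # → [7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7]
```

Every pitch class occurs exactly seven times (each belongs to seven major scales), so the aggregate profile is flat: the parts are tonal, the whole is not.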

Figure 5. (left) A generative transformation model is learned from transformation matrices (i.e., similarity distance matrices); (centre) a two-dimensional Euclidean space embedding obtained with the multidimensional scaling (MDS) algorithm; (right) the network community graphs for (a) segments generated from Schoenberg’s Klavierstücke, op. 11, no. 3 and (b) segments generated from Rach2.
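Classical multidimensional scaling, as invoked in this caption, reduces a distance matrix to low-dimensional Euclidean coordinates via double centring and an eigendecomposition. The NumPy sketch below implements textbook classical MDS; the toy distance matrix is hypothetical (four points on a line), whereas the article's matrices come from learned musical similarity distances.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed a distance matrix D into `dim` Euclidean dimensions (classical MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centred squared distances
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]              # keep the largest ones
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Four points on a line with distances 0-1-2-3: the embedding recovers them
# up to rotation/reflection, so pairwise distances match the input exactly.
D = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
X = classical_mds(D)
recovered = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(np.allclose(recovered, D, atol=1e-6))  # → True
```

With the article's chordal distance measures in place of the toy matrix, the same routine yields the two-dimensional maps shown in the centre panel.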

Figure 6. (a) A random walk based on Schoenberg’s Klavierstücke, op. 11, no. 3; (b) a random walk based on a piano reduction of Rach2. Both random walks were steered by chordal distance measures (e.g., chromagrams, Estrada distance, Costère distance and Chew distance).
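A steered random walk of the kind shown in Figure 6 can be sketched as a weighted walk on a chord graph, where the probability of moving to a chord decreases with its distance from the current one. The toy below uses a circle-of-fifths distance as a stand-in for the chromagram, Estrada, Costère and Chew distances named in the caption; chord labels and weights are illustrative only.

```python
import random

# Chord roots ordered by fifths; the distance between two chords is the
# shortest number of steps around this circle (a toy stand-in measure).
CHORDS = ["C", "G", "D", "A", "E", "B", "F#", "C#", "Ab", "Eb", "Bb", "F"]

def distance(i, j):
    """Circle-of-fifths distance between chord indices."""
    d = abs(i - j) % 12
    return min(d, 12 - d)

def step(i, rng):
    """Move to another chord with probability inversely related to distance."""
    others = [j for j in range(12) if j != i]
    weights = [1.0 / distance(i, j) for j in others]
    return rng.choices(others, weights=weights, k=1)[0]

rng = random.Random(0)
walk = [0]                      # start on C
for _ in range(8):
    walk.append(step(walk[-1], rng))
print([CHORDS[i] for i in walk])
```

Because nearby chords are weighted more heavily, the walk tends to produce smooth enchaînements; replacing the distance function changes the harmonic character of the walk, which is how the different measures in the caption steer the result.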

Figure 7. Excerpt from the cantata for choir and piano The Fall of Rome, 1. The Last Harangue (2022). The source material used for training the model was ‘O Fortuna’ from Carl Orff’s Carmina Burana. From the model, a random walk was generated and the result was orchestrated using the software Orchidea developed at IRCAM (Maresz 2013).

Figure 8. Algorithms 1–4 discussed in this article.

Figure 9. (a) Histogram showing the index of the chosen segments (i.e., such an index is the label of some set of notes/channels/onsets/durations with duplicate notes deleted) and the number of such indices returned by a recursive neural network trained on a fine-grained segmentation (average segment size μ = 0.1374s), in which each new onset generated a segment; (b) histogram for a coarse segmentation (μ = 1.0420s); (c) histogram for a coarser segmentation (μ = 2.0512s). In coarse segmentations a given segment could contain multiple onsets, that is, a continuous succession of events from the source material (i.e., Bee9), effectively increasing continuity. Finer segmentations yielded less diversity in the segments chosen and more repetition; coarser segmentations yielded more diversity, with the coarsest segmentation producing a Gaussian mixture. These neural networks were trained for the piece Morphogenesis (2022).
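The granularity effect described in this caption hinges on how segments are cut. A minimal Python sketch (with fabricated onset times, not the Bee9 data): cutting at every onset gives many short segments, while grouping onsets into fixed windows gives a few long segments, each containing a continuous run of source events.

```python
# Fabricated onset times in seconds, evenly spaced ~0.14 s apart, roughly
# matching the fine-grained average segment size reported in Figure 9(a).
onsets = [round(0.14 * k, 2) for k in range(30)]

# Fine segmentation: one segment per inter-onset interval.
fine = [b - a for a, b in zip(onsets, onsets[1:])]

# Coarse segmentation: group onsets into fixed 1.0 s windows, so each
# segment spans a continuous succession of source events.
window = 1.0
coarse = {}
for t in onsets:
    coarse.setdefault(int(t // window), []).append(t)

mean_fine = sum(fine) / len(fine)
print(round(mean_fine, 3), len(coarse))   # many short segments vs. few long windows
```

The fine cut yields 29 segments of ~0.14 s each, while the coarse cut yields only 5 windows; fewer, longer segments carry more internal continuity, matching the caption's observation that coarser segmentations increase continuity and diversity.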
