
REFLECTIONS ON CYBORG COLLABORATIONS: CROSS-DISCIPLINARY COLLABORATIVE PRACTICE IN TECHNOLOGICALLY-FOCUSED CONTEMPORARY MUSIC

Published online by Cambridge University Press: 01 April 2024


Abstract

Creating new works combining live musicians with new technologies provides both opportunities and challenges. The Cyborg Soloists research project has commissioned and managed the creation of 46 new works of this type, assembling teams of composers, performers, researchers and technology partners from industry. The majority of these collaborations have been smooth-running and fruitful, but a few have demonstrated complications. This article critically evaluates collaborative methods and methodologies used in the project so far, presenting five case studies involving different types of collaborative work, and exploring the range of professional relationships, the need for different types of expertise within the team and the way technology can act as both a creative catalyst and a source of creative resistance. The conclusions are intended as a toolkit – pragmatic guidelines to inform future practice – and are aimed at artists, technological collaborators, and commissioners and organisations who facilitate these types of creative collaborations.

Type
RESEARCH ARTICLE
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Introduction

As the integration of technology into contemporary music practice and research has become increasingly prevalent, various challenges have emerged for collaborating artists and technologists. The potential for both to benefit from each other's expertise is clear, but the question for such cross-disciplinary collaborations is how to design the specific conditions that produce insightful and innovative work. Our purpose in this article is to outline instances of best practice and identify complications that might arise in technology-based, creative collaborations in the domain of contemporary music. We outline the context and methodology implemented in the Cyborg Soloists research project and present five case studies examining different types of collaborative projects before noting our conclusions. This article is intended as a toolkit, providing pragmatic guidelines to inform future practice, and is addressed to those who may be responsible for implementing the formal structures of such collaborations, to collaborators in such projects and to people engaged in technology, the creative arts, or collaboration in general.

Context

Cyborg Soloists

The Cyborg Soloists research project forms the basis for this article and has facilitated many successful collaborations combining music and new technologies. The project is a UKRI-funded Future Leaders Fellowship led by Dr Zubin Kanga at the Department of Music, Royal Holloway, University of London. It seeks to explore interdisciplinary interactions between music, other arts and new digital technologies, and is doing so through an extensive programme of commissioned collaborations and associated published and technical outputs.

Cyborg Soloists primarily operates within the field loosely described as contemporary classical music. To scrutinise the role and effect of such technologies within creative practice, the project assembles teams of composers, performers, researchers and technology partners from industry to create new musical works that utilise innovative hardware or software. This is undertaken through formal commissions, culminating in a live premiere and recording of the new work. The technology partner's role is both to advise on its product's functionality and adaptability, based on a composer's or performer's needs, and to receive feedback from the musicians, which may prompt the development of new solutions and tools as additional research outputs. Creative workshops and meetings with collaborative teams are documented for research purposes, providing a rich resource for understanding how collaborative projects involving participants from diverse fields may prosper.

At the time of writing, Cyborg Soloists has facilitated 46 commissions, most of which have resulted in fruitful partnerships, premieres of exciting new artworks and follow-up performances. They have led to numerous research insights and impactful outputs. A few of these collaborative projects have also demonstrated complications; these have been equally useful for developing our understanding of this field of work. The Cyborg Soloists project therefore functions as an ideal environment within which to identify best practice and potential challenges for technologically focused artistic collaborations. Accordingly, the methods and methodologies we adopt – which parallel those of other research projects and commissioning schemes that focus on music and new technologies[1] – require critical evaluation. The insights which follow are particular to this context, but may also be applicable to a range of collaborative creative projects and situations that integrate new technologies.

Technology in Contemporary Music

The ubiquitous presence and role of technology within contemporary musical practice is well documented and vigorously debated in academic conference series[2] and publications.[3] Whether in the form of recording, amplification, retro hardware or new digital applications, technology is embedded in the creation, performance and reception of such music.

The use of technology within creative musical practice can afford new forms of artistic expression and knowledge and can meaningfully comment or reflect upon the presence of technology within our broader lives. However, there are also many instances where technology has been used purely for its own novelty: tech for tech's sake. The Cyborg Soloists project examines how creative practice can critically engage with new technologies, and how and when this interaction can lead to new artistic practices, new technological innovations and powerful new experiences for audiences.

There are numerous barriers and challenges inherent in this field. There are barriers for artists working with new technologies, both financial and in the expertise required to use these new digital instruments and tools. There are challenges for technology researchers and industry partners who may be testing and experimenting with hardware and software outside the laboratory and beyond their original use cases and intended users. The Cyborg Soloists project addresses such issues by working with musicians who have ambitious creative visions for the use of technology in their work and, regardless of their level of expertise, pairing them with technological partners and researchers who can facilitate these aims and gain new insights into their own technology from the results. The project also seeks to broaden understanding of the technology used and the creative processes undertaken through publications, public presentations, conferences, concerts, concert films, media interviews, public workshops, our website and blog[4] and other public engagement. The project prioritises the sharing of creative and technological knowledge, including uploading many of the patches created to GitHub.

The collaborative artistic process may also be supported by using new technologies. The technology can provide a central point for collaborators’ attention, and a fruitful source of collaborative inspiration and productive resistance through which musicians can discover new creative approaches. However, in some circumstances, focusing on technology can also amplify the conditions for conflict within a collaboration.

Collaboration and Technological Sources of Creative Inspiration and Resistance

Collaboration – in all its varied forms – is a key factor in many contemporary musical practices.[5] In any collaboration between a composer and interpreter, between two or more human or non-human parties, there are constraints or points of resistance. General constraints may relate to, for instance, the time available, restrictions to budget or resources, or geographical dispersion. Specific constraints set exact requirements or limitations on the final work. For example, a funding body or promoter may require the piece to be a specific length, to approach a certain theme or engage with a selected third party. A performer may set instrumental constraints, such as specifying the choice of alternate or auxiliary instruments, or requesting certain playing techniques for the composer to focus on.

These constraints may also relate to the technology used. This might include the hardware and software used in composition and/or performance, the number or type of speakers used, or the lighting and projection setup. These factors are often contingent on the resources and support available to the project, the equipment available in the venue and the technological skillset of the performer or composer. Such constraints are often helpful; creative resistance can play a vital role in the creation of innovative work.[6] They construct edges to the composer's sandbox, guiding creative decision-making and providing practical confines, which many practitioners welcome. Unlimited freedom, an endless blank page, can be overwhelming.[7]

Sometimes technological constraints affect the process of creative collaboration. They might require the composer to make important decisions about the finished product before work and collaboration on the piece has begun. They might ask the practitioners to engage with collaborative partners outside the composer-performer pairing or outside the field entirely. They might challenge what constitutes productive collaboration between all parties. Some constraints can imply a tacit expectation of technological proficiency, generally applicable to any new form of software or hardware. They may result in a performance situation in which the composer's presence is necessary every time the piece is performed, complicating future programming and touring.

Case Studies

Case Study 1. Industry Partner as Enabler: Ben Jameson, Harry Matthews and Vochlea

Dubler 2 is an audio-to-MIDI app developed by Vochlea.[8] It is designed to translate an audio signal into MIDI data, offering the ability to interpret different vocal vowel sounds, automatically shift pitch data to single pitches and chords within a specified key and comprehend complex beats formed from unique vocal sounds. Dubler 2 is standalone software which integrates with Digital Audio Workstations (DAWs) and MIDI-capable programs. It has been designed primarily for vocal input, particularly within popular music studio settings. Cyborg Soloists commissioned composer-guitarist Ben Jameson and composer-pianist Harry Matthews to co-create a new work in collaboration with Vochlea. Working in the experimental classical tradition, Jameson and Matthews created a live concert work, Aeolian Fantasy (2022), for guitar, piano, tabletop fans and electronics.

Jameson and Matthews’ initial idea was to ‘misuse’ Dubler 2 by feeding the app various field recordings and exploring the resulting digital artefacts. They discussed this idea with Vochlea's Community Manager, Liam Cutler, in a meeting to understand the software's optimum calibration for their needs. Cutler explained various parameters[9] to demonstrate the spectrum of consistency and instability with which the app could respond to field recordings. Jameson and Matthews were not yet certain which field recordings to use, and Cutler's demonstration equipped them with a flexible framework within which to explore and attune the app. This collaborative interaction enabled Jameson and Matthews to hone their expertise with the software and to delay consequential decisions on how or where they might apply this knowledge in the creative process and output.

After some experimentation, the pair configured the Dubler 2 app to respond to Matthews’ field recordings of wind sounds and to tabletop fans, used to create similar sounds live in performance. Jameson's interest in microtonal tuning systems led him to explore ways to push the software beyond 12-tone equal temperament. While Dubler 2 offers the possibility of microtonal tuning using its pitch-bend capabilities, this method demands precise input pitch. Instead, Jameson retuned his DAW and added the FB-3300 instrument plug-in to access aggregate tuning systems with more than 12 pitches per octave.[10] This allowed a narrow configuration on the Dubler 2 app to achieve consistent responses to the varied frequency content of the wind sounds, while outputting a wider harmonic palette. Here, the technology partner's framework allowed the musicians to conduct independent experiments in which their individual musical interests and technological skills could create a productive response to the affordances and resistances of the technology.
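To make the pitch-bend route concrete, the sketch below (in Python, using the mido library) shows the standard arithmetic for rendering an arbitrary frequency as a 12-TET MIDI note plus a 14-bit pitch-bend offset. It illustrates the general technique rather than Jameson and Matthews' actual setup; the output port name and the ±2-semitone bend range are assumptions, and the one-bend-per-channel limitation visible here is one reason the method demands precise, effectively monophonic input pitch.

```python
import math
import mido  # pip install mido python-rtmidi

BEND_RANGE = 2.0   # assumed synth pitch-bend range in semitones (+/-)
BEND_MAX = 8191    # mido pitchwheel values run from -8192 to 8191

def freq_to_note_and_bend(freq_hz: float) -> tuple[int, int]:
    """Express a frequency as the nearest 12-TET MIDI note plus a bend offset."""
    midi_float = 69 + 12 * math.log2(freq_hz / 440.0)  # fractional MIDI pitch
    note = round(midi_float)
    residual = midi_float - note                       # within +/-0.5 semitones
    return note, round(residual / BEND_RANGE * BEND_MAX)

# A quarter-tone above A4 (~452.9 Hz): note 70, bent a quarter-tone flat.
note, bend = freq_to_note_and_bend(440 * 2 ** (1 / 24))

with mido.open_output('IAC Driver Bus 1') as port:     # assumed port name
    # Pitch bend applies per channel, so this is effectively monophonic.
    port.send(mido.Message('pitchwheel', pitch=bend))
    port.send(mido.Message('note_on', note=note, velocity=100))
```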

While the interaction between Jameson and Matthews and Vochlea was a light-touch collaboration, the project led to fruitful engagements and benefits after the work's completion. The resulting piece was artistically engaging and demonstrated the potential of the software in performance situations beyond its original intended use with vocalists. By their reapplication of vocally oriented software, Jameson and Matthews created beauty from the sound of wind hitting a microphone, a sound which many field recordists consider ugly and a flaw in a recording. A large contingent of Vochlea staff attended the work's premiere, demonstrating their curiosity and investment in the project, and Vochlea have asked Jameson and Matthews to create a demo video of their setup for the company's marketing and tutorial platforms, informing future practice in utilising the software.

Case Study 2. Using External Expertise: Alexander Schubert and ANT Neuro

ANT Neuro are a multinational company leading the field in EEG brain sensor technology.[11] EEG caps have been used in experiments exploring the brain responses of musicians and audiences, as well as to sonify or visualise brain data as part of musical works,[12] but Alexander Schubert's work for Zubin Kanga, Steady State, is the first – to our knowledge – to use them for conscious decision-based brain control of sounds for a musical performance.

Many factors aided Schubert and Kanga's collaboration, despite it venturing into new creative and technological territory. Alexander Schubert has particular expertise in interdisciplinary work and in works integrating new media and digital culture, including CODEC ERROR (2017), ASTERISM (2021) and Sleep Laboratory (2022); he also studied bioinformatics as an undergraduate student, gaining vital insights into brain sensors and their capabilities.[13] The pair had previously collaborated on WIKI-PIANO.NET (2018), a work that utilised an internet-based score, allowing anyone online to contribute to the score and its multimedia elements. The long-term success of that piece contributed to a fruitful collaborative relationship between Kanga and Schubert and to their fluent and efficient communication when dealing with the challenges of a new interdisciplinary work for Cyborg Soloists.[14] Their previous collaboration meant that they were both aware of the large amount of artistic and technological development time that this groundbreaking project would require, as well as sharing a willingness to dedicate this time, including the acquisition of new skills to facilitate the design of cutting-edge technology for use as a musical instrument.

Unlike digital instruments that are designed for easy integration with DAWs, the EEG caps presented challenges that had to be overcome before they could be used as musical controllers. The first was devising a mechanism for conscious brain control, in which decisions by the user can be detected by the sensors. Brainwave sonification has been explored since Alvin Lucier's Music for Solo Performer (1965), but conscious decision-based control is much more difficult to achieve, given the complexity of the data. The other challenge was translating the EEG data into a format that could be used by common music software.

To overcome these challenges, a Brain-Computer Interaction (BCI) researcher recommended by ANT Neuro, Dr Serafeim Perdikis, Lecturer at the Neural Engineering and BCI Laboratory at the University of Essex, joined the collaboration. Across five months of testing and discussions with Kanga and Schubert to select the best approaches, Perdikis created a series of applications that allow direct brain control of music software from the EEG caps, based on a steady-state visual evoked potential (SSVEP) system: a flashing light projected into a person's eyes will create a signal of the same frequency in the occipital lobe of their brain, where visual stimuli are processed.[15] In a medical context, this type of system is used to allow people with disabilities to control prosthetic limbs, motorised wheelchairs or speech software.[16]

In this case the ‘steady-state’ system is used for control of music software by creating distinct regions on a screen, each flashing at a different frequency. Looking from one region to another acts like pressing different switches on a digital controller, and using live-generated visuals creates the possibility of a feedback loop, with the brain as the central component in this audiovisual system, both responding to and influencing the visual stimuli. Perdikis also managed the conversion of raw EEG data into OSC (Open Sound Control) messages, which can be read by music software such as Max/MSP. With successive updates after workshops between Schubert and Kanga, interleaved with frequent online meetings with Perdikis, the team was able to use the brain sensor cap as a basic musical instrument or controller, with four frequencies used for control – the equivalent of four switches on a digital controller.
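As an illustration of the signal path described above – not Perdikis' applications, whose classifiers are far more robust – the following Python sketch picks whichever of four assumed flicker frequencies dominates a window of occipital EEG and forwards the result to music software as an OSC message. The OSC address, port, sampling rate and target frequencies are all assumptions.

```python
import numpy as np
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

FS = 512                                # assumed EEG sampling rate in Hz
TARGETS = [8.0, 10.0, 12.0, 15.0]       # assumed flicker rates of the four regions
client = SimpleUDPClient('127.0.0.1', 7400)   # e.g., a [udpreceive 7400] in Max/MSP

def classify_ssvep(window: np.ndarray) -> int:
    """Return the index of the target frequency with the strongest response."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in TARGETS]
    return int(np.argmax(powers))

# Two seconds of simulated occipital EEG: a 12 Hz steady-state response in noise.
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * np.random.randn(len(t))

switch = classify_ssvep(eeg)                  # -> 2, i.e., the 12 Hz region
client.send_message('/brain/switch', switch)  # one of the four 'switches'
```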

The groundbreaking use of medical sensors as a musical instrument in Steady State has required the combination of a high degree of technical proficiency and artistic experience from both composer and performer, the input of a leading researcher in BCI, high-level equipment from an industry partner and time to experiment with the equipment. The three collaborators’ specific expertise was essential: Perdikis’ knowledge facilitated the translation of data from the EEG cap for use in music software, Schubert's experience in multimedia work and bioinformatics assisted him in the compositional implementation of this converted data and Kanga's experience in digital instruments made possible the calibration and virtuosity required to use the new brain sensor ‘instrument’ in a complex live performance. Funding and time for all participants has been crucial to the work's development over two years of experimentation, development, workshops and composition. Success in this field depends on ideal conditions and collaborators, although our aim and hope is that brainwave control of music software could become more accessible in the near future.

Case Study 3. Varying Technical Needs for the Same Technology: Robin Haigh/Oliver Leith and TouchKeys

The TouchKeys technology was created by Professor Andrew McPherson from research in his Augmented Instruments Laboratory (Queen Mary University of London/Imperial College London).[17] The technology consists of a set of touch sensors that can be applied to any keyboard – from a grand piano to MIDI keyboards of different sizes – allowing for control through movement across the surface of the keys. One of the more obvious applications for this type of control – which is also the first available preset – is allowing for vibrato and pitch-bending through the movement of the fingers across the key surfaces, but these sensors can be mapped to a multitude of effects or controls. Although this technology has been made commercially available, it is at a relatively experimental stage, with idiosyncrasies, challenges and new artistic applications still being identified through new, explorative projects.

Two recent projects written for Kanga as a solo performer show contrasting approaches to this technology, demonstrating how different compositional aims may require different levels of technical proficiency to achieve. Oliver Leith's Vicentino, love you – studies for keyboard (2023) uses a preset within the accompanying app that allows for microtonal tuning systems, dividing the keys up into halves or thirds, and Leith used this to facilitate quarter-tone tunings in a keyboard work. Kanga demonstrated the different ways the TouchKeys instrument can function so that Leith could develop a broad understanding of the device. Leith then chose a number of digital synthesiser sounds in Ableton Live that would respond to this microtonal tuning, while at the same time feeding the MIDI into an analogue synthesiser (a Sequential Prophet Rev2) which renders the notes without these microtonal inflections. This blend of sounds and tunings allowed him to create a series of studies using sounds that resemble a brass ensemble, with variable tunings between instruments.

This work demonstrates an imaginative use of the preset functionality of the instrument as well as standard software and hardware (Ableton Live, and MIDI control of an analogue synthesiser) to create a piece that nevertheless offers a unique playing approach and soundworld. Although relatively straightforward to achieve, the technical realisation of the work still required Kanga's knowledge of the TouchKeys software to set up the microtonal interaction, knowledge of MIDI control of synthesisers and the control of sounds in Ableton Live via the TouchKeys, and knowledge of how to hone these sounds into a variety of subtle variations of the central brass sound. Kanga spent additional time learning about and implementing the necessary changes through testing of the setup and communication with Professor McPherson, and the project's success was partly reliant on expertise that was present, or easily acquired, among these particular collaborators.

Robin Haigh's Morrow (2023) took a contrasting approach to the TouchKeys technology. Having heard that the TouchKeys keyboard could be used to control MIDI-controllable parameters other than pitch, Haigh wanted to control the speed and dynamic of a repeating piano sample using the vertical position of the performer's finger on the key. After some experimentation in Ableton Live, Kanga found it straightforward to map the vertical touch of the key to the sample dynamic. Mapping the speed of the sample was more complicated, especially if the aim was to produce a smooth tempo curve without distortion or digital artefacts when the sample was repeated, and Haigh and Kanga engaged composer Nicholas Moroz – a frequent collaborator and assistant of Kanga's – to write a Max for Live patch to accomplish this speed mapping for the repeating samples. Because Haigh wanted notes to be able to change speed independently, as well as some microtonal retuning, multiple rounds of experimentation with different versions of the patch were needed to debug the idiosyncrasies of the TouchKeys' polyphonic functionality.
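The core of the speed-mapping problem is easy to state: jumping a sample's playback rate straight to each new touch value produces audible steps, so the control signal must be interpolated, with independent state for each held note. The sketch below is a minimal Python stand-in for the idea behind Moroz's Max for Live patch, not the patch itself; the rate range, control-tick model and smoothing coefficient are assumptions.

```python
class NoteRateSmoother:
    """One smoothed playback-rate control per held note (polyphonic)."""

    def __init__(self, coeff: float = 0.05):
        self.coeff = coeff    # fraction of the remaining gap closed per tick
        self.targets = {}     # note number -> target playback rate
        self.rates = {}       # note number -> current (smoothed) rate

    def set_target(self, note: int, key_position: float) -> None:
        """Map touch position along the key (0.0-1.0) to a 0.5x-2x rate."""
        self.targets[note] = 0.5 + 1.5 * key_position

    def tick(self, note: int) -> float:
        """Advance one control tick; feed the result to the sample player."""
        target = self.targets.get(note, 1.0)
        current = self.rates.get(note, 1.0)
        current += self.coeff * (target - current)   # one-pole smoothing
        self.rates[note] = current
        return current

# A key touched at its top ramps that note's rate smoothly towards 2x,
# while any other held notes keep their own independent speeds.
smoother = NoteRateSmoother()
smoother.set_target(note=60, key_position=1.0)
for _ in range(3):
    print(round(smoother.tick(60), 3))   # 1.05, 1.098, 1.143, ...
```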

Professor McPherson was included in the team's discussions to help Kanga and Moroz to understand how the TouchKeys processes and outputs MIDI data. Once Moroz's Max for Live patch was calibrated it worked reliably, allowing for the recording and performance of Haigh's work with the precise functionality the composer envisioned. One specific component of the technical solution required external assistance to achieve the composer's artistic ends. With further research and training Kanga could have achieved a similar result, but it was more efficient and reliable to bring in an expert to build this particular function. In cases such as this, a cost–benefit analysis needs to be done by the participants, weighing the costs of employing an external consultant against the time required for them to learn the necessary skills themselves. External factors of funding and available time will affect this decision and solutions may be different for a superficially similar collaboration around the same technology.

Case Study 4. Assumptions Regarding Roles, Skills and Time: An Ensemble Collaboration with AirSticks

Sometimes, even with a project configuration that looks great on paper, things can still go wrong in a collaboration. The problems may be nobody's fault, but may still be so disruptive that the project cannot be completed. This case study examines such a project: a percussion concerto using the AirSticks that encountered difficulties because of a series of incorrect assumptions between the collaborators. The AirSticks[18] are a gestural musical instrument co-designed by percussionist and researcher Alon Ilsar and programmer-composer Mark Havryliv, who have developed working prototypes in collaboration with SensiLab at Monash University, Melbourne, Australia. Using the purpose-designed AirWave software, the AirSticks combine the physicality of drumming with computer programming, providing a wireless MIDI/OSC controller capable of triggering and manipulating sound and media events.[19] While not yet commercially available, these devices are being created for a wide range of end users, from professional percussionists to music education and community outreach programmes, increasing accessibility to music-making and coding.[20]
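By way of illustration only, the Python sketch below shows the general shape of receiving gesture data from a wireless OSC controller of this kind and dispatching it to sound-triggering handlers, using the python-osc library. The addresses '/airsticks/hit' and '/airsticks/tilt' are hypothetical placeholders, not AirWave's actual address space.

```python
from pythonosc.dispatcher import Dispatcher       # pip install python-osc
from pythonosc.osc_server import BlockingOSCUDPServer

def on_hit(address, velocity):
    # Stand-in for triggering a sample at the gesture's velocity.
    print(f'{address}: trigger sample, velocity {velocity}')

def on_tilt(address, x, y, z):
    # Stand-in for mapping continuous motion to a sound parameter.
    print(f'{address}: map x-tilt {x:.2f} to filter cutoff')

dispatcher = Dispatcher()
dispatcher.map('/airsticks/hit', on_hit)    # hypothetical address
dispatcher.map('/airsticks/tilt', on_tilt)  # hypothetical address

# Listen for gesture messages arriving over wireless UDP.
BlockingOSCUDPServer(('0.0.0.0', 9000), dispatcher).serve_forever()
```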

In 2021, Cyborg Soloists commissioned Composer A to write a new work for Ensemble B using the AirSticks.[21] After viewing demonstration videos of the devices, Composer A proposed a theatrical ‘percussion’ concerto for Ensemble B's percussionist, Percussionist C, in which sound, movement and lighting would combine with the live ensemble. The project began well. Composer A made it clear that she would require considerable technical assistance in using and programming the AirSticks. Percussionist C already had considerable expertise with programming technology and the use of MIDI instruments – as well as relevant skills and an aptitude for this type of work – and it was agreed that he would program the AirSticks, after a short online tutorial.

The partnership seemed well balanced, and a tacit assumption was made that Composer A and Percussionist C could then create the piece ‘in the room’ in a workshop setting, beginning with conceptual and musical ideas before working out how to program these for the AirSticks.

Composer A had a clear vision for how the AirSticks could be used to create a large-scale concerto, and Ilsar provided Percussionist C with a training session on the AirWave software, after which he began practising with the devices in preparation for the workshop. During the first workshop, however, it quickly became apparent that there was a gap in the assumptions made by the collaborating parties. It had been generally assumed that the training Percussionist C received from Ilsar would be sufficient for him to achieve Composer A's aims quickly, that it would be easy to start creating complex music using non-standard gesture functionality within a short period of time and that Composer A's vision would not require much programming beyond the standard presets of the AirWave software. These assumptions were not only misplaced but also contributed to a delay in communicating difficulties with what had appeared to be a relatively straightforward process.

The result was an unsuccessful and frustrating workshop. The AirSticks required more complex programming than anticipated to achieve what Composer A wanted in her piece. These difficulties were not beyond Percussionist C's capabilities, but he did not have the time in his schedule – nor did we have the budget for additional time – to develop the specific mappings for these devices using the AirWave software that would have been required to realise Composer A's ideas. Composer A had already explained that she did not have the skills to program the AirSticks and did not have the time to develop the required expertise without external assistance. Ilsar – based in Australia, so geographically distant and in a different timezone – was engaged with the demands of his own research on the hardware, so could not readily dedicate time to developing a bespoke patch for this project. Cyborg Soloists had not expected to need to budget for external expert time to build the electronics for the work. These gaps could have been bridged by integrating an external expert and reassessing the project vision, but the gap that had emerged between the vision and its realisation – and the logistics of adding another person to a team already working around challenging schedules – led to the project being closed by mutual agreement.

This project encouraged us to improve our management of Cyborg Soloists collaborations: to discuss assumptions and roles at an early stage, and to identify and budget for external expertise where it may be needed, even when it appears not to be required. It also demonstrated the need to identify when additional training might be useful, sometimes in addition to bringing in an external collaborator. No project exists in a vacuum: although these types of collaborations often benefit from skills gained on previous projects, gaining new skills may take more time than collaborators have available.

We are now planning new AirSticks projects, where the lessons learned in this first attempt are helping us to plan more appropriately and flexibly for the skills and time needed to work with these powerful and fascinating digital instruments.

Case Study 5. Expertise in Other Fields: Neil Luck, Chisato Minamimura and MiMU

A number of Zubin Kanga's collaborations for Cyborg Soloists have featured the MiMU sensor gloves.[22] These gloves allow for control via motion (using accelerometers and gravimeters) as well as gestures (using flexor sensors in the fingers). This combination allows for a wide variety of control using the hands without any further instruments, with the ability to map these to many different types of outputs via MiMU's application, Glover, which then functions like any other digital instrument or controller. The combination of these different sensors and the powerful mapping software allows for many possible ‘instruments’ to be created from the gloves, each with different functionality and sounds.
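The sketch below illustrates, in Python with the mido library, the kind of mapping a tool like Glover performs: a normalised finger-flexion value gates a note on and off, while wrist roll is scaled onto a MIDI continuous controller. It is a schematic stand-in, not MiMU's implementation; the sensor ranges, threshold, CC number and port name are all assumptions.

```python
import mido  # pip install mido python-rtmidi

FLEX_THRESHOLD = 0.6   # assumed normalised flexion marking a closed-hand posture

def glove_frame_to_midi(flex: float, roll_degrees: float, port, note: int = 60):
    """Map one frame of glove data to note and controller messages."""
    if flex > FLEX_THRESHOLD:
        # A real mapping would track note state to avoid retriggering per frame.
        port.send(mido.Message('note_on', note=note, velocity=100))
    else:
        port.send(mido.Message('note_off', note=note))
    # Scale wrist roll (-90 to +90 degrees) onto a 0-127 MIDI CC.
    cc = max(0, min(127, int((roll_degrees + 90) / 180 * 127)))
    port.send(mido.Message('control_change', control=1, value=cc))

with mido.open_output('IAC Driver Bus 1') as port:   # assumed port name
    glove_frame_to_midi(flex=0.8, roll_degrees=30.0, port=port)
```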

The Cyborg Soloists collaborations around the gloves also showcased the input of others into the mapping process. Kanga had already created or co-created a number of works using the MiMU gloves, including his own Steel on Bone (2021), before collaborating with composer Neil Luck on Whatever Weighs You Down (2022), and he was able to bring a level of expertise in the instrument's capabilities as well as experience of pushing them to their limits in a variety of musical contexts. Luck was also experienced in integrating gesture and performance art into his works, including 2018 (2016), Live Guy Dead Guy (2017–2018) and Imaginary Solutions (2019), and contributed this expertise, alongside a wealth of ideas about how to integrate the gloves into the work.

This work also featured a third collaborator who had a significant influence on the use of the gloves: the Deaf performance artist Chisato Minamimura. Minamimura appears on screen in a performance combining dance, acting and gesture, drawing on her experience of British Sign Language (BSL), even though BSL itself was not used in the final work. In a series of workshops with Luck and Kanga, Minamimura drew on her experience – as a BSL guide and in related forms of expression such as Sign Mime, as well as dance and performance art – to develop gestures and movements that Luck and Kanga could not have generated on their own. In places where these movements and gestures would be imitated by Kanga, using the gloves to trigger and control sounds and effects, her influence extended to the work-specific mapping of the gloves. Although Luck provided the sounds and Kanga did the mapping and programming of Glover and Ableton Live, Minamimura's influence facilitated a semiotic dimension to the gestures beyond the merely functional, creating an additional layer to the performance, and to this particular realisation of the digital instrument.[23]

This case study shows how the use of a digital instrument can be influenced by an external collaborator who does not have expertise in the technology being employed. Minamimura had no prior knowledge of the MiMU gloves, but her expertise in gestural performance opened up new ways of using the instrument, showing how gestures with symbolic or communicative intent can go beyond merely extracting the largest variety of sounds and functions from the instrument. Although it would have been easy for Luck and Kanga to assume that their expertise with the gloves required no further input, Minamimura's contribution was vital to the collaboration, demonstrating how the use of technology can be shaped by experts in other artistic fields and by the unique experience of disabled artists.

Conclusions: A Toolkit for Commissioners and Collaborators

In our case studies we have demonstrated aspects of technology-based creative collaborations that have required attention. Some of these may be common to collaborative practice of any sort; others may relate directly to the inclusion of technologies that may amplify or differentiate these issues. In conclusion, we propose some recommendations, based on our experience both in the case study projects described here and others from Cyborg Soloists, as a toolkit for commissioners and collaborators on musical projects involving technology.

Communicating Across Different Fields of Expertise

Communication is key to any successful collaboration, but particularly so when multiple partners are involved and when those partners come from different disciplines. Rather than communicating through individual pathways – for example, between composer and technology partner, or composer and performer – we suggest developing open networks of communication between all collaborators to benefit from everyone's expertise, minimise misunderstandings and reduce time spent relaying information from private conversations to others. Discuss preferred methods of communication and experiment to find the best collaborative methods for your project, especially when managing a geographically dispersed collaboration. Acknowledge that parties may have varied priorities, time schedules and expectations. Discuss areas of expertise and areas where external expert assistance could be valuable, and keep each other updated as the project develops. Encourage open and supportive communication, a sense of humour and clear, shared goals.

Articulating Roles

It can be easy to make assumptions about who is responsible for the nitty-gritty technical work on a project. These assumptions may be implicit and may not be based in a realistic understanding of collaborator proficiencies, time and resources. This can result in a single partner shouldering an unfair proportion of the work and/or being expected to be responsible for elements for which they lack the appropriate skills. To manage this, define roles for each collaborator at the start of the project and write these down for future reference if problems arise. Articulate how much and what kind of support the composer and performer(s) can expect from the technology partner and who will be responsible for any coding required that goes beyond the skillset and availability of the original participants. Both commissioners and collaborators need to be ready to discuss changes in approach if a point is reached during the project where roles need to be reconsidered.

Choosing Collaborators

Moving beyond software and hardware presets to innovate with technology usually requires high proficiency in skills such as live audio signal processing and manipulation, programming patches in Max/MSP, Pure Data and other languages, and MIDI-mapping within DAWs. Naturally, this produces a situation in which those who have such skills will be the ones to make such work, while those who don't will avoid such projects. This can lead to a disciplinary bifurcation between these two types of contemporary musicians, and individual collaborators should always be clear about their own levels of technological literacy and their need for support or help with coding if required. Information about skill levels among collaborators should be shared between the composer and performer(s) and with the technological partner and commissioner, who may also have expectations as to what each party can accomplish independently. Our experience suggests that the work of coding often falls to the composer, a potential problem if they do not already have the required skills. Communicating levels of existing skill – as well as levels of interest and time available for developing relevant additional skills – is therefore vital to ensuring that the right help can be given at the right time.

It is also important for artists and commissioners to communicate on their need or desire to work with non-technical collaborators and to acknowledge the creative and technological innovations that can result from diversifying collaborative teams. Case Study 5 is indicative of the valuable contributions made to Cyborg Soloists by disabled artists, and we encourage projects to consider artists or technologists whose varied life and creative experiences may suggest fresh approaches to the task at hand, as well as creating work that may be of use or interest to diverse communities.

Planning for Longevity

Collaborators and commissioners should focus on the long-term health of their partnership. Pieces should live beyond their premieres, collaborations should develop beyond individual pieces and technology should be reused and repurposed wherever possible. Completing a piece that uses new technology should feel like the start of the journey with that technology rather than the end. Collaborators may feel after a premiere that there were things they could have done differently and may also have many unrealised ideas for using the technology. Commissioners, artists and organisations should plan for multiple projects, using one particular device or software to develop a repertoire and a body of creative knowledge around it, as we do in Cyborg Soloists. Artists should also work with their collaborating technology partners to plan for future versions of the work as the technology evolves, because the original device or software used may become obsolete. Thus a work can have renewed life, as the technology changes and advances, just as works of the musical canon have survived and adapted to the technical evolution of acoustic instruments.

Keeping Priorities in Mind

No musical collaboration is ‘about’ technology. Yet when a project is focused on technology it is easy to get caught up in the minutiae of programming, wrangling cables and fixing bugs. Some collaborators revel in this work, while others can find it frustrating, putting strain on collaborative relationships. We suggest focusing on music-making as much as possible. Plan time to rehearse the music without the technology, or with only a minimal audiovisual setup, allowing the performers to rehearse as easily and efficiently as possible. Make time to discuss instrumental concerns, not just problems with the technology, and discuss individual and shared goals for the piece from a musical perspective. A productive rehearsal can raise the morale of a collaborative venture: sometimes stepping away from the technology for a few hours is the best way to achieve this.

If you are trying to innovate with existing technologies, work by other creators may already exist that can be utilised and developed. Consider limiting the amount of new work to be done by tweaking existing patches, presets, plug-ins and virtual instruments, rather than trying to make everything from scratch. Not every aspect of the work needs to be radically new for it to have a significant impact, and choosing to use, for example, a well-designed existing plug-in is likely to free up time to innovate with other aspects of the interaction between live musicians and the technology.

Preparing for Contingencies and Embracing Risk

Any project may encounter unforeseen difficulties requiring additional time, money or expertise. In our experience this is relatively common when working with new technologies. We suggest incorporating an opportunity for the technology to be reviewed and tested early in the process, ensuring that there is no mismatch between artist expertise, time allotted for the work and the demands of the technology. Something that seemed straightforward at first glance may prove more difficult when it is embedded in a workshop or performance setting.

Be prepared for the possibility that additional expertise may be needed. This may include reserving budget for such a purpose. If there is no additional scope for an expert to be added to the team, or to increase the time available to do the work, it is possible that the project may need to be cancelled or radically altered.

Performers always want to feel confident that the technology being used will work on stage. They need to invest time to test their own tech setup rigorously and to have a contingency plan for problems that cannot be easily resolved on stage. Collaborators may want to create risky or experimental musical-technological setups, but performers are the ones who confront this risk on stage and should be included in all discussions around risk, testing and troubleshooting.

In any collaboration there are always challenges. Complications – whether technological, emotional or logistical – arise as we mediate the myriad interests, motivations, opportunities and compromises of a project. Yet such challenges and the risk of failure are central to the process of creating innovative work. Combining novel creative approaches with new technologies necessitates confronting points of resistance, undertaking multistage problem-solving, testing and re-assessing, and fostering creative ambitions that may outrun current technical expertise. Such a complex process must be underpinned by strong and clear communication and an integrated approach to collaboration within creative teams. We encourage musical and technological collaborators to embrace risk together: ambitious aims, failure, and learning, or even creating, new skills are all central to discovering new approaches to the combination of art and technology.

Acknowledgments

This research was undertaken as part of the Cyborg Soloists project, supported by a UKRI Future Leaders Fellowship and Royal Holloway, University of London [grant number MR/T043059/1]. www.cyborgsoloists.com/. The data used in this research is not publicly available as participants have not consented to public sharing of these materials. Selected documentation is available on request to researchers. For enquiries, please contact the project PI, Zubin Kanga, via the Royal Holloway research portal: https://pure.royalholloway.ac.uk/en/persons/zubin-kanga.

References

[1] For more information about Cyborg Soloists, see: www.cyborgsoloists.com. For example, the RNCM Centre for Practice and Research in Science and Music (PRiSM); the AHRC Leadership Fellowship ‘The Garden of Forking Paths’ led by Dr Scott McLaughlin at the University of Leeds; and the ERC-funded ‘MusAI’ led by Professor Georgina Born at University College London.

[2] For example, the International Conference on New Interfaces for Musical Expression (NIME) and the Sound and Music Computing (SMC) Conference.

[3] Including numerous special issues of the Journal of New Music Research, Organised Sound and the Leonardo Music Journal (LMJ), among others; and various edited collections, including Nick Collins and Julio d'Escrivan, eds., The Cambridge Companion to Electronic Music, 2nd edition (Cambridge: Cambridge University Press, 2017); Nicholas Cook, Monique M. Ingalls and David Trippett, eds., The Cambridge Companion to Music in Digital Culture (Cambridge: Cambridge University Press, 2019); Russ Hepworth-Sawyer, Jay Hodgson, Justin Paterson and Rob Toulson, eds., Innovation in Music: Performance, Production, Technology and Business (New York: Routledge, 2019).

[5] See, for example: Sam Hayden and Luke Windsor, ‘Collaboration and the Composer: Case Studies from the End of the 20th Century’, TEMPO, 61, no. 240 (April 2007), pp. 28–39; Alan Taylor, ‘“Collaboration” in Contemporary Music: A Theoretical View’, Contemporary Music Review, 35, no. 6 (2016), pp. 562–78; Eric F. Clarke and Mark Doffman, eds., Distributed Creativity: Collaboration and Improvisation in Contemporary Music (New York: Oxford University Press, 2017); Lauren Redhead and Richard Glover, eds., Collaborative and Distributed Process in Contemporary Music-Making (Newcastle upon Tyne: Cambridge Scholars Publishing, 2018).

[6] David Gorton and Zubin Kanga, ‘Risky Business: Negotiating Virtuosity in the Collaborative Creation of Orfordness for Solo Piano’, in Music and/as Process, eds. Lauren Redhead and Vanessa Hawes (Newcastle upon Tyne: Cambridge Scholars Publishing), p. 97.

[7] Igor Stravinsky, Poetics of Music: In the Form of Six Lessons, trans. Arthur Knodel and Ingolf Dahl (Cambridge, MA: Harvard University Press, 1947), pp. 63–65.

[8] For further information and video demonstrations, see: https://vochlea.com/.

[9] Including Training Sensitivity (input gain), Velocity Response (input compression), Input Level, Stickiness (output responsiveness and smoothness), pitch-bend settings and microphone choices.

[10] Read more about this process at: Ben Jameson, ‘Using alternate tunings with Vochlea's Dubler 2’, Cyborg Soloists, 10 May 2023, www.cyborgsoloists.com/news-large/using-alternate-tunings-with-vochleas-dubler-2 (accessed 23 November 2023).

[11] For further information about the company, see: www.ant-neuro.com/about-ant.

[12] Experiments on brain responses to music include: Yuan-Pin Lin, Chi-Hong Wang, Tzyy-Ping Jung, Tien-Lin Wu, Shyh-Kang Jeng, Jeng-Ren Duann and Jyh-Horng Chen, ‘EEG-Based Emotion Recognition in Music Listening’, IEEE Transactions on Biomedical Engineering, 57, no. 7 (2010), pp. 1798–806; Helmuth Petsche, K. Linder, Peter Rappelsberger and Gerold Gruber, ‘The EEG: An Adequate Method to Concretize Brain Processes Elicited by Music’, Music Perception, 6, no. 2 (1988), pp. 133–59; Tiffany Field, Alex Martinez, Thomas Nawrocki and Jeffrey Pickens, ‘Music Shifts Frontal EEG in Depressed Adolescents’, Adolescence, 33, no. 129 (1998), pp. 109–16. As well as Alvin Lucier's Music for Solo Performer (1965), other works using sonification and/or visualisation of EEG data include Cliff Kerr's Consciousness (2019) and Emily Howard's DEVIANCE (2023); the approach is also explored in Shankha Sanyal, Sayan Nag, Archi Banerjee, Ranjan Sengupta and Dipak Ghosh, ‘Music of Brain and Music on Brain: A Novel EEG Sonification Approach’, Cognitive Neurodynamics, 13 (2019), pp. 13–31.

[13] Further discussion of Schubert's integration of multimedia in his work can be found in Zubin Kanga and Alexander Schubert, ‘Flaws in the Body and How We Work with Them: An Interview with Composer Alexander Schubert’, Contemporary Music Review, 35, no. 4–5 (2016), pp. 535–53.

[14] The changing dynamics of long-term collaborations are discussed further in Zubin Kanga, ‘Inside the Collaborative Process: Realising New Works for Solo Piano’ (Ph.D. dissertation, Royal Academy of Music, 2014).

[15] Danhua Zhu, Jordi Bieger, Gary Garcia Molina and Ronald M. Aarts, ‘A Survey of Stimulation Methods Used in SSVEP-Based BCIs’, Computational Intelligence and Neuroscience, 2010 (2010), Article ID 702357.

[16] The use of an SSVEP system with speech software is discussed in Xiaogang Chen, Yijun Wang, Masaki Nakanishi, Xiaorong Gao, Tzyy-Ping Jung and Shangkai Gao, ‘High-Speed Spelling with a Noninvasive Brain–Computer Interface’, Proceedings of the National Academy of Sciences, 112, no. 44 (2015), pp. E6058–67. The use of an SSVEP system for the control of prosthetics is discussed in Rui Li, Xiaodong Zhang, Hanzhe Li, Liming Zhang, Zhufeng Lu and Jiangcheng Chen, ‘An Approach for Brain-Controlled Prostheses Based on Scene Graph Steady-State Visual Evoked Potentials’, Brain Research, 1692 (2018), pp. 142–53. Use of this type of system for controlling a motorised wheelchair is discussed in Yuanqing Li, Jiahui Pan, Fei Wang and Zhuliang Yu, ‘A Hybrid BCI System Combining P300 and SSVEP and Its Application to Wheelchair Control’, IEEE Transactions on Biomedical Engineering, 60, no. 11 (2013), pp. 3156–66.

[17] For further information about the Laboratory, see: https://instrumentslab.org/research/.

[18] For further information, see: Sam Trollard, Alon Ilsar, Ciaran Frame, Jon McCormack and Elliot Wilson, ‘AirSticks 2.0: Instrument Design for Expressive Gestural Interaction’, NIME 2022, 28 June – 1 July 2022.

[19] Ibid.

[20] For more information on the use of AirSticks as an accessible instrument, see the presentation ‘AirSticks 2.0: instrument providing new possibilities to create music’, 8 December 2022, YouTube video, https://www.youtube.com/watch?v=7V1dnyBueQ0 (accessed 13 February 2024).

[21] Collaborating artists' names have been anonymised due to the sensitive nature of this case study.

[22] For further information and video demonstrations, see: www.mimugloves.com.

[23] This collaboration is explored in detail in Zubin Kanga, ‘The Cyborg Hand: Gesture, Technology, Disability and Interdisciplinarity in Whatever Weighs You Down’, Contemporary Music Review, 42, no. 3 (2023), pp. 319–38.