AUDIFICATION RESEARCH AT NOVARS


DNA & RNA Sequences as Systems for Music Composition by Mario Duarte

The aim of this PhD portfolio (awarded in 2017 at the University of Manchester, UK) is to explore another means of model-based sonification, specifically musification, or musical rendering. Here, DNA and RNA sequences are not simply auralised from a hard scientific perspective, with the unedited output presented as a musical composition. Instead, the enquiry asks how a composer can use sonification as a tool to generate and organise musical material. Could sonification/musification serve as a starting point for developing material and establishing a compositional system?

The research questions are: Is it possible to create musical structures from biological configurations? How can the composer realise meaningful mappings between biological processes and musical parameters? How can DNA models contribute to the concept of structure in a musical piece? Is it possible to create a compositional system from DNA and RNA sequences? Can creativity be captured by a set of rules (composers vs. example collectors)? Is mapping arbitrary, or is it subjectively based on personal compositional choices?

These research enquiries are investigated through the pieces in this portfolio and through the creation of a compositional DNA sequencer tool, in addition to this written commentary.
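
The commentary leaves the concrete mappings to the individual pieces, but a minimal sketch can illustrate the kind of nucleotide-to-pitch mapping such a compositional system might start from. The pitch and duration choices below are hypothetical illustrations, not the mapping used in Duarte's sequencer tool:

    # Hypothetical sketch: map a DNA string onto (MIDI pitch, duration) events.
    # A/C/G/T -> pitches and codon GC-content -> duration class are assumptions
    # chosen for demonstration only.

    PITCH_MAP = {"A": 57, "C": 60, "G": 67, "T": 62}   # MIDI note numbers
    DURATIONS = [0.25, 0.5, 1.0]                       # duration classes in beats

    def musify(sequence: str):
        """Turn a DNA string into (midi_pitch, duration_beats) note events."""
        notes = []
        codons = [sequence[i:i + 3] for i in range(0, len(sequence) - 2, 3)]
        for codon in codons:
            # The codon's GC content selects a duration class for its notes.
            gc = sum(base in "GC" for base in codon)
            for base in codon:
                notes.append((PITCH_MAP[base], DURATIONS[gc % len(DURATIONS)]))
        return notes

    print(musify("ATGGCGTTA")[:6])

Reading the sequence codon by codon, as in biological translation, gives the composer a second structural level (duration classes) on top of the note-by-note pitch mapping.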

Radio Telescope data from Jodrell Bank: "Touch the Stars" by Mark Pilkington

Commissioned by FutureEverything for the opening of the Futuresonic festival, Manchester, 2009, "Touch the Stars" is a collaboration between NOVARS and the Astrophysics department at the University of Manchester, UK.

Live data from the radio telescope at Jodrell Bank, Cheshire, is turned into sound: a sonification of the universe. Performers: musician Mark Pilkington (UK) and astrophysicist Tim O'Brien (UK).

State Space Models and Sonification: Composing with Numbers by Rosalia Soria-Luz

This research explores the sonification of state-space mathematical models, commonly used in control engineering, as a tool for music composition. The paper presents three specific representations, an inverted pendulum, a spring-mass-damper system, and an abstract model, implemented in Max/MSP, along with ways of using the different output states as control-rate signals for sound synthesis and sound transformation. Sonification examples are provided to show the possibilities for electroacoustic composition.

In recent years sonification has gained importance as a means to analyse and understand different types of information. Although the main goal is to represent the information of interest accurately using sound, the aesthetic aspect has played an important role in designing sonic representations. Furthermore, sonifications have been intentionally used as sources for electroacoustic composition, combined with various abstract sound synthesis and sound transformation techniques.

Modelling, on the other hand, has entered the sound synthesis field with the aim of reproducing musical instrument sounds based on their physical characteristics, rather than recreating them using abstract sound synthesis techniques.

This paper shows the combination of real-time state-space mathematical models and sonification. The mathematical models are not intended to represent musical instruments or any acoustic characteristic, but rather physical system behaviours or mathematical concepts that can be sonified. The paper presents the implementation of three state-space models in Max/MSP: a controlled inverted pendulum, a spring-mass-damper system, and an abstract representation. These models can be used as an interactive real-time sonification tool, and the resulting sound materials can be used as source material for electroacoustic composition.

STATE SPACE REPRESENTATION

State-space models consist of n differential equations grouped into a vector-matrix equation. A model comprises a set of inputs, outputs and state variables expressed as vectors [4]. In this paper the models have a single input and multiple outputs grouped in the model's output vector. State-space models are not limited to physical systems; they can also represent chemical, economic, electrical and other systems. The general state-space representation of a linear system is as follows:

ẋ(t) = Ax(t) + Bu(t)    (1)
y(t) = Cx(t) + Du(t)
where x(t) ∈ R^n is the n-dimensional state vector, u(t) ∈ R^m is the m-dimensional input vector, and y(t) ∈ R^p is the p-dimensional output vector [5]. A, B, C and D are constant matrices defined by the system's parameters: A describes the system's internal dynamics and B how the input drives the states. The C matrix determines which states appear at the model's output, and D is a feedthrough matrix connecting the input directly to the output. In order to obtain a digital representation of such a system it is necessary to sample it. A sampled version of (1) with sampling period Ts can be represented as follows:

x(k+1) = Φx(k) + Γu(k)    (2)
y(k) = Cx(k) + Du(k)

where Φ = e^(A Ts) and Γ = (∫₀^Ts e^(A s) ds) B are obtained by considering a zero-order sample-and-hold circuit.
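
As a concrete illustration, the discretisation above can be computed numerically and the resulting states iterated at control rate before being mapped to synthesis parameters. The following Python/NumPy sketch uses a spring-mass-damper model with illustrative parameter values; it is a stand-in for the idea behind the paper's Max/MSP implementation, not a reproduction of it:

    import numpy as np
    from scipy.linalg import expm

    # Spring-mass-damper: states x1 = position, x2 = velocity.
    # Parameter values are illustrative, not taken from the paper.
    m, c, k = 1.0, 0.2, 4.0                       # mass, damping, stiffness
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    B = np.array([[0.0], [1.0 / m]])
    Ts = 0.01                                     # sampling period (100 Hz)

    # Zero-order-hold discretisation via the augmented matrix exponential:
    # expm([[A, B], [0, 0]] * Ts) = [[Phi, Gamma], [0, I]]
    n, p = A.shape[0], B.shape[1]
    M = np.zeros((n + p, n + p))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * Ts)
    Phi, Gamma = Md[:n, :n], Md[:n, n:]

    # Iterate the difference equation x(k+1) = Phi x(k) + Gamma u(k).
    x = np.zeros((n, 1))
    trajectory = []
    for kstep in range(500):
        u = 1.0 if kstep == 0 else 0.0            # impulse excitation
        x = Phi @ x + Gamma * u
        trajectory.append(x.ravel().copy())

    # x[0] (position) could modulate e.g. an oscillator's frequency,
    # x[1] (velocity) its amplitude, as control-rate signals.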

Molecular Sonification by Falk Morawitz

What is Molecular Sonification?

The term molecular sonification encompasses all procedures that turn data derived from chemical systems into sound. Nuclear Magnetic Resonance (NMR) data of hydrogen (¹H) and carbon (¹³C) nuclei are particularly well-suited data sources for molecular sonification, as the corresponding resonant processes already lie in the 0-20,000 Hz frequency range, making additional frequency transposition unnecessary. NMR data for many hundreds of thousands of molecules are easily accessible via online databases. The structure of the molecule being analysed is directly related to the features present in its ¹H and ¹³C NMR spectra. It is therefore possible to select molecules according to their structural features in order to create sounds in preferred frequency ranges and with the desired frequency content and density. Using the sonification methodology developed and presented in this paper, it was possible to create an acousmatic music composition based exclusively on NMR data as the source of sound. It is argued that NMR sonification, as a sound creation methodology based on scientific data, has the potential to be a potent tool for contextualising extra-musical ideas, such as Alzheimer's disease or global warming, in future works of art and music.
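
Because the resonance frequencies already sit in the audible range, the simplest rendering is additive synthesis over a peak list. The sketch below assumes an invented list of (frequency, relative intensity) pairs and a simple decay envelope; it illustrates the general approach rather than Morawitz's actual pipeline:

    import numpy as np
    from scipy.io import wavfile

    # Direct NMR-peak audification: each resonance becomes a sine partial
    # at its literal frequency. The peak list is invented for illustration,
    # not real NMR data.
    peaks = [(430.0, 1.0), (1285.0, 0.6), (2120.0, 0.3)]  # (Hz, rel. intensity)
    sr, dur = 44100, 3.0
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)

    signal = sum(amp * np.sin(2 * np.pi * f * t) for f, amp in peaks)
    signal *= np.exp(-t / 1.5)                   # simple decay envelope
    signal /= np.max(np.abs(signal))             # normalise to full scale

    wavfile.write("nmr_sonification.wav", sr, (signal * 32767).astype(np.int16))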

From Science to Sound: Chemical Spectroscopy as the Basis of Molecular Sonification

All molecules and atoms interact with the electromagnetic spectrum, from low-frequency radio waves to high-frequency γ-ray radiation. For example, microwave radiation can change the rotational state of polar molecules, while infrared radiation changes their vibrational state. All of these interactions are quantised: due to quantum physical limitations and rules, molecules can essentially only absorb and re-emit radiation at specific frequencies. If two molecules have different chemical structures, the frequencies at which they absorb and emit electromagnetic waves will likely differ, too. This correspondence between structure and quantised energy absorption led analytical chemists and physicists to develop a wide range of spectroscopic methods, such as infrared, NMR, Raman and UV-Vis spectroscopy, each suited to eliciting different aspects of a molecule's structure.

Microbial Music: "Oxidising the Spectrum II" by Ricardo Climent

"Oxidising the spectrum" (2004) was born as an interdisciplinary collaboration between composer Ricardo Climent and chemical engineer Quan Gan. This interactive installation explores the possibilities of Microbial electrochemistry in the compositional environment.

To reinvent the chemistry laboratory as a musical instrument, the composer conceived a system that could generate and manipulate biological patterns, using Microbial Fuel Cells (MFCs) to generate electricity for musical mapping. It includes five families of microbial cultures (the Microbial Ensemble), which behave as a musical quintet, while their low voltages are converted into musical expression during the live performance. Compositionally, the manipulation of live organisms seeks to "re-engineer the process of sonification" by reconstructing electrical patterns that are sonically tested. After being trained by Dr Gan for a year at the chemistry laboratory at Queen's University Belfast, Climent constructed the interactive system and the piece.
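
The project description does not document the exact mapping, but a minimal sketch can show how low MFC voltages might be quantised onto a scale, one voice per culture. The voltage range and the pentatonic scale below are assumptions for illustration, not Climent's actual mapping:

    # Sketch: quantise a microbial fuel cell's low DC voltage to a scale degree.
    PENTATONIC = [60, 62, 64, 67, 69]      # MIDI pitches, C major pentatonic
    V_MIN, V_MAX = 0.0, 0.8                # assumed MFC voltage range (volts)

    def voltage_to_note(volts: float) -> int:
        """Clamp a voltage reading and map it linearly onto the scale."""
        v = min(max(volts, V_MIN), V_MAX)
        idx = int((v - V_MIN) / (V_MAX - V_MIN) * (len(PENTATONIC) - 1))
        return PENTATONIC[idx]

    # Five cultures, five simultaneous readings -> a chord for the "quintet".
    readings = [0.12, 0.33, 0.47, 0.61, 0.78]
    print([voltage_to_note(v) for v in readings])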

"Oxidising the Spectrum II", premiered at ZKM's sonification symposium Strömungen in 2016 in Karlsruhe, is a bio-simulator created in Unreal Game Engine 4, which mimics the microbial system in the original installation and its original data-extracted set musical soundworld. Therefore, biologically safe, since the microbes are virtual.

The project was financed by the Arts Council of Northern Ireland, the Sonic Arts Research Centre and Belfast City Council, the latter providing in kind the Tropical Ravine (Botanic Gardens, Belfast) for its public premiere during the Sonorities Festival. A month-long installation was featured at Temple Bar, Dublin. Since then, the author has presented the research in numerous forums, and it was widely covered in mainstream Spanish newspapers and radio, as well as in SGAE's magazine CREA.

The Microbial Ensemble comprises: Saccharomyces diastaticus, Janthinobacterium lividum, Pichia anomala, Saccharomyces cerevisiae strain K5-5A, Kluyveromyces lactis and Leuconostoc mesenteroides. Musicologist Aine Leamy discussed the work as part of her thesis for the National University of Ireland, The Phenomenon of the Scientist-Composer (August 2008).

Maritime data for the acousmatic: "Sea Lantern" by David Berezan

Sea Lantern - electroacoustic sound installation, 4-channel audio incorporating real-time data from sea buoys.

Evoking the sound of the sea, David Berezan has recorded real-time data from ocean buoys across the world. These buoys, exposed to ever-changing sea and weather patterns, generate data that Berezan has used to create this melodic soundscape. Named after the John Rylands Library's beautiful Lantern Gallery, the piece is a haunting blend of sound and space that brings to mind the deep and wondrous ocean.
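
As an illustration of the general idea, two measurements commonly reported by sea buoys, significant wave height and dominant wave period, could be mapped onto drone parameters. Both the data source and the mapping below are assumptions for demonstration, not Berezan's installation design:

    # Sketch: map assumed buoy measurements onto parameters of a drone voice.
    def buoy_to_drone(wave_height_m: float, wave_period_s: float):
        """Return (base_freq_hz, mod_rate_hz, amplitude) for a drone voice."""
        base_freq = 55.0 * (1.0 + wave_height_m)    # taller waves -> higher drone
        mod_rate = 1.0 / max(wave_period_s, 1.0)    # swell period -> slow LFO rate
        amplitude = min(wave_height_m / 10.0, 1.0)  # cap at full scale
        return base_freq, mod_rate, amplitude

    print(buoy_to_drone(2.4, 9.0))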

Virtuoso: a collaborative project between NOVARS, AMBS and EON Reality Manchester

Virtuoso (HSIF award) is a collaborative project emerging from Alliance Manchester Business School's Creative and Digital Sandpit to teach high-level musical performance using biofeedback and Virtual Reality.

What is it: Virtuoso, an HSIF-funded project, is a collaborative effort between University of Manchester academics from Music (NOVARS Research Centre), the Alliance Manchester Business School and EON Reality (https://www.eonreality.com/), a company specialising in Augmented Reality (AR) and Virtual Reality (VR) software for education and training.

POC: A two-month task was set to build a Proof of Concept (POC) by developing a multimodal capture platform able to collect a range of real-time biometric data alongside other data, such as pitch and live movement. We wanted to see whether it was technically possible to spot performative differences between professional and intermediate-level violin players on a set of prepared musical exercises. We also wanted to recreate the player in 3D and VR, to connect the extracted data with virtual behaviour and visualise those differences.
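
One simple feature that such a platform might extract from the captured pitch data is pitch stability on a sustained note, measured as the standard deviation of the fundamental-frequency track in cents. The sketch below uses synthetic stand-ins for captured data; the feature and the jitter values are illustrative assumptions, not results from the POC:

    import numpy as np

    def pitch_stability_cents(f0_hz: np.ndarray) -> float:
        """Standard deviation of a fundamental-frequency track, in cents."""
        cents = 1200.0 * np.log2(f0_hz / np.median(f0_hz))
        return float(np.std(cents))

    # Synthetic f0 tracks around A4 = 440 Hz with different amounts of jitter.
    rng = np.random.default_rng(0)
    pro = 440.0 * 2 ** (rng.normal(0, 3, 500) / 1200)       # ~3-cent jitter
    student = 440.0 * 2 ** (rng.normal(0, 15, 500) / 1200)  # ~15-cent jitter

    print(pitch_stability_cents(pro), pitch_stability_cents(student))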

Implementation and context: Virtuoso's first implementation is a biofeedback-based training system that helps violinists understand how to reach the highest levels of musical performance by learning from a VR, and potentially AR, training system. Virtuoso's context is a musician learning a practice-led discipline at a high level via technology enhancement, with the aim of drawing conclusions that may apply to other disciplines.

Aims: Virtuoso aims to demonstrate ways of creating increased value for learners using biofeedback tools. In particular, it asks whether our system can provide the knowledge needed to understand what distinguishes a very good instrumental player from a true virtuoso. It also aims to complement, not replace, traditional one-to-one/one-to-few instrumental tutoring, while cultivating a culture that supports change in learning methods and attracts talent. In the long run, the system aims to maximise the potential for acquiring knowledge through syndication, using digital tools and decentralised data.

Team:
Jane McConnell - EON Reality, Manchester.
Ricardo Climent - Music, NOVARS Research Centre, School of Arts, Languages and Cultures, The University of Manchester.
Richard Allmendinger - Lecturer in Data Science, Alliance Manchester Business School (Alliance MBS), The University of Manchester.
Kieron Flanagan - Innovation Management and Policy Division, Alliance MBS, The University of Manchester.

Violin players:
Linda Jankowska and Ignacio Lara Romero.