MANTIS SONIFICATION FESTIVAL 3-4 MARCH 2018
Saturday, 3 March
Martin Harris Centre for Music and Drama
University of Manchester, England, UK
Bridgeford St, Manchester M13 9PL
Room G16 (ground floor)
Hosted by NOVARS Research Centre
Free to attend
Coffee and refreshments served
10.30 – 12.30 NWCDTP student-led workshop
13.30 – 15.35 Paper Session 1
16.00 – 17.20 Paper Session 2
10.30 – 10.50
Molecular Sonification
10.50 – 11.10 Michelle Stephens
Noise in Images
11.10 – 11.30 Gloria Lanci
Mapping Cities with Sound
11.30 – 11.50 Daniel Alcaraz
The Red Hen Database as basis for sonification
11.50 – 12.10 Sam Skinner
The New Observatory
12.10 – 12.30 Group Discussion
13.30 – 13.55
Sonification as a means to generative music
13.55 – 14.20 Simon Blackmore
Data communications as a musical performance
14.20 – 14.45 Núria Bonet
Orchestral Sonification: Harnessing the listener's musical knowledge
14.45 – 15.10 Ricardo Climent
The Virtuoso Project
15.10 – 15.35 Michaela Palmer
Touch the Stars
15.35 – 16.00 Break
16.00 – 16.25 Samuel Van Ransbeeck
Outros Registros: The sound and silence of police violence in the Olympic city
16.25 – 16.50 Ignacio Pecino
Data Synchronisation in MAIA, a mixed reality simulation
16.50 – 17.20 NOVARS Postgraduates
Compositional Approaches to Sonification – Case Studies
What do molecules sound like? In this talk, methodologies and findings for the use of chemical data in electroacoustic music composition are presented and evaluated: from aesthetic inquiries into the audification of nuclear magnetic ringing to the sonification of complex chemical systems such as the pollution cycles of the Baltic Sea or DNA patterns of the passenger pigeon.
Generative design is used as a digital process for jacquard weave design, reanimating the historical jacquard pattern archives of Macclesfield Silk Museum & Paradise Mill. The project positions the researcher as a practitioner, exploring the hybrid connections between digital and hand-made modes of expression.
This paper presents ways in which sound and mapmaking are brought together in urban contexts and explored in artworks. Drawing on recent academic work around concepts such as soundscape, cartophony and cybercartography, it discusses experimental artworks involving sound data and geographic/spatial information in cities. The aim is to question how cities are perceived and experienced in our present digital era, in which data is widely used to convey information and produce knowledge.
The NewsScape library is a multimodal database containing more than 300,000 hours of television news as well as a corpus of more than 2 billion words. This presentation will focus on the characteristics of the database and the tools it offers, as well as research opportunities in fields such as language, gesture and sound.
Generative music (as defined by Brian Eno) unfolds in real time, using a set of probabilistic rules to generate a constantly changing musical experience for an audience. Such generative works have no real beginning or end in the traditional sense; they are open-ended. I (and others) see a natural correspondence with open-ended, real-time datasets such as weather reports and financial data, and have used the artistic sonification of such ever-changing data as a means to ever-changing generative music. It is this correspondence that I will explore here, alongside the creative compositional opportunities (and dilemmas) that such artistic sonification creates. The paper will examine current trends, spotlighting contemporary works by artists such as John Luther Adams and John Eacott, and demonstrate my personal approach to the problem. I will discuss my approaches to compositions that have used data from fluctuations in the national electricity grid, the passage of the moon through the sky, the constantly shifting corpus of Wikipedia, and the machinations of international currency markets. For each example I will explore the tension between the need to map, translate and interpret the data to create a musical piece that satisfies my artistic goals, engages listeners and rewards repeated listening, and the need to remain 'true' to the structure of the data itself.
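The data-to-music correspondence described above can be illustrated with a minimal, hypothetical sketch: a stream of numeric readings (here, electricity-grid frequency values, one of the data sources mentioned) is mapped onto a pentatonic pitch grid, so the music keeps changing for as long as the data does. The scale choice, range and scaling below are illustrative assumptions, not the author's actual mapping.

```python
# Minimal generative-sonification sketch: map an endless numeric stream
# (e.g. electricity-grid frequency readings) onto a pentatonic scale.
# Scale and ranges are illustrative assumptions, not the author's mapping.

PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets within one octave

def reading_to_midi(value, lo, hi, base_note=60, octaves=2):
    """Scale a reading in [lo, hi] onto a MIDI note on a pentatonic grid."""
    frac = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    steps = int(frac * (len(PENTATONIC) * octaves - 1))
    octave, degree = divmod(steps, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[degree]

# Grid frequency hovers around 50 Hz; map 49.8-50.2 Hz onto two octaves.
stream = [49.95, 50.02, 50.17, 49.83, 50.00]
notes = [reading_to_midi(v, 49.8, 50.2) for v in stream]
print(notes)  # -> [67, 69, 79, 60, 69]
```

Because the mapping is stateless, the piece has no beginning or end in the traditional sense: it simply renders whatever the live feed delivers next.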
This paper explores how computer signals can be conceived as musical patterns that can be performed and translated into information. If sonification can be described as the conversion of data to audio, here I explore a form of reverse sonification in which sounds produced by the body are converted into visually displayed text. I will discuss the relevance of this work within the fields of music, human-computer interaction, computer music and sonification, and will demonstrate software I have developed that converts rhythmic sounds to ASCII text by mimicking the functionality of a serial port connection (Blackmore, 2017). The paper draws on the work of Shintaro Miyazaki, who has argued that the signals that transmit information and form the basis of our digital world should be analysed by the 'ears, the hands and the whole body' (Berry and Dieter, 2015). I continue this debate by suggesting that to really understand the logic of digital signals it is useful to slow them down and learn to perform them, as well as simply listening to them. In this work I explore data signals at slow speeds, where they are both performable and clearly perceivable by machines and humans alike. By entering into the signal at this level, my hope is that both performer and audience come to understand the structure of data communications more clearly. The work also aims to explore the materiality of information and its relationship with the body, as examined by writers such as Anna Munster (Munster, 2006). I go on to discuss how techniques for learning traditional musical systems that employ rhythmic improvisation, such as flamenco, Cuban rumba, batá and Indian konnakol, could be useful in thinking about how signals can be explored musically, and consider this in relation to the history of process music.
I will conclude with some thoughts on extending these ideas towards multiple performers, and consider whether it is possible to discover new ways of interacting with computers and with each other by learning to perform the rhythmic binary systems they use to securely transmit information.
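The serial-port mimicry described above can be sketched as follows. Assume beats have already been classified as bits (sound = 1, silence = 0) and are framed like a standard 8N1 serial line: a start bit of 0, eight data bits least-significant-bit first, and a stop bit of 1. Blackmore's actual encoding is not documented in the abstract, so this framing is an assumption chosen for illustration.

```python
# Hypothetical decoder in the spirit of the serial-port mimicry described
# above: performed beats become bits, framed as 8N1 serial (start bit 0,
# eight data bits LSB-first, stop bit 1). The framing is an illustrative
# assumption, not the documented encoding of Blackmore's software.

def decode_frames(bits):
    """Decode a list of 0/1 bits as consecutive 10-bit 8N1 frames."""
    chars = []
    i = 0
    while i + 10 <= len(bits):
        frame = bits[i:i + 10]
        if frame[0] != 0 or frame[9] != 1:
            raise ValueError("bad framing at bit %d" % i)
        byte = sum(bit << n for n, bit in enumerate(frame[1:9]))  # LSB first
        chars.append(chr(byte))
        i += 10
    return "".join(chars)

# 'H' = 0x48 = 0b01001000 -> data bits LSB-first: 0,0,0,1,0,0,1,0
frame_H = [0, 0, 0, 0, 1, 0, 0, 1, 0, 1]
print(decode_frames(frame_H))  # -> "H"
```

At performance tempos of a beat or two per second, a single ASCII character takes several seconds to transmit, which is what makes the signal simultaneously readable by a machine and performable by a human.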
Transmitting information to a listener involves a complex communication process. The sonification designer makes data handling and compositional decisions to facilitate information transmission while the listener must decode the sonification process in order to receive said information. The novice needs to gain the knowledge required to understand the process, for example the data-to-sound mappings; further complications arise when attempting to transmit the meaning of a data set. This paper discusses the concept of orchestral sonification as used to compose the piece Waasgischwasch, which displays climate change data by modifying musical parameters of Rossini's William Tell Overture. This sonification method harnesses the listener's pre-existing musical knowledge, such as well-known pieces and connotations associated with dissonance and consonance, to simplify the communication process between data and listener.
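As a hypothetical illustration of the parameter-mapping idea (not the actual mapping used in Waasgischwasch), a climate variable such as temperature anomaly could be scaled onto two musical parameters every listener already understands, tempo and detuning, so that warming audibly distorts a familiar piece. All ranges and constants below are assumptions.

```python
# Illustrative sketch of orchestral sonification (not the mapping used in
# Waasgischwasch): scale a temperature anomaly in degrees C onto tempo and
# detuning, so that warming audibly distorts a well-known piece.

def anomaly_to_params(anomaly_c, base_bpm=120.0, max_detune_cents=50.0):
    """Map an anomaly in [-1.0, +1.5] degC to (tempo_bpm, detune_cents)."""
    frac = (anomaly_c + 1.0) / 2.5          # normalise to [0, 1]
    frac = max(0.0, min(1.0, frac))
    tempo = base_bpm * (1.0 + 0.25 * frac)  # up to 25% faster
    detune = max_detune_cents * frac        # up to a quarter tone sharp
    return round(tempo, 1), round(detune, 1)

for year, anomaly in [(1900, -0.2), (1980, 0.3), (2016, 1.0)]:
    print(year, anomaly_to_params(anomaly))
```

The design point is the one the paper makes: because listeners already know how the piece should sound, the deviation itself carries the data, and no mapping legend has to be learned first.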
Virtuoso, an HSIF-funded project, is a collaborative effort between University of Manchester academics from Music (NOVARS Research Centre), the Alliance Manchester Business School and EON Reality, a company specialising in Augmented Reality (AR) and Virtual Reality (VR) software for education and training. Technical goals: in June 2017 we set a two-month task to build a Proof of Concept (PoC) by developing a multimodal capturing platform able to collect a range of real-time biometric data, as well as other data such as pitch and live movement. We wanted to see whether it was technically possible to spot performative differences between professional and intermediate-level violinists, based on a set of prepared musical exercises. We also wanted to recreate the player in 3D and VR, to connect the extracted data with virtual behaviour and visualise those differences. Feasibility study: another important goal was to run a feasibility study, covering audience research, marketing and business impact, in relation to Virtuoso's potential market. Implementation and context: Virtuoso's first implementation is a biofeedback-based training system that helps violinists understand how to reach the highest levels of musical performance by learning from a VR, and potentially AR, training system. Its context is a musician learning a practice-led discipline at a high level via technology enhancement, with the aim of drawing conclusions that may apply to other disciplines. Aims: Virtuoso aims to demonstrate ways of creating increased value for learners using biofeedback tools; in particular, to see whether our system could provide valuable knowledge about what distinguishes a very good instrumental player from a true virtuoso performer.
It also aims to complement, not replace, traditional one-to-one / one-to-few instrumental tutoring, while cultivating a culture that supports change in learning methods and attracts talent. In the long run, the system aims to maximise the potential of knowledge acquisition facilitated by syndication, using digital tools and decentralised data. Virtuoso team: Research team: Prof. Ricardo Climent, Professor of Interactive Music Composition and Director of the NOVARS Research Centre; Dr Richard Allmendinger, Lecturer in Data Science, Alliance Manchester Business School (AMBS), with data-analysis expertise in the development and application of simulation, optimisation and machine-learning techniques to real-world problems in manufacturing, biology, economics and logistics. Non-academic partner: Jane McConnell, Marketing Executive, EON Reality UK, specialising in business-impact investigation and analysis of commercial viability. Also: Kieron Flanagan, Senior Lecturer in Science and Technology Policy, AMBS, University of Manchester; Dr Julia Handl, Senior Lecturer in Decision Sciences, AMBS, specialising in machine learning (analytics) and clustering (e.g. PCA, multiobjective clustering, pattern recognition, forecasting). Research assistants: Chris Rhodes (MusM Composition in Interactive Media and Electroacoustics, UoM) and Cameron Sand (current PhD in Computer Science, UoM).
This is a 4-year residency project in liaison with the NOVARS Research Centre, as part of the EASTN-DC European Network for Digital Creativity (the first NOVARS residency of four). MAIA is a mixed reality simulation framework designed to materialise a digital overlay of creative ideas in synchronised trans-real environments. It has been proposed as an extension of the author's previous research on Locative Audio (SonicMaps). For this purpose, a number of hyper-real virtual replicas (1:1 scale) of real-world locations are to be built in the form of 3D parallel worlds in which user-generated content is accurately localised, persistent, and automatically synchronised between both worlds: the real and the virtual counterparts. The focus is on fading the boundaries between the physical and digital worlds, facilitating creative activities and social interactions in selected locations regardless of the ontological approach. We should thus be able to explore, create, and manipulate "in-world" digital content whether we are physically walking around these locations with an AR device (local user) or visiting their virtual replicas from a computer located anywhere in the world (remote user). In both cases, local and remote agents will be allowed to collaborate and interact with each other while experiencing the same existing content (i.e. buildings, trees, data overlays, etc.). This idea resonates with some of the philosophical elaborations of Umberto Eco in his 1975 essay "Travels in Hyperreality", where models and imitations are not a mere reproduction of reality but an attempt at improving on it. In this context, VR environments in transreality can serve as accessible simulation tools for the development and production of localised AR experiences, but they can also constitute an end in themselves: an equally valid form of reality from a functional, structural, and perceptual point of view.
Data synchronisation takes place in the MAIA Cloud, an online software infrastructure for multimodal mixed reality. Any change in the digital overlay should be immediately perceived by local and remote users sharing the same affected location. For instance, if a remote user navigating the virtual counterpart of a selected location decides to attach a video stream to a building's wall, any local user pointing at that physical wall with an AR-enabled device will not only watch the newly added video but may also be able to comment on it with its creator, who will also be visible in the scene as an additional AR element.
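The synchronisation pattern described above, where every edit to the overlay is pushed to all users sharing a location and persists for later visitors, can be sketched as a small publish/subscribe hub. This is a generic illustration of the pattern, not MAIA's actual cloud API; the class and location names are invented.

```python
# Sketch of the overlay-synchronisation pattern described above, not
# MAIA's actual cloud API: content is published per location, kept as
# persistent overlay state, and pushed to every subscriber there
# (local AR users and remote VR users alike). Names are hypothetical.

from collections import defaultdict

class OverlayHub:
    def __init__(self):
        self.subscribers = defaultdict(list)  # location -> callbacks
        self.overlay = defaultdict(list)      # location -> content items

    def subscribe(self, location, callback):
        self.subscribers[location].append(callback)
        for item in self.overlay[location]:   # replay persistent content
            callback(item)

    def publish(self, location, item):
        self.overlay[location].append(item)   # content persists
        for callback in self.subscribers[location]:
            callback(item)

hub = OverlayHub()
seen_by_local_user = []
hub.subscribe("hypothetical_location", seen_by_local_user.append)
# A remote VR user attaches a video stream to a building's wall:
hub.publish("hypothetical_location", {"type": "video", "wall": "north"})
print(seen_by_local_user)
```

The replay step in subscribe is what makes the overlay persistent: a user who arrives at a location after an edit still receives the full existing content, while users already present receive changes as they happen.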
The presentation discusses Touch the Stars (2009), a sonification performance piece commissioned by FutureEverything and the Astrophysics department at Jodrell Bank. The piece was begun whilst I was undertaking the PhD 'A portfolio of original compositions' at the NOVARS Research Centre (2009–2012), where part of my research was to investigate the use of remote sensor data to create generative electroacoustic compositions. The talk will focus mainly on the creative challenges of generative processes, mapping, and the materialistic meta-modelling of a musical system.
This presentation will focus on the recent group exhibition The New Observatory, which Sam co-curated at FACT, Liverpool in 2017. The exhibition transformed FACT's galleries into an observatory for the 21st century and brought together an international group of artists whose work explores new and alternative modes of measuring, predicting, and sensing the world today. Sam will focus on a new commission featured in the exhibition, 53°32'.01N, 003°21'.29W, from the Sea, by sound artist David Gauthier, produced by filming a 'wave rider' buoy in Liverpool Bay and sonifying its movement, turning liquid waves into sound waves.
MusM students created ten musical études based on sonification of data, some of which were premiered at the MANTIS Sonification Festival.