The sounds and noises of our urban landscapes provide the artist with a fertile resource with which to reflect society back at itself. By listening to these sounds more attentively, we can gain a better understanding of our environment and insights into possible futures [1, 4].
In addition, urban landscapes are awash with hidden noises, such as network and communication data, subsonic and ultrasonic frequencies, radio signals, electromagnetic fields and the transitory data noise of their inhabitants.
This PhD will involve artist-led, practice-based research and development into ways in which listeners can create personal, interactive and immersive mobile sound art experiences, or interactive soundscapes. It is proposed that these interactive soundscapes could enhance perceptions of interior and exterior urban environments and provide insights into, and a more accurate document of, our contemporary urban landscapes.
Detailed research will be undertaken into ways of effectively capturing and processing environmental and personal data sources for the purpose of their sonification and inclusion within the soundscape.
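As a minimal illustration of the kind of data-to-sound mapping this research would explore, the sketch below applies simple parameter-mapping sonification: each environmental reading (here, hypothetical ambient temperature values; the data, ranges and file name are assumptions, not part of the proposal) is mapped linearly onto an audible pitch range and rendered as a short sine tone using only the Python standard library.

```python
import math
import struct
import wave

def map_value(value, lo, hi, out_lo=220.0, out_hi=880.0):
    """Linearly map a sensor reading in [lo, hi] to a frequency in [out_lo, out_hi] Hz."""
    t = (value - lo) / (hi - lo)
    t = min(max(t, 0.0), 1.0)  # clamp out-of-range readings
    return out_lo + t * (out_hi - out_lo)

def sonify(readings, lo, hi, path="soundscape.wav", rate=44100, tone_s=0.25):
    """Render each reading as a short mono sine tone and write the sequence to a WAV file."""
    frames = bytearray()
    n = int(rate * tone_s)
    for r in readings:
        freq = map_value(r, lo, hi)
        for i in range(n):
            # 16-bit signed sample at half amplitude
            sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# Hypothetical temperature readings in degrees Celsius, mapped across 10-25 °C
sonify([12.0, 15.5, 19.0, 23.5], lo=10.0, hi=25.0)
```

In practice the mapping would be far richer (timbre, spatialisation, granularity), but the same principle of scaling captured data into perceptual parameters underlies most sonification designs.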
It is also envisaged that this project will involve the research and development of a wearable audio and data capturing interface that would facilitate an accessible and mobile audio experience. Such an interface could incorporate binaural recording and monitoring, along with environmental sensing and smartphone technologies.
Increasing the granularity of the data and the fidelity of the audio captured would build significantly on previous work in this area, expanding options relating to data mapping, processing and sonification, and resulting in a more detailed soundscape.
The personal sound art creation and performance experience that this project could provide also opens up exciting opportunities for the generation of new musical encounters. The empowerment of the listener as performer and the embracing of ambient noise are considered key compositional elements in the realisation of such new musical experiences [1, 2].
This author is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (RCUK Grant No. EP/L015463/1) and Fusing Semantic and Audio Technologies for Intelligent Music Production and Consumption (FAST).