Whereas traditional Augmented Reality applications are primarily concerned with overlaying virtual visual information on the real world, Augmented Sonic Reality is concerned with overlaying virtual sound on the real world.
Studies related to Augmented Sonic Reality [3, 6] have largely concerned the relationship between virtual audio sources and virtual objects, rather than the relationship between virtual audio sources and real-world objects, or have focussed their attention on navigation, assistive technology, security, situated gaming and soundscape design.
Exhibition spaces, art galleries, cultural venues, museums and heritage sites present useful contexts within which to explore the potential of Augmented Sonic Reality, as they offer a rich source of interesting objects and locations with interesting stories to tell [5, 7]. Moreover, since Augmented Sonic Reality can be a solely audible intervention, it offers the opportunity to augment these, for the main part, visually orientated environments without visual distraction or interference [4, 7].
Furthermore, beyond revealing how rich and accessible cultural experiences for visitors could be realised, studies within such contexts could provide insights into the development of a framework which curators and artists could then use to deploy such experiences.
As such, I plan to conduct practice-based research with a variety of prototyped solutions deployed across different cultural venues, institutions and heritage sites, and also within the context of contemporary artistic practice.
It is intended that this practice-based research approach and its qualitative analysis, which will include the ethnomethodological analysis of both the visitor and curator experience, will build an in-depth understanding of the potential of such a system and contribute towards the development of a usable framework for both artists and curators.
The audible augmentation of the art or museum object also leads us to consider how, in relation to contemporary curatorial and artistic practice, these objects could advertise their presence, and potentially the presence of other related objects around them, beyond the traditional confines of line of sight.
This extension of the object’s, or the location’s, communicable and cultural footprint could even reach beyond the architecture of the cultural venue or institution itself, with sound-augmented objects or exhibitions advertising their presence and inciting interaction through related experiences beyond the walls of the gallery.
This is not to suggest that an Augmented Sonic Reality led experience would necessarily be better, just different: an experience with the ability to reframe existing collections, and to continue reframing them through the overlaying of differing audible content over time.
[1] Barfield, W. (ed.). (2015). Applications of audio augmented reality. In Fundamentals of Wearable Computers and Augmented Reality (1st ed.), 309–330. CRC Press, Boca Raton, FL.
[2] Benford, S., Adams, M., Tandavanitj, N., Row Farr, J., Greenhalgh, C., Crabtree, A., & Giannachi, G. (2013). Performance-Led Research in the Wild. ACM Transactions on Computer-Human Interaction, 20(3), 1–22.
[3] Dobler, D., Haller, M., & Stampfl, P. (2002). ASR – Augmented Sound Reality. In Proceedings of ACM SIGGRAPH '02, 148. San Antonio, TX.
[4] Kelly, C. (2017). Gallery Sounds. Bloomsbury, London.
[5] Seidenari, L., Baecchi, C., Uricchio, T., Ferracani, A., Bertini, M., & Del Bimbo, A. (2017). Deep Artwork Detection and Retrieval for Automatic Context-Aware Audio Guides. ACM Transactions on Multimedia Computing, Communications, and Applications, 13(3s), 1–21.
[6] Sodnik, J., Tomazic, S., Grasset, R., Duenser, A., & Billinghurst, M. (2006). Spatial Sound Localization in an Augmented Reality Environment. In Proceedings of OZCHI '06. Sydney, Australia.
[7] Vazquez-Alvarez, Y., Oakley, I., & Brewster, S. A. (2011). Auditory display design for exploration in mobile audio-augmented reality. Personal and Ubiquitous Computing, 16(8), 987–999.
The author is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (RCUK Grant No. EP/L015463/1) and Fusing Audio and Semantic Technologies (FAST).