Volume: 46 Part: 3 Proceedings of the Institute of Acoustics

Towards an internet of sounds based method for acoustic profiling of rooms and audio spaces

I. Ali-MacLachlan, Birmingham City University, Birmingham, UK
S. Hall, Royal Birmingham Conservatoire, Birmingham, UK
J. Crawford, Ingenious Audio, Edinburgh, UK

1 INTRODUCTION

The focus of this paper is the methodology used for the acoustic measurement, modelling and auralisation of a number of historic buildings in Coventry, as part of an ongoing research project named Aural Histories: Coventry 1451-1642. In order to verify the accuracy of acoustic models, impulse response measurements can be used as a comparison where physical spaces exist1. Collection of data from multiple loudspeaker and microphone pairs is time consuming but necessary in providing a more complete analysis of the space. While traditional cable-based systems for impulse response collection are widely used, they are time-consuming, cumbersome, and impractical in large or culturally sensitive spaces. This paper proposes a novel wireless method, leveraging Internet of Sounds principles, to streamline data collection while maintaining accuracy and flexibility.

The Aural Histories project explores the acoustic environments of key historic buildings in Coventry, focusing on the period's architectural and cultural transformations. Two key sites, Holy Trinity Church and St. Mary's Guildhall, have been acoustically modelled and the results compared to reverberation times derived from the capture of room impulse responses. This information will be used to inform and calculate the reverberation time in St. Michael's Church, a historically significant building that was tragically destroyed during the Coventry Blitz of 1940. The future analysis will cover both pre- and post-Reformation architectural modifications, which significantly impacted the acoustic properties, especially in terms of speech clarity and music performance. For example, St.
Michael's Church, the largest parish church in England by the late Middle Ages, underwent significant structural changes, including the addition of large Gothic windows, which reduced surface reflections, altering the reverberation time. Post-Reformation modifications further altered the space to prioritise speech intelligibility for Protestant services2,3.

In addition to benchmark acoustic measurements taken in accordance with ISO 3382-1 and the creation of accurate acoustic models, capturing high-fidelity recordings of sound sources is crucial for creating accurate auralisations. These recordings allow us to simulate how sound would have interacted with historic spaces, offering deeper insight into the acoustics of these environments. This paper also explores recording strategies to complement the proposed wireless data collection method, ensuring a comprehensive approach to room profiling.

2 HISTORY AND EVOLUTION OF AURALISATION IN CHURCH AND CATHEDRAL PROJECTS

Auralisation involves creating audible simulations of sound in specific environments using numerical modelling, which is crucial for understanding the complex acoustics of churches. These buildings present unique challenges due to their large volumes, irregular shapes, and long reverberation times, which influence the perception of both music and speech. Auralisation techniques are used to study historical soundscapes, inform acoustic design, and guide restoration projects to achieve desired acoustic outcomes.

The use of computer modelling for acoustic auralisation of churches started to gain popularity in the 1980s and 1990s, as increasing computing power allowed more comprehensive simulations of complex acoustic settings, such as churches and cathedrals, to be produced. Early initiatives were frequently research-driven, centred on developing architectural acoustics for new designs as well as on conserving historic spaces.
Odeon, a well-established software tool for acoustic modelling and simulation, is designed to analyse and predict sound behaviour in architectural spaces. It excels in handling complex geometries and simulating various acoustic environments. Odeon was selected as the acoustic modelling platform for the Aural Histories project due to its ability to manage intricate architectural structures, calculate sound propagation based on the directivity of sound sources, and output auralisation files in multiple formats, including Binaural and B-Format. Odeon has also featured in a number of similar projects4–6.

In order to validate the results produced with Odeon, a swept sine tone method was implemented using Room EQ Wizard. This method has been used successfully in similar cultural heritage-based acoustics projects7,8, and was used alongside room impulse response (RIR) measurements where a balloon burst was the impulse source9. Although traditional cable-based systems for data collection have played a significant role in the auralisation of complex environments, their limitations, especially in large historic buildings, prompt the need for innovative solutions.

3 EXPLORING REVERBERATION AND DATA COLLECTION TECHNIQUES

The primary goal of data collection in this study was to obtain accurate room impulse responses (RIRs) and detailed reverberation time (RT) measurements, crucial for understanding the acoustic behaviour of the spaces under investigation. To achieve this, we utilised Room EQ Wizard (REW). The loudspeaker used was the Mackie SRM450. For capturing the room responses, we employed the MM2-USB measurement microphone, which was calibrated using a Cirrus calibrator, with levels compared to a Cirrus Optimus+ Class 2 sound level meter. This ensured that the recorded data were accurate in terms of sound pressure level (SPL).
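The swept sine tone technique can be illustrated with a short sketch: generate an exponential (logarithmic) sweep and its inverse filter, then deconvolve the recorded response into an RIR. This is a minimal illustration of the general method, not REW's internal implementation; it assumes NumPy is available and uses illustrative parameter choices (20 Hz to 20 kHz, 48 kHz sample rate).

```python
import numpy as np

def exponential_sweep(f1=20.0, f2=20000.0, duration=10.0, fs=48000):
    """Exponential sine sweep and its amplitude-compensated inverse filter."""
    t = np.arange(int(duration * fs)) / fs
    rate = np.log(f2 / f1)
    sweep = np.sin(2 * np.pi * f1 * duration / rate
                   * (np.exp(t * rate / duration) - 1))
    # Time-reversed sweep with a decaying envelope, compensating the sweep's
    # pink (1/f) energy distribution so deconvolution yields a flat impulse.
    inverse = sweep[::-1] * np.exp(-t * rate / duration)
    return sweep, inverse

def deconvolve_rir(recording, inverse):
    """Convolve the recorded sweep response with the inverse filter (via FFT)
    to recover the room impulse response."""
    n = len(recording) + len(inverse) - 1
    nfft = 1 << (n - 1).bit_length()  # next power of two avoids circular wrap
    spectrum = np.fft.rfft(recording, nfft) * np.fft.rfft(inverse, nfft)
    return np.fft.irfft(spectrum, nfft)[:n]
```

In practice the `recording` argument is the microphone capture of the sweep played through the loudspeaker; deconvolving the dry sweep with its own inverse filter produces an impulse at the end of the sweep, which is a useful sanity check.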
The decision to use a swept sine tone as the excitation signal is grounded in its advantages over traditional methods such as impulse or noise excitation. Following Farina's seminal work on impulse response measurement using sine sweeps, it became clear that the swept sine method is superior in terms of dynamic range, frequency resolution, and noise rejection10,11. This method minimises the effect of ambient noise and distortion by distributing the signal energy evenly across the frequency spectrum, making it ideal for acoustically complex environments such as churches and cathedrals that often have high reverberation times and complex reflective properties12. A sine sweep allows for precise deconvolution of the recorded signal, producing a clean impulse response that is critical for both reverberation time (RT60) and frequency response measurements.

As a secondary method to check the reliability of the impulse response data, balloon bursts were employed. Balloon bursts provide a quick and practical method for obtaining an RIR, especially in situations where equipment setup needs to be minimal or rapid. While less precise than sine sweeps, balloon bursts can be an effective validation tool, offering a rough yet useful approximation of the acoustic properties of a space. This method is also particularly advantageous in culturally sensitive environments, where long setup times or the use of intrusive equipment such as long cables might be inappropriate or restricted.

In summary, the combination of swept sine tone and balloon burst methods provides a comprehensive approach to data collection, ensuring both precision and flexibility in the challenging environments of churches and heritage sites. The cable-based measurements presented here serve as a reference for the wireless method proposed later in this paper.
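Once an RIR has been captured, RT60 is conventionally estimated by Schroeder backward integration, the evaluation approach standardised in ISO 3382-1: the decay slope is fitted over a limited range (here -5 dB to -35 dB, a T30 measurement) and extrapolated to 60 dB. A sketch of that calculation, assuming NumPy:

```python
import numpy as np

def rt60_from_rir(rir, fs, decay_db=30.0):
    """Estimate RT60 from an impulse response via Schroeder backward
    integration, fitting the decay between -5 dB and -(5 + decay_db) dB
    and extrapolating the slope to a full 60 dB decay."""
    energy = np.asarray(rir, dtype=float) ** 2
    # Schroeder curve: energy remaining after each instant, in dB re total
    schroeder = np.cumsum(energy[::-1])[::-1]
    schroeder_db = 10 * np.log10(schroeder / schroeder[0])
    t = np.arange(len(energy)) / fs
    i0 = np.argmax(schroeder_db <= -5.0)              # skip the direct sound
    i1 = np.argmax(schroeder_db <= -(5.0 + decay_db))
    slope, _ = np.polyfit(t[i0:i1], schroeder_db[i0:i1], 1)  # dB per second
    return -60.0 / slope                              # seconds for 60 dB decay
```

Running the function on a synthetic exponential decay with a known decay rate recovers the expected RT60, which makes it straightforward to validate before applying it to measured responses.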
4 RECORDING METHODS FOR MUSICAL CONTENT

4.1 Recording Strategies for Historically Informed Musical Performance

Successful recordings in historically informed musical performance (HIP) generally adhere to conventions that have been established for over a century, with all performers recording simultaneously in the same live space. This practice, often described as the classical "Decca" tradition, emphasises capturing performances in acoustically appropriate locations, frequently concert halls or sacred spaces13,14.

However, the Aural Histories project presents different requirements, as auralisation demands a capture approach that contrasts starkly with traditional methods. Performers must be recorded in as acoustically neutral an environment as possible, while still maintaining the musical synergy and cohesion of the ensemble. Performers should be able to hear and respond to each other in real time, as they would in a shared space, but the recording process must also allow each individual's sound to be isolated and controlled as a distinct sound source within a virtual acoustic environment. This necessity dictates a modified capture process that diverges from standard practices.

From a sonic standpoint, the optimal approach would involve recording each performer individually within an anechoic chamber, thus eliminating any acoustic colouration. However, this presents a clear tension between sonic ideals and musical effectiveness in ensemble performance, highlighting the challenge of balancing these competing needs.

For comparison, the Space, Place, Sound, and Memory: Immersive Experiences of the Past project (2018–2019), led by the University of Edinburgh College of Art, employed a stereo capture of an ensemble of singers within the anechoic chamber of the University of York7.
These stereo recordings were then placed into a digitally reconstructed version of Linlithgow Palace, both visually and acoustically, to be experienced through virtual reality (VR) technology.

A substantial earlier output is the Virtual Haydn project from McGill University, which sought to recreate authentic performance environments for Joseph Haydn's keyboard works. In this project, performances were executed on period instruments in a dry, though not anechoic, studio environment at the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT). Studio performances were monitored via headphones, with the signal processed in real time using impulse responses (IRs) from sampled spaces. This allowed for a nuanced exploration of how performance acoustics may influence musical interpretation15.

4.2 Source Capture

For the Aural Histories project, the initial recording sessions comprised short sample movements of works for historical instruments and chamber choir, conducted at the Royal Birmingham Conservatoire (RBC) studios. For instrumental pieces, the preproduction process included rehearsal and a performance at St. Mary's Church, a location integral to the project. As a reference, a basic yet conventional recording was undertaken using two simplified microphone arrays: an ORTF pair of MKH40 microphones at 0.5 × Dc (half the critical distance) and an AB pair of DPA4006 omnidirectional microphones at Dc.

When moving the production to the studio, we adopted directional, lower-sensitivity dynamic microphones (MD441, MD421, Beta58) positioned in close proximity to each performer, using acoustic screens to isolate them as much as possible. Performers were able to maintain visual contact with both the director and each other, as they would in a traditional rehearsal or performance setting, whilst minimising sound spill between the parts.
Each performer had access to personal foldback via headphones, through which they could hear a blend of the ensemble and an artificial reverb replicating the IR of St. Mary's.

Having captured ensemble performances in the dry studio setting, we proceeded with multitrack recording to establish a "gold-standard" reference for sonic comparison. In the University of Birmingham's anechoic chamber, performers overdubbed individually using flat-frequency-response microphones (MKH40), with the studio sessions serving as backing tracks. This process resulted in truly dry versions of the material, free from any spill between players or colouration from reflected sound. Multiple takes were recorded for each part, allowing for detailed postproduction editing at the individual instrument level. In the case of choral works, this method also enabled the use of multiple takes to create double-tracked parts, enhancing the virtual choir's size.

4.3 Qualitative Comparisons and Next Stages: Sonic, Musical, Practical

The studio captures were musically compelling and produced high-quality recordings. Sonically, the anechoic recordings exhibit an unparalleled purity, though the studio recordings, while significantly reducing spill between players, still present some degree of interaction when soloing individual performers. Nominally, this spill measures between -24 dB and -30 dB for this type of sound source, the equivalent of players being at least 30 m apart were they in free field. The use of less acoustically neutral dynamic microphones in the studio recordings does have an impact on the final sound, but this can be effectively compensated for through EQ that counters the microphones' frequency responses. Recording the ensemble together in a studio, though different from a resonant sacred space, remains preferable for capturing extended repertoire.
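The free-field distance equivalence quoted above follows from the inverse-square law: each doubling of distance from a point source loses 6 dB, so an attenuation of L dB corresponds to a distance of 10^(L/20) times the reference. A quick worked check, assuming for illustration a nominal 1 m close-microphone reference distance (that reference is our assumption, not a figure from the sessions):

```python
def equivalent_free_field_distance(spill_db, ref_distance_m=1.0):
    """Distance at which a free-field point source would be attenuated by
    |spill_db| relative to the reference distance (inverse-square law,
    6 dB per doubling of distance)."""
    return ref_distance_m * 10 ** (abs(spill_db) / 20.0)

# -24 dB of spill corresponds to roughly 15.8 m of free-field separation,
# and -30 dB to roughly 31.6 m, relative to a 1 m close-mic reference.
```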
This is particularly the case for some of the pieces that are being recorded or performed for the first time, based on newly produced editions by the musicology team. While overdubbing each performer individually in an anechoic chamber produces the cleanest acoustic results, it is very time-intensive, physically demanding for both musicians and production teams, and would significantly reduce the volume of material that could be captured within the project's timeframe and budget. Recording multiple performers simultaneously in the anechoic chamber may be feasible, but this introduces compromises due to sound spill and space limitations for larger ensembles. Thus, the project's next phases will prioritise capturing performances in a dry studio environment, with the anechoic recordings serving as a gold-standard reference for sonic comparisons.

4.4 Postproduction Processing Considerations

In postproduction, corrective equalisation is applied to the studio captures to further compensate for the frequency response of the dynamic microphones, balancing this with the need to additionally correct for instruments captured at closer-than-optimal proximities. Additionally, de-reverberation and de-spill tools such as iZotope RX, Waves Clarity, and Acon Deverberate are being tested for their potential to further improve individual musical parts without introducing negative sonic artifacts. Audible processing artifacts can include transient alteration and frequency-domain ringing, potential side effects that must be carefully managed, particularly in this context where the content is highly exposed. While training neural networks on historically informed instrumental and vocal performance recordings may offer potential future solutions, it is beyond the scope of this current project. Ultimately, this may not be needed for the work to be successful.
When working in the simulated acoustics, small amounts of low-level spill may prove not to be an issue, and may indeed contribute to a less contrived musical result.

5 PROPOSED FUTURE METHOD FOR CAPTURING REVERBERATION: WIRELESS MICROPHONE SYSTEM

The spaces under investigation in this project are physically large and benefit from a large number of varied sample points. As historic and important buildings, with a clear value to visitors, disruption must be kept to a minimum and physical damage is unacceptable. In each case, centralised repeat measurements and data storage would be of assistance.

Audio over Ethernet (AoE) provides a convenient and easily managed approach but requires a physically cabled Ethernet-based network. Installing physical cable for a survey would be extremely time-consuming to set up, highly disruptive, and costly; AoE is therefore more practical for fixed-use venues such as stages, studios, stadiums, and convention centres than for single-use testing setups. To maintain quality and reduce system latencies, AoE systems generally do not utilise audio data compression. In music recording and playback, the industry standard has long been CD quality: 16-bit resolution, 44.1 kHz sampling, and 1.4 Mbps bandwidth16. As such, most of these systems target c.1 Mbit/s per channel with a transmission latency typically less than 10 milliseconds.

There is therefore a compelling case for a central temporary data collection system, such as a laptop paired with multiple wireless, self-contained remote listening devices. At such ranges, the standard wireless options would be UHF- or VHF-based networks.
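The bandwidth figures above follow directly from uncompressed PCM arithmetic (sample rate × bit depth × channels); a quick check:

```python
def pcm_bitrate_bps(sample_rate_hz, bit_depth, channels):
    """Bandwidth of an uncompressed PCM audio stream in bits per second."""
    return sample_rate_hz * bit_depth * channels

cd_stereo = pcm_bitrate_bps(44100, 16, 2)    # 1,411,200 bps ≈ 1.4 Mbps
per_channel = pcm_bitrate_bps(48000, 24, 1)  # 1,152,000 bps ≈ 1.15 Mbps
```

A single 24-bit, 48 kHz channel comes to roughly 1.15 Mbps, consistent with the c.1 Mbit/s per channel targeted by uncompressed AoE systems.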
Wireless systems offer freedom of movement, easier setup, and less complex cabling, but many existing solutions suffer from common issues:

Limited Frequency Spectrum: Wireless audio systems, whether UHF, VHF, or digital, operate within limited frequency spectra shared with various other devices such as wireless microphones, Wi-Fi networks, and Bluetooth devices. This can lead to interference issues, resulting in dropouts, signal degradation, or even complete signal loss. This limitation becomes more prominent in crowded environments, such as music festivals or live performances, where multiple devices compete for the same frequencies.

Latency: Wireless audio systems introduce a certain amount of latency: the delay between the original sound and its reproduction through the wireless system. This latency can be noticeable, particularly in scenarios where precise timing is crucial, such as multi-track recording or live performances. A significant issue is the delay between the musician acting and hearing the sound. Susceptibility to this seems to vary by instrument, but the upper limits for guitar, bass and keys are around 16, 30 and 40.5 ms respectively17.

Signal Quality and Range: Wireless audio systems are susceptible to signal degradation and interference, especially over longer distances. The quality of the transmitted audio signal can be affected by obstacles, such as walls or other physical barriers, as well as electromagnetic interference from nearby electronic devices. This limitation can result in signal dropouts, reduced audio clarity, and compromised recording quality.

Battery Life and Reliability: Wireless audio systems rely heavily on battery-powered transmitters and receivers. The limited battery life of these devices can pose a challenge during extended recording sessions or live performances. Musicians and engineers need to carefully manage battery levels to ensure uninterrupted operation.
Additionally, wireless systems are prone to occasional signal dropouts or connection issues, which can disrupt the recording process and compromise the reliability of the system.

Cost and Complexity: Implementing wireless audio solutions in a music recording setup can be costly and complex. High-quality wireless systems with advanced features and reliable performance often come with a significant price tag, and require technical expertise and careful planning to optimise signal quality, minimise interference, and integrate with existing recording equipment.

Point to Point: Such systems are typically proprietary closed radio networks, in which a transmitter communicates only with a dedicated receiver; conceptually, only one cabled connection is being replaced.

Alongside this, the 802.11 Wi-Fi standard has continued to evolve and is a pervasive standard for data transmission and networking in residential and commercial settings, offering a familiar radio standard. IEEE 802.11be, at the time of writing, is the latest draft standard for the technology widely known as Wi-Fi 718.

5.1 Audio over Wi-Fi

Wi-Fi, particularly with advancements in the 802.11 specification for increased capacity, range, and multimedia support, offers a proven platform for audio sharing. There are several methods and applications available that allow one to stream audio from pre-existing non-live recorded sources wirelessly over a Wi-Fi network, including Esinkin, SonoBus, LimeOnAir, and VLC in non-real-time.

Fi-Live(TM) is a native 24-bit, 48 kHz audio streaming system over 802.11 Wi-Fi. By harnessing the inherent benefits of Wi-Fi, it provides lower latency, higher bandwidth, longer range, and fully networkable multichannel possibilities, offering a system for converting analogue sources such as instruments into real-time Wi-Fi packets.
By leveraging the 10 billion Wi-Fi-capable phones, tablets, and PCs in use today, Fi-Live offers a common platform for audio recording and distribution with a ready-to-go audience and a massive range of software editing and mixing options. Unlocking low-latency multichannel audio over Wi-Fi will radically reduce the cabling and setup complexity in these use cases. Compliance with the 802.11 standard already integrated into existing third-party hardware will remove the need for a physical interface device such as a mixer, moving this into a software capability within a Wi-Fi device such as a PC or tablet. The inherent networking makes Wi-Fi a simple routing option, and the removal of cables reduces the noise contribution of the physical cabling paths and would allow digital DSP to provide the effects, EQ, and volume control capability.

5.2 Bandwidth

A major consideration for audio applications is quality, and a key complaint about existing wireless standards in this space, such as Bluetooth, is compression. With restricted bandwidth, Bluetooth relies on audio codecs for compression and decompression of audio signals. Different codecs, such as SBC, AAC, aptX, and LDAC, offer varying levels of audio quality. These codecs are lossy, discarding much of the audio data: a CD-quality source (16-bit resolution, 44.1 kHz sampling, requiring a bandwidth of 1.4 Mbps) is typically reduced to around 300 kbps, with codecs like AAC running at a maximum of 250 kbps19. The discarded data is generally audio that the human ear is less likely to detect, such as a soft sound in the presence of a similar but louder sound, but the result is widely perceived as a less rich and tonal sound.
Wi-Fi has substantially higher bandwidth available and therefore does not need compression, with capacity for better-than-CD-quality transmissions. Work on 802.11e implementations showed that a Wi-Fi PHY setting of >2 Mbps was appropriate for stereo applications, while for 5.1-channel home theatre applications this value should be 11 Mbps20. Tests have shown the Pure Audio Blu-ray format, or AES-21id-2011, with eight independent audio streams of simultaneous 24-bit audio samples at the higher rate of 192 kHz, being successfully carried over 802.11n21.

5.3 Collisions and Packet Loss

The medium access control (MAC) layer is responsible for coordinating transmissions at the link layer (OSI Layer 2), controlling end-to-end delay, collisions, power consumption, and overall throughput. Studies have shown that the main source of distortion in Wi-Fi-based audio systems is packets being permanently lost, generating gaps and loss of synchronisation22. Furthermore, in a real-time setting, we must also consider packets late beyond a given threshold to be lost: in playback, a packet delayed by more than 4 ms should be discarded. A test setup running an 802.11n network in UDP mode, with optimised buffering, streaming stereo 48 kHz audio showed an average packet loss of 0.8%. Increasing to four-channel audio streaming saw packet loss increase to 3%23. Understanding the distribution of these losses is also key: the 802.11n MAC is not optimised for real-time audio, so collisions, transmission jitter, and the scheduling of non-audio network tasks such as housekeeping can cause bursty periods of packet loss.

5.4 Error Correction

As a packetised system, when a packet fails to arrive, reaches the receiver in a corrupted state, or arrives too late to be useful for the audio stream, steps must be taken to mitigate the consequences of the missing data. If data is not available, a short gap of a few milliseconds will be audible as a "click".
Some error correction strategies involve duplication of the transmitted packets or Forward Error Correction (FEC). The trade-off with FEC is that any processing comes at the expense of time, and therefore latency; however, with improving algorithms, faster and cheaper processors, and the overall reduction in system latency, these may become an option. On the receiver side, Packet Loss Concealment (PLC) is a more appropriate solution given the potentially low packet loss rate (<3%) and the small packets involved (<6 ms). A practical approach is to consider late packets as lost, make no attempt to recover lost packets, and apply PLC.

There are three main types of PLC: insertion, interpolation, and regeneration24. Regeneration is the most complex and computationally intensive; the potential results are good and offer substantial areas for future research, but simpler methods may be more suitable. Data interpolation methods fill the gap based on the data on either side of it, but in a real-time system, even if the subsequent packet is already available, it is often impractical to go back and reprocess earlier material. Insertion is a quick and coarse solution, often relying on the human brain's ability to self-correct issues, simply replacing a lost packet with a filler, for example silence (zero insertion), white noise, or "comfort noise"25. This comfort noise could be a repeated packet or a modified previous packet based on recent activity in the stream. A mix of these correction methods would likely provide the best overall solution, balancing fidelity, latency, and processing load.

5.5 Proposed Setup

To optimise data gathering with minimum disruption, we propose using existing Fi-Live hardware to interface with, and provide phantom power to, a Behringer ECM8000 microphone. The resulting entirely wireless solution will be placed remotely, connected over Wi-Fi to a conventional HP laptop.
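The "treat late packets as lost, then conceal by insertion" approach described above can be sketched as follows. This is an illustrative toy, not the Fi-Live implementation: the 4 ms deadline is taken from the playback threshold discussed in Section 5.3, while the 0.5 attenuation factor applied to the repeated packet is an arbitrary illustrative choice.

```python
import numpy as np

PLAYBACK_DEADLINE_MS = 4.0  # arrival later than this counts as lost

def conceal_stream(packets, packet_len):
    """Insertion-based packet loss concealment (a sketch).
    Each element of `packets` is (samples, delay_ms), or None if the packet
    never arrived. Late packets are reclassified as lost; each lost slot is
    filled with an attenuated repeat of the previous frame ("comfort noise"),
    decaying towards silence over consecutive losses."""
    out = []
    prev = np.zeros(packet_len)
    for pkt in packets:
        if pkt is None or pkt[1] > PLAYBACK_DEADLINE_MS:
            frame = prev * 0.5          # insertion: modified previous packet
        else:
            frame = pkt[0]
        out.append(frame)
        prev = frame
    return np.concatenate(out)
```

A hybrid receiver might switch between such insertion and interpolation depending on how much buffered context is available, trading fidelity against latency and processing load as the text suggests.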
A simple GUI will be developed allowing the measurer to click on a schematic of the venue, marking where the microphone is placed. Running a command will then execute the previously discussed suite of tests for that location. The Fi-Live microphone setup can then be repositioned at will, and the measurement repeated over a large number of sample points.

6 CONCLUSION

This paper has presented a novel wireless method for acoustic profiling of rooms and audio spaces, proposing a significant advancement in data collection for large and culturally sensitive environments. By leveraging Internet of Sounds principles, this method addresses the limitations of traditional cable-based systems, offering a flexible and time-efficient alternative for capturing room impulse responses and reverberation data. The Aural Histories project provides a valuable testbed, demonstrating the potential of this approach to enhance the acoustic measurement, modelling, and auralisation of dry-captured historical musical performances within historic spaces. The exploration of recording methods for historically informed musical performance further complements the proposed technique, ensuring a comprehensive approach to accurate acoustic reconstruction.

Future research will focus on refining the wireless method, including addressing potential challenges related to signal interference, packet loss, and latency in wireless data transmission. Further development is needed to enhance error correction protocols and optimise audio quality in real-time applications. Additionally, expanding the wireless system to accommodate multichannel audio and investigating its use in assessing the acoustic accuracy of environments for immersive auralisation will be key areas of exploration.
7 ACKNOWLEDGMENTS

The authors would like to acknowledge the support of the Arts and Humanities Research Council (AHRC) for funding the Aural Histories: Coventry 1451-1642 project, which provided the foundation for the research presented in this paper. Their generous support has been invaluable in exploring the acoustic environments of Coventry's historic buildings.

8 REFERENCES

1. Foteinou A, Murphy DT. Perceptual validation in the acoustic modeling and auralisation of heritage sites: The acoustic measurement and modelling of St Margaret's Church, York, UK. In: Proceedings of the Conference on the Acoustics of Ancient Theatres; 2011:18–21.
2. Monckton L. Coventry: Medieval Art, Architecture and Archaeology in the City and Its Vicinity: Volume 33. Routledge; 2017.
3. Boren B. Word and mystery: The acoustics of cultural transmission during the Protestant Reformation. Front Psychol. 2021;12.
4. Boren B, Longair M. A method for acoustic modeling of past soundscapes. In: Proceedings of the Acoustics of Ancient Theatres Conference, Patras, Greece; 2011:18–21.
5. Ciaburro G, Iannace G, Trematerra A, Lombardi I, Abeti M. The acoustic characteristics of the "Dives in Misericordia" Church in Rome. Build Acoust. 2021;28(2):197–206.
6. Sygulska A, Czerniak T, Czarny-Kropiwnicki A. Experimental investigations and computer simulations to solve acoustic problems in the modern church. Eng Struct Technol. 2018;10(1):34–45.
7. Cook J, Kirkman A, McAlpine K, Selfridge R. Hearing historic Scotland: Reflections on recording in virtually reconstructed acoustics. J Alamire Found. 2023;15(1):109–126.
8. Moore G, West B, Ali-MacLachlan I. The historical auralisation of Lichfield cathedral's quire. In: Proceedings of Acoustics 2024; 2024.
9. Ţopa MD, Toma N, Kirei BS, Homana I, Neag M, De Mey G. Comparison of different experimental methods for the assessment of the room's acoustics. Acoust Phys. 2011;57:199–207.
10. Farina A. Advancements in impulse response measurements by sine sweeps. In: Audio Engineering Society Convention 122. Audio Engineering Society; 2007.
11. Farina A. Simultaneous measurement of impulse response and distortion with a swept-sine technique. In: Audio Engineering Society Convention 108. Audio Engineering Society; 2000.
12. Foteinou A, Murphy DT. Multi-positional acoustic measurements for auralization of St Margaret's Church, York, UK. In: Forum Acusticum; 2014.
13. Haigh C, Dunkerley J, Rogers M. Classical Recording: A Practical Guide in the Decca Tradition. Focal Press; 2020.
14. Massey H. The Great British Recording Studios. Hal Leonard Corporation; 2015.
15. Beghin T, Haydn J, Francisco M de, Woszczyk W, Litz R, Tusz J. The virtual Haydn: Complete works for solo keyboard. Naxos; 2011. Accessed October 8, 2024. http://www.naxosmusiclibrary.com
16. Bouillot N, Cohen E, Cooperstock JR, et al. AES white paper: Best practices in network audio. J Audio Eng Soc. 2009;57(9):729–741.
17. Lester M, Boley J. The effects of latency on live sound monitoring. In: Audio Engineering Society Convention 123. Audio Engineering Society; 2007.
18. Khorov E, Levitsky I, Akyildiz IF. Current status and directions of IEEE 802.11be, the future Wi-Fi 7. IEEE Access. 2020;8:88664–88688.
19. Specifications and Document. Bluetooth® Technology Website. Accessed September 27, 2024. https://www.bluetooth.com/specifications/specs/
20. Floros A, Karoubalis T. Delivering high-quality audio over wireless LANs. In: Audio Engineering Society Convention 116. Audio Engineering Society; 2004.
21. Nikkilä S. Introducing wireless organic digital audio: A multichannel streaming audio network based on IEEE 802.11 standards. In: Audio Engineering Society Conference: 44th International Conference: Audio Networking. Audio Engineering Society; 2011.
22. Tatlas NA, Floros A, Zarouchas T, Mourjopoulos J. WLAN technologies for audio delivery. Adv Multimed. 2007;2007:1–16.
23. Gabrielli L, Squartini S, Piazza F. Advancements and performance analysis on the wireless music studio (WeMUST) framework. In: Audio Engineering Society Convention 134. Audio Engineering Society; 2013.
24. Perkins C, Hodson O, Hardman V. A survey of packet loss recovery techniques for streaming audio. IEEE Netw. 1998;12(5):40–48.
25. Rottondi C, Chafe C, Allocchio C, Sarti A. An overview on networked music performance technologies. IEEE Access. 2016;4:8823–8843.