
Proceedings of the Institute of Acoustics

 

And then it turned outside in: New insights into spatial game audio

 

M. Dalgleish, SUGI, Staffordshire University, Stoke-on-Trent, Staffordshire, UK

 

1 INTRODUCTION

 

Over the past 50 years, computer games have advanced significantly, evolving from early text-based titles and sprite graphics to 3D worlds, massively multiplayer online games, and virtual reality. For much of this history graphical improvements have been prioritised, but sound for video games has also progressed, and the academic field of video game sound has developed considerably over the last two decades, spurred and supported by foundational texts from Grimshaw,1 Collins,2 and Farnell,3 amongst others. Nevertheless, Probert cautions that “[a]s a comparatively young field, theoretical research into sound in video games is far from comprehensive and as such the associated literature is quite foundational in nature.”4 There has been little exploration of the intersection between music production and video game sound, especially their spatial aspects.

 

Drawing on a range of literature, this paper starts by examining what Harris5 terms a turning ‘inside out’ in two different musical contexts: electroacoustic composition and alternative rock. Turning inside-out refers to a sustained expansion of sound sources through artificial processing, beginning in the 1960s and eventually leading to a shift from distinct, recognisable sound objects that are placed within a background sound space, to immersive sound environments that envelop both performer and audience. Crucially, the latter environment is not only seen to be more fluid and less controllable, but also to have become a primary compositional and experiential focus.

 

This initial exploration provides a basis for the subsequent examination of a related phenomenon in the context of video game sound, with a focus on two exemplary games: Thief: The Dark Project6 (hereafter Thief) and Tom Clancy’s Rainbow Six Siege7(hereafter Siege). In both titles, sound is a fundamental part of player experience, but the underlying role of sound will be shown to have shifted or, rather, turned ‘outside-in’. The implications and legacy of this shift will also be considered.

 

2 THE MUSIC CONTEXT

 

2.1 Processing as Expansion

 

The industrialisation of electricity in the late 19th century had a significant effect on musical instrument design, leading to the creation of electronic instruments such as the Telharmonium and theremin,8 but also to electrified versions of acoustic instruments. The latter, driven by the need to amplify acoustically quieter instruments in groups and ensembles,9 in turn enabled artificial sound processing, expanding what Varèse called the search for new sounds.10

 

Artificial sound processing concerns changing the characteristics of an audio signal in some way, or, as Wilmering et al. put it, “the controlled transformation of a sound typically based on some control parameters.”11 Many of the first effects processors were inspired by earlier physical or mechanical processes, sometimes incorporating experienced or intentionally induced malfunctions.12,13 This includes the Maestro Fuzz-Tone FZ-1 (1962), the first effect to enter mass production,14,15 and the flanger effect popularised by The Beatles’ “Tomorrow Never Knows”,16 as well as numerous other effects that became mainstays of subsequent music production.

 

Audio effects technologies evolved quickly from these starting points, often driven by transectorial innovation.11 Wilmering et al. identify three distinct stages of their development in little over two decades: electromechanical, electronic, and digital (hardware).11 Pfaff et al.17 add digital (software) as an additional stage, as software is typically far more flexible than hardware.

 

It has been extensively discussed that, as sound quality improved in the digital era, a desire grew for older, lower-fidelity technologies.12 The reasons for this are beyond the scope of this paper, but it is notable that the differences between effects are usually emphasised, sometimes down to minutiae. At the same time, there has been little consideration of how different effect types (particularly formative types that still exist today) share similar goals in terms of expanding the presence of a given sound source.

 

To this end, consider distortion and Artificial Double Tracking (ADT) effects. As a frequency-domain effect and a time-based effect respectively, they are, to state the obvious, different in many ways. Nevertheless, if distortion is described as contributing to “an increased density of sound” and, depending on the type and amount of distortion, is variously characterised as ‘thick’, ‘heavy’, ‘gritty’, and ‘rich’,18 the aims of ADT are not so far removed. That is, the effect relies on a short delay between two signals to produce a noticeably fuller or ‘fatter’ tone at the output.19 Various other delay-based effects bestow related properties. Flanging and phasing are seen to have a ‘thickening’ effect, while chorus can create several layers of duplicate sound, adding depth.20 We will return to this idea of increasing the presence (perceived size) of sound sources shortly.
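
To make the comparison more concrete, the following Python sketch (which assumes NumPy and uses illustrative drive, delay, and mix values rather than any documented settings) shows how a simple waveshaping distortion and a static ADT-style delay each thicken a plain source in their respective domains.

import numpy as np

SAMPLE_RATE = 44100  # Hz; assumed for this illustration

def distort(signal, drive=8.0):
    """Waveshaping distortion: soft clipping adds harmonics,
    producing the denser, 'thicker' tone described above."""
    return np.tanh(drive * signal) / np.tanh(drive)

def adt(signal, delay_ms=30.0, mix=0.5):
    """ADT-style effect: sum the dry signal with a short-delayed copy
    to emulate a second, near-identical take (values are illustrative)."""
    delay_samples = int(SAMPLE_RATE * delay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(delay_samples), signal])[:len(signal)]
    return (1.0 - mix) * signal + mix * delayed

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)
    source = 0.5 * np.sin(2 * np.pi * 220.0 * t)  # stand-in for a guitar note
    thickened = adt(distort(source))
    print(thickened.shape)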

 

While many effects processes were refined and, generally, processors were miniaturised over time, the practice of chaining effects together started soon after the advent of effects processors. Aided by the development of pedalboards to help manage routing and power complexities, substantial effects chains became commonplace even (and arguably especially) in a performance context. By the 1980s, chains of effects could transform almost any sound source beyond recognition. However, rather than simply broadening the sonic palette, Théberge argues that advancements in sound transformation also spurred the development of fundamentally new instrumental techniques.21 A well-documented example of this is the so-called glide technique developed by My Bloody Valentine guitarist Kevin Shields.22,23 With the output of the guitar heavily effected by backwards reverb, the technique involves subtle, rhythmic manipulation of the tremolo arm and simultaneous strumming to create a sense of gliding motion. Another prominent example is the guitar sound of U2’s The Edge. This involves precise delay times being used to repeat individual notes and build layered, cascading patterns that would otherwise be impossible with a single guitar.24,25

 

2.2 Turning Inside-Out

 

The transition from analogue electronic to digital processors, particularly the development of digital reverb systems that algorithmically replicated architectural acoustic behaviours, allowed sound sources (expanded by other effects processing) to be situated within increasingly realistic sound spaces. The digital reverberators developed by Schroeder, intended to simulate concert hall acoustics by way of series allpass filters and parallel comb filters, represented a significant first step in this direction.26 Subsequently, researchers like Moorer and Griesinger made both theoretical and practical enhancements to the Schroeder model, with a Griesinger design being commercially released as the Lexicon 224 digital reverberator.27,28
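
As a rough illustration of the Schroeder topology described above, the following Python sketch sums a bank of parallel feedback comb filters and passes the result through two series allpass filters; the delay lengths and gains are illustrative placeholders rather than Schroeder’s published values.

import numpy as np

def comb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] + (feedback * y[n - delay] if n >= delay else 0.0)
    return y

def allpass(x, delay, gain):
    """Schroeder allpass: y[n] = -gain*x[n] + x[n-delay] + gain*y[n-delay]."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        x_d = x[n - delay] if n >= delay else 0.0
        y_d = y[n - delay] if n >= delay else 0.0
        y[n] = -gain * x[n] + x_d + gain * y_d
    return y

def schroeder_reverb(x):
    """Parallel combs build echo density; series allpasses add diffusion.
    Delay lengths and gains are illustrative, not Schroeder's own values."""
    combs = [(1116, 0.84), (1188, 0.83), (1277, 0.82), (1356, 0.81)]
    wet = sum(comb(x, d, g) for d, g in combs) / len(combs)
    for d, g in [(225, 0.7), (556, 0.7)]:
        wet = allpass(wet, d, g)
    return wet

if __name__ == "__main__":
    impulse = np.zeros(22050)   # half a second at 44.1 kHz
    impulse[0] = 1.0
    tail = schroeder_reverb(impulse)  # decaying reverb tail
    print(tail[:5])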

 

In parallel and exemplified by the Moore-designed Ursa Major Space Station SST-282,29 there was also a turn away from reverb as a means of (re)creating realistic physical spaces, towards reverb as a creative or musical space in and of itself. As Blesser30 puts it: “[u]nknowingly, I was a member of an expanding generation of aural architects: electroacoustic designers who were liberating auditory space from its physical roots.” American composer Gordon Mumma hinted at this potential as early as 1961, observing: “[a]t the present time I am convinced that electronic music will reveal its greatest potential in a spatial context.”31 In turn, this move away from realism set the stage for a kind of reversal or turning inside-out of the sound space. This refers to the notion of ‘inside-out’ instruments developed by Harris,5 where technologisation has blurred prior boundaries between the musician, instrument, and audience, and led to a fundamental shift in how sound is produced, experienced, and perceived.

 

Now the body inhabits and navigates through this instrument, instead of holding it; the sound and tangibility of the sound comes from outside, rather than generated from inside the body; and the audience spectator no longer has the focal point of body-instrument-sound, but explores as one of the players. It is like a turning inside-out of the intimacy of the musician-instrument into a space inhabited by multiple performers and instruments.5

 

In conventional accounts of musicianship, the performer physically acts on and directly manipulates an instrument object to (controllably) produce sound, with (depending on instrument type) the performer's hands or mouth serving as the point of interaction. In the inside-out instrument case, the performer no longer physically holds and interacts with an instrument-object to produce sound. Instead, the performer inhabits and navigates an instrument environment that encompasses and envelops their body, creating more immersive and embodied engagement.

 

Rather than sound being generated from the body or a bodily extension (as in the case of singing or playing traditional instruments like the violin or piano, where sound is physically produced through the body-instrument interaction), Harris5 emphasises that sound is directed inwards from the outside. Thus, rather than a sound source, this suggests a more spatial, ambient interaction in a sound-filled environment; an environment where sound is external to and detached from the performer’s body. This potentially (depending on the number of participants) shifts the experience from a focused, performance-based interaction to one that is diffuse, in which both performers and audience members share the same sonic landscape. The inside-out instrument develops, Harris5 contends, “not as an exoskeleton, but as an exocentric (rather than egocentric) space of interaction.”

 

As Rebelo has written in the context of digital musical instruments (DMIs), one result of this is that the previously intimate relationship between performer and instrument, a relationship confined to the close space around the performer’s body and based on the strong haptic connection between body and instrument, is effectively ended.32 Yet, for Harris, there is also another loss of control, where “the performer can no longer command full attention over the distributed sounds and images,” and “the space itself moulds and influences the decisions and results of sound and image.”5

 

This shift from sound objects to inhabited sound space is palpable in Mumma’s “Stressed Space Palindromes”,33 created between 1977 and 1982 and a generally neglected output in his catalogue. Rather than focus on a series of electronic or electroacoustic sounds (i.e. sound sources) as expected, “Stressed Space Palindromes” instead foregrounds a series of seamlessly and randomly morphing (virtual) spaces into which sounds are passed and pressed. This choice serves to displace sound from its dual role as both primary compositional material and McLuhanesque message, and replaces it with (artificial) spatial sound simulation (processing) that wraps round and envelops the sounds within. This turning inside-out would likely have been all-the-more surprising at the time because spatial sound processing had typically been regarded as merely supplementary and was often designed for relative transparency (i.e., intended to be only subtly present).

 

A decade on from Mumma’s obscure piece, and over the course of just a few years, Shields’ pedalboard setup had grown into a far more extensive arrangement.34 Pre-empting the evolution traced by Toop’s Oceans of Sound,35 this embrace of processing enabled a transition from the lightly distorted lo-fi of Isn’t Anything36 to a sound space taken to the extreme on Loveless.37 With both Shields and second guitarist Bilinda Butcher adding layers of textural guitars that slip in and out of focus in an almost entirely mono mix, the end result has been described as both beautiful and ferocious.38 The band’s live shows, especially the prolonged ‘Holocaust’ section of “You Made Me Realise”,39 only extended this brutalising yet intimate embrace to the entire body.40 Yet, more significant is how this sound wraps around the listener, like Mumma’s “Stressed Space Palindromes.” As Shaviro describes, “It surrounds you, envelops you, enfolds itself around you.”40

 

3 GAME SOUND

 

3.1 Evolution

 

Although the simple beep of Atari’s Pong arcade game (1972) unexpectedly became iconic, by 1980, the inclusion of dedicated Programmable Sound Generators (PSGs) such as the AY-3-8910 and SN76489 enabled tonal music and more elaborate sound effects to develop.2 These were soon surpassed by the five-channel PSG used in the Nintendo Famicom (1983).2,41 However, despite these advancements, there remained little consideration of the dynamic balance between sounds.2

 

The 16-bit era changed little in terms of song structures, but developments such as FM synthesis, wavetable synthesis, and basic digital sampling enabled more flexible and realistic sound generation.2 More significant was the adoption of the Musical Instrument Digital Interface (MIDI) protocol and agreement of a General MIDI (GM) standard aimed at ensuring consistency across devices.2 The likes of Monkey Island 2: LeChuck’s Revenge42 exploited the possibilities of MIDI to create early examples of dynamic music systems,43 and, although sound quality actually varied in practice, most home computers of the early 1990s had an FM soundcard that supported MIDI.2

 

Following the arrival of CD-ROM technology, which enabled the use of high-quality audio recordings at the cost of generally encouraging more linear approaches, the Diamond Monster Sound (1997) was the first PC soundcard to support 3-D sound.44 In the following decade, 3-D audio became more integral to game design, and more sophisticated but also computationally demanding simulations emerged, particularly as real-time games grew in popularity. Probert4 identifies three generalised approaches that evolved, often focusing on the production of an equivalent perceptual experience instead of the detailed and accurate 3-D simulation of all sound waves in a stated space. Grimshaw1 contends that these developments in spatial sound not only improved player immersion, but also drove new types of gameplay, most obviously within the first person shooter (FPS) genre.

 

3.2 Outside-In

 

To date, most accounts of video game sound have been chronological, like the above. If a turning inside-out can be identified in music, and in this domain the development of effects spurred new instrumental techniques,21 can something similar be identified in a game sound context, and has this turn spurred new kinds of gameplay?

 

Thief is of particular interest here for how fully it employed sound for various forms of communication. For instance, AI-controlled enemies could ‘hear’ sounds made by both the player and other enemies, thereby enabling them to react to various kinds of environmental stimuli and scenarios. Although relatively simple, this approach aided immersion by expanding the game world beyond the visible, required players to minimise their noise to avoid detection, and led to emergent AI behaviours that might have been difficult to achieve through other methods.

 

AIs communicated with each other almost exclusively through sound. AI speech and sounds in the world, such as the sound of swords clashing, were assigned semantic values. In a confrontation, the player could expect nearby AIs to become alarmed by the sound of combat or cries for help, and was thus encouraged to ambush opponents as quietly as possible.45
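
By way of illustration (and not as a reconstruction of the Dark Engine’s actual code), the following Python sketch shows how sound events carrying a semantic label and an audible radius might drive simple changes of AI state, in the spirit of the behaviour described above. The class names, labels, and distances are invented for the example.

from dataclasses import dataclass

@dataclass
class SoundEvent:
    """A gameplay sound with a semantic tag and an audible radius (metres).
    Hypothetical structure for illustration only."""
    semantic: str      # e.g. "combat", "cry_for_help", "footstep"
    position: tuple    # (x, y) world coordinates
    radius: float      # how far the event can be 'heard'

class GuardAI:
    def __init__(self, position):
        self.position = position
        self.state = "patrol"

    def hear(self, event: SoundEvent):
        """React to a sound event only if it is within earshot."""
        dx = event.position[0] - self.position[0]
        dy = event.position[1] - self.position[1]
        if (dx * dx + dy * dy) ** 0.5 > event.radius:
            return  # too far away: the guard never notices
        if event.semantic in ("combat", "cry_for_help"):
            self.state = "alert"
        elif event.semantic == "footstep" and self.state == "patrol":
            self.state = "suspicious"

# A clash of swords alarms any guard within 15 metres of the fight.
guards = [GuardAI((0.0, 5.0)), GuardAI((40.0, 0.0))]
clash = SoundEvent("combat", position=(0.0, 0.0), radius=15.0)
for g in guards:
    g.hear(clash)
print([g.state for g in guards])  # ['alert', 'patrol']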

 

To achieve these intended behaviours, a new audio system was developed for Thief that enabled sound to propagate as a series of flows passing through ‘room brushes’ (discrete sound zones). These room brushes could be linked to form pathways for sound to travel, simulating realistic propagation behaviour within the game environment.46

 

The defining of these sound spaces is part of the level design process. Using DromEd, the Thief level editor, a designer would first create basic geometric brushes to define physical structures like walls. These are then enclosed within room brushes, which define specific architectural spaces such as rooms, hallways, and open areas. A portalisation process subsequently identifies doorways, windows, and other openings between room brushes that enable sound to travel from one room brush to another. The game engine uses these connections to calculate how sound will propagate through the level. For instance, if there is no direct connection, a sound source will not extend beyond the room brush where it originated. However, if two room brushes are linked by a portal (such as a door or window), sound can travel between them but is modified by the geometry of the portal and its material properties. If multiple room brushes are added to a level, DromEd generates a room database that calculates how sound will propagate, taking into account the spatial geometry and the connections between brushes.47 It is important to emphasise that outright realism was not necessarily the aim of the developers, but these flows are seen as key to the creation of more natural sonic behaviours.46 Environmental Audio Extensions (EAX) are optionally supported to add specific characteristics per room brush,47 but do not fundamentally change the underlying paradigm.
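
The propagation logic described above can be approximated as a traversal of a graph in which room brushes are nodes and portals are edges. The Python sketch below is a simplified illustration of this idea, using hypothetical room names and per-portal attenuation values rather than anything drawn from the Dark Engine’s actual room database.

from collections import deque

# Hypothetical room brushes as graph nodes; portals as edges carrying an
# attenuation factor that, in a real engine, would reflect portal size and material.
portals = {
    "library":   [("hallway", 0.6)],
    "hallway":   [("library", 0.6), ("courtyard", 0.4)],
    "courtyard": [("hallway", 0.4)],
    "vault":     [],  # no portals: sound never leaves or enters
}

def propagate(source_room, loudness, threshold=0.05):
    """Breadth-first flow of a sound through linked room brushes,
    attenuating at each portal and stopping once below the threshold."""
    heard = {source_room: loudness}
    queue = deque([source_room])
    while queue:
        room = queue.popleft()
        for neighbour, attenuation in portals.get(room, []):
            level = heard[room] * attenuation
            if level > threshold and level > heard.get(neighbour, 0.0):
                heard[neighbour] = level
                queue.append(neighbour)
    return heard

# A sword clash (loudness 1.0) in the library is faintly audible in the
# courtyard but never reaches the unconnected vault.
print(propagate("library", 1.0))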

 

Overall, sound in Thief operates as a dynamic system of flows and exchanges from which both the player and AI-controlled enemies extract auditory cues to navigate the environment and respond to the sounds around them. However, the possibilities are largely fixed by the level designer, and the player is provided with only a limited set of possibilities for manipulation. There is also generally a distanced quality in that, rather than operating in proximity to the player, sounds are typically heard in the distance, around corners, and so on. Indeed, up close, the affordances of sounds are considerably more limited: hiding and then knocking over a vase on the other side of a room can momentarily distract or confuse an enemy AI, but an essentially binary series of play states is exposed. Players are either undetected or detected; if detected, in cover or exposed; and, if exposed, either a stationary target facing a near-certain demise or moving sufficiently rapidly to have a chance of escape.

 

With Siege, sound becomes a malleable, almost three-dimensional substance that closely envelops the player from all directions, almost to the point of tactility. These properties make it possible for sound to become a full and integral part of player agency. In other words, the core gameplay of Siege not only features dynamic, real-time processing of audio cues, but actively depends on the intentional and controlled manipulation and exploitation of these by the player. This marks a significant shift in the potential of game audio, and, as in the Thief case, its implementation necessitated the development of a novel audio system. It is important to note that true spatial audio is not implemented, and therefore all spatial characteristics are simulated.

 

Where most FPS games apply a simple filter effect to sounds originating from behind surfaces to create a muffled effect, Siege uses a more complex system to process sound cues based on simulated spatial acoustic phenomena including obstruction, occlusion, reflection, and refraction. Sound cues are still muffled if the player character directly interacts with a surface (for example, placing a breaching charge). Yet rather than assume that all sound is propagated from behind that surface, Siege traces the shortest possible path between the sound source and the head (ears) of the player character. For example, if a player creates a hole in a wall, sound will subsequently travel through that opening to the player if this newly created path now offers a shorter route.49 The modified sound might offer the player more distinct auditory information about, for instance, the movements of nearby (but unseen) enemies, or where on the other side of the wall explosive charges are being planted. As one player notes,50 “[t]his is why you sometimes see people shotgun the tops of soft walls. It creates an alternative shorter path so you can hear through the wall more clearly and allows you to pinpoint where in the room a sound may be coming from.” The quality of auditory information reaching the player might be further modified by either enlarging the hole or by making another hole.
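
One way to picture this behaviour is as a weighted graph search in which every route from source to listener accumulates a ‘muffling’ cost, and breaching a wall simply adds a cheaper route. The Python sketch below applies Dijkstra’s algorithm to a hypothetical node graph to illustrate the principle; it is not a reconstruction of Ubisoft’s implementation, and the node names and costs are invented for the example.

import heapq

def shortest_path_cost(graph, source, listener):
    """Dijkstra over propagation nodes; edge weights model how much a
    route muffles the sound (solid wall >> open air). Illustrative only."""
    costs = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == listener:
            return cost
        if cost > costs.get(node, float("inf")):
            continue
        for neighbour, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < costs.get(neighbour, float("inf")):
                costs[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return float("inf")

# Before breaching: the only route runs through the wall (heavily muffled).
graph = {
    "enemy":  [("wall", 8.0)],
    "wall":   [("player", 8.0)],
    "player": [],
}
print(shortest_path_cost(graph, "enemy", "player"))   # 16.0: very muffled

# After shotgunning a hole in the soft wall: a shorter, clearer path exists.
graph["enemy"].append(("hole", 1.0))
graph["hole"] = [("player", 1.0)]
print(shortest_path_cost(graph, "enemy", "player"))   # 2.0: far clearer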

 

More conventionally, environmental sound in Siege is also modified by the properties of materials in the environment. For example, footsteps on carpet will sound noticeably different to similar footsteps on a hard stone floor. This means that, even without the implementation of true spatial audio, it is possible for the player to locate sounds in the vertical plane.49

 

4 DISCUSSION

 

If convincing examples relevant to Harris’ concept of turning inside-out5 can be found in a musical context, comparing Thief to Siege reveals a related yet essentially inverse shift in video game sound. In this transformation, sound moves inward from an external perspective to a more intimate one, drawing it closer to the player both psychologically and materially.

 

In Thief, sound plays a significant and arguably crucial role in gameplay, but it primarily exists as an external, environmental element to which the player responds. This includes the creaking of floorboards, distant footsteps, and echoing voices: sounds that are atmospheric and likely to aid immersion but are often remote (i.e. experienced at a distance) and sometimes disembodied. A secondary function of sound is tied to player-object interactions and the manipulation of certain objects. While this gives the player some limited freedom to decide how sound is used, the possibilities are largely predetermined by the designer, and player affordances are also largely binary. Moreover, the core stealth gameplay means that, rather than actively manipulating sound, the player often has more incentive to make no sound at all.

 

In contrast, Siege makes sound malleable by integrating it with a highly destructible environment that is both fragile and brought to life through acoustic activation. The boundaries of this space are shaped not only by visible surfaces and structures but also by the presence and movement of sound. Furthermore, sound is drawn inward, placing it firmly within the player’s control. Almost every aspect of this architectural substance can be manipulated and reshaped in real time, allowing players to gain subtle strategic advantages or create deadly traps, whether by enhancing their own auditory environment or by distorting that of opponents. The effect is that the external soundscape that once dictated the player’s experience is internalised, made responsive, and subject to player actions.

 

Although there are parallels, these qualities differ from Harris’ concept of inside-out instruments. First, while Siege offers a kind of aural primacy, Harris sees inside-out space as inherently audiovisual, where navigation relies more on visual than auditory cues. Second, Harris sees inside-out space as offering equal engagement for both skilled players and unskilled spectators, noting that the audience “explores as one of the players.” By contrast, the outside-in environment of Siege offers nearly limitless opportunities for skilled players to manipulate sound and gain an edge over others, as well as allowing third party means of measuring and documenting virtuosity.51

 

Grimshaw1 was amongst the first theorists to focus on sound in FPS titles. Rather than a single or fixed relationship between player and the in-game sounds that collectively constitute its soundscape, he describes an acoustic ecology: a pluralistic network of relationships that shift based on player actions and surroundings. At the same time, a key characteristic of sound is its intangibility, and without fixed reference points (geometry) to ground it, the acoustic ecology would be nebulous and unclear. It is the presence of these relatively stable reference points and their interactions with simulated acoustic phenomena that enable the player to parse the soundscape and derive meaning.

 

We can gain some insight into this by playing The Devil’s Tuning Fork,52 a monochrome first-person exploration game that challenges players to navigate their surroundings using a visual interpretation of 3-D sound (the more recent title Limb53 uses a very similar mechanic and aesthetic). The player is equipped with a handheld device that emits rolling, animated sound waves that reveal the largely geometric features of the environment and the objects within it. The player can only navigate and interact with the world by constantly reactivating the space with visual sound, but, at the same time, these visual sound waves only derive their meaning from the fixed surfaces and objects they encounter.

 

While the Siege audio system has not been copied or taken forward exactly, it has notably influenced subsequent titles. Call of Duty: Modern Warfare54 included similar directional audio systems. Valorant55 has used sonic details as a primary form of communication. Battlefield V56 has employed environmental destruction as a key part of gameplay and uses associated sound processing and sound cues to convey changes to players. Perhaps more important than any of these details is that the turning outside-in exemplified by Siege has changed contemporary expectations around audio, its possibilities, and its centrality to gameplay.

 

5 REFERENCES

 

  1. M. Grimshaw. The Acoustic Ecology of the First Person Shooter. PhD Dissertation, University of Waikato. (2007).

  2. K. Collins. Game Sound: An Introduction to the History, Theory, and Practice of Video Game Music and Sound Design. The MIT Press. (2008).

  3. A. Farnell. Designing Sound. The MIT Press. (2010).

  4. B. K. Probert. Video Game Acoustics: Perception-Based Sound Design for Interactive Virtual Spaces. PhD Dissertation, University of Adelaide. (2020).

  5. Y. Harris, ‘Inside-out instrument’, Contemporary Music Review, 25(1-2), 151-162. (2006).

  6. Eidos Interactive. Thief: The Dark Project. (1998).

  7. Ubisoft. Tom Clancy’s Rainbow Six Siege. (2015).

  8. P. Manning. Electronic and Computer Music, 3rd ed. Oxford University Press. (2013).

  9. K. Devine, ‘Imperfect Sound Forever: Loudness Wars, Listening Formations and the History of Sound Reproduction’, Popular Music 32(2), 159-176. (2013).

  10. E. Varèse and C. Wen-chung, ‘The Liberation of Sound’, Perspectives of New Music, 5(1), 11-19. (1966).

  11. T. Wilmering, D. Moffat, A. Milo and M. B. Sandler, ‘A History of Audio Effects’, Appl. Sci. 10(3), 791. (2020).

  12. H. Davies, ‘Creative Explorations of the Glitch in Music’, Proc. 2004 IEEE Conference on the History of Electronics, Bletchley Park. (2004).

  13. G. Milner. Perfecting Sound Forever: The Story of Recorded Music. Granta Books. (2010).

  14. D. Morrin. Maestro Fuzz Tone. (u.d.). https://sites.google.com/site/davidmorrinoldsite/home/trouble/troubleeffects/maestro-fuzz-tone

  15. C. Mead, G. Bromham and D. Moffat. ‘A History of Distortion in Music Production’. In G. Bromham and A. Moore (Eds.), Distortion in Music Production. Routledge, 3-12. (2023).

  16. K. McDonald and S.H. Kauffman. ‘‘Tomorrow never knows’: the contribution of George Martin and his production team to the Beatles’ new sound’. In R. Reising (Ed.), Every Sound There Is: The Beatles’ Revolver and the Transformation of Rock and Roll. Routledge. (2017). https://www.taylorfrancis.com/chapters/edit/10.4324/9781351218702-9/

  17. M. Pfaff, D. Malzner, J. Seifert, J. Traxler, H. Weber and G. Wiendl, ‘Implementing Digital Audio Effects Using a Hardware/Software Co-design Approach’. Proc. 10th Int. Conference on Digital Audio Effects (DAFx), Bordeaux. (2007).

  18. M. Buffa and J. Lebrun. ‘A Browser-based WebAudio Ecosystem to Dynamically Play with Real-time Simulations of Historic Guitar Tube Amps and Their Typical Distortions’. In G. Bromham and A. Moore (Eds.), Distortion in Music Production. Routledge, 28-46. (2023).

  19. J. Olivier, ‘The Diverting of Musical Technology by Rock Musicians: The Example of Double Tracking’, Popular Music 18(3), 357-365. (1999).

  20. C. Roads. The Computer Music Tutorial, 2nd ed. The MIT Press. (2023).

  21. P. Théberge. Any Sound You Can Imagine: Making Music/Consuming Technology. Wesleyan University Press. (1997).

  22. M. Leonard. ‘How Kevin Shields and My Bloody Valentine changed the course of guitar playing forever’. Guitar.com. (2021). https://guitar.com/features/opinion-analysis/how-kevin-shields-my-bloody-valentine-changed-guitar-playing/

  23. M. McGonigal. Loveless. Bloomsbury. (2010).

  24. T. Koozin. ‘Counterpoint and Expression in the Music of U2’. In Moylan, W., Burns, L., and Alleyne, M. (Eds.), Analyzing Recorded Music: Collected Perspectives on Popular Music Tracks (1st ed.). Focal Press. (2022).

  25. D. Kootnikoff. U2: A Musical Biography. Bloomsbury. (2010).

  26. M. R. Schroeder, ‘Natural sounding artificial reverberation’, J. Audio Eng. Soc. 10(3), 219-223. (1962).

  27. J. A. Moorer. ‘About This Reverberation Business’. Rapports IRCAM, Centre Georges Pompidou. (1985).

  28. V. Valimaki, J. D. Parker, L. Savioja, J. O. Smith and J. S. Abel, ‘Fifty Years of Artificial Reverberation’, IEEE Transactions on Audio, Speech, and Language Processing, 20(5), 1421-1448. (2012).

  29. S. Costello. Stability through Time Variation: Ursa Major Space Station. (2010). https://valhalladsp.com/2010/05/14/stability-through-time-variation-ursa-major-space-station/

  30. B. Blesser and L.-R. Salter. Spaces Speak, Are You Listening? Experiencing Aural Architecture. The MIT Press. (2007).

  31. G. Mumma and M. Fillion (Eds.). Cybersonic Arts: Adventures in American New Music. University of Illinois Press. (2015).

  32. P. Rebelo, ‘Haptic Sensation and Instrumental Transgression’, Contemporary Music Review 25(1-2), 27-35. (2006).

  33. Brainwashed Recordings. Various Artists: Wire in the Ear. Brainwashed 004. (2002).

  34. Guitar.com. Rig Diagram: Kevin Shields, My Bloody Valentine (1991). https://guitar.com/rig-diagrams/rig-diagram-kevin-shields-my-bloody-valentine-1991/

  35. D. Toop. Oceans of Sound: Aether Talk, Ambient Sound and Imaginary Worlds. Serpent’s Tail. (2005).

  36. Creation Records. My Bloody Valentine. Isn’t Anything. CRECD40. (1988).

  37. Creation Records. My Bloody Valentine. Loveless. CRECD60. (1991).

  38. K. R. Martin. ‘My Bloody Valentine Loveless’. The Quietus. (u.d.). https://thequietus.com/interviews/bakers-dozen/kevin-richard-martin-bakers-dozen-favourite-albums/12/

  39. J. Hadfield. ‘Dommune takes a new direction with My Bloody Valentine gig’. The Japan Times. (2013). https://www.japantimes.co.jp/culture/2013/09/25/music/dommune-takes-a-new-direction-with-my-bloody-valentine-gig/

  40. S. Shaviro. Doom Patrols: A Theoretical Fiction About Postmodernism. Serpent’s Tail High Risk Books. (1997).

  41. K.C. Collins, ‘In the Loop: Creativity and Constraint in 8-bit Video Game Audio’, Twentieth Century Music 4(2), 209-227. (2008).

  42. LucasArts. Monkey Island 2: LeChuck’s Revenge. (1991).

  43. M. Sweet. Writing Interactive Music for Video Games: A Composer's Guide. Addison-Wesley Professional. (2014).

  44. R. Broida, ‘The Monster Sound card surrounds you with sound’, Computer Shopper 16(10), 269. (1997).

  45. T. Leonard. ‘Postmortem: Thief: The Dark Project’. Game Developer. (1999). https://www.gamedeveloper.com/design/postmortem-i-thief-the-dark-project-i

  46. vfig. ‘Thread: Room Bushes - How do they actually work? (from a technical standpoint)’. TTLG Forums. (2021). https://www.ttlg.com/forums/showthread.php?t=151206

  47. Looking Glass Studios. Thief Dromed Official Tutorial. (u.d.). http://www.thief-thecircle.com/teg-old/guides/official/tutor.asp

  48. Rainbow Six Siege. Console Audio Update. (2022). https://x.com/Rainbow6Game/status/1537875089110310912

  49. Ubisoft. Tom Clancy’s Rainbow Six Siege - Sound Propagation - Community Corner #3 [EUROPE]. (2016). https://youtu.be/fI4YfurxVZU?t=146

  50. SneakyAlbaHD. r/SiegeAcademy. (2022). https://www.reddit.com/r/SiegeAcademy/comments/zyuo1k/ps5_players_or_anyone_who_knows_audio_ig_does_the/

  51. R6Analyst. R6 Analyst - A suite of utilities to improve your Rainbow Six: Siege experience. (2020). https://r6analyst.com/

  52. DePaul Game Elites. The Devil’s Tuning Fork. (2009). https://dge2.itch.io/devils-tuning-fork

  53. Bootur Games. Limb. (2024). https://store.steampowered.com/app/3205820/Limb/

  54. Activision. Call of Duty: Modern Warfare. (2019).

  55. Riot Games. Valorant. (2020).

  56. EA Games. Battlefield V. (2018).