
Game Audio Post-Production

Fanshawe College lecture on video game post-production


  1. 1. <ul><li>Video Game Audio: </li></ul><ul><li>The who, what, when, where, hows and whys. </li></ul><ul><li>Karen Collins </li></ul><ul><li>Canada Research Chair </li></ul><ul><li>[email_address] </li></ul>
  2. 2. Outline <ul><li>WHAT: What is Game Audio? </li></ul><ul><li>WHO: Who cares? Who makes it? </li></ul><ul><li>WHEN: When does the audio team get involved? WHEN did these processes/techniques develop? </li></ul><ul><li>WHERE do games differ from linear media? </li></ul><ul><li>HOW and WHY do we fix the problems? </li></ul>
  3. 3. WHAT is game audio? <ul><li>Interface sounds </li></ul><ul><li>Music </li></ul><ul><li>Sound Effects </li></ul><ul><li>Dialogue </li></ul><ul><li>Ambience </li></ul><ul><li>Cinematics </li></ul><ul><li> all of these can function on several layers (like diegetic/non-diegetic) </li></ul>
  4. 4. WHO cares about game audio? <ul><li>83% of Adult Gamers listed sound as one of the most important video game console elements </li></ul><ul><li>47% of Game Console owners (18-25 yr) hook up their game console to a home theatre system </li></ul><ul><li>48% of Hardcore Gamers said surround sound is a purchase driver for next generation consoles </li></ul>
  5. 5. WHO makes it? The Game Audio Production Process <ul><li>Sound Director </li></ul><ul><li>Composer(s) </li></ul><ul><li>Sound Designer(s) </li></ul><ul><li>Programmer/Engineer(s) </li></ul><ul><li>Dialogue/VO director, actors. </li></ul><ul><li>Licensing/Contracting directors </li></ul><ul><li>Implementation specialists (emerging role) </li></ul>
  6. 6. Audio “Post”: Integration <ul><li>Sound design, mixing, and integration are typically handled by the same person. </li></ul><ul><li>The director/designer must make decisions regarding implementation of all audio assets (including music) </li></ul><ul><li>Often a programmer as well </li></ul>
  7. 7. WHEN does the audio team get involved? <ul><li>Sometimes at end of game: populate game with sound </li></ul><ul><li>Sometimes at start or in middle: play a larger role in implementation of audio in the game, and can make critical decisions in regards to the development of the game and its sound </li></ul>
  8. 8. Music driving gameplay elements. New Super Mario Bros (Nintendo DS 2006) (Koji Kondo)
  9. 9. <ul><li>WHEN: </li></ul><ul><li>A Brief History of </li></ul><ul><li>Game Audio </li></ul><ul><li>Innovations </li></ul><ul><li>(What’s so different about </li></ul><ul><li>game audio??) </li></ul>
  10. 10. <ul><li>Computer Space! </li></ul><ul><li>(Nutting Associate 1971) </li></ul><ul><li>First game to have sound. </li></ul>
  11. 11. Space Invaders (Midway 1978) <ul><li>First use of continuous “background” music. </li></ul>
  12. 12. Technological Constraints <ul><li>Up ‘N Down (Sega 1983) </li></ul>
  13. 13. Atari VCS (2600) <ul><li>Up ‘N Down (Sega 1984) </li></ul>
  14. 14. Working With Constraints: Nintendo NES <ul><li>Metroid (Nintendo 1987) (Hip Tanaka) </li></ul><ul><li>vibrato (pitch modulation), tremolo (volume modulation), slides, portamento, echo effects </li></ul>
  15. 15. Ballblazer (LucasArts 1984) (Peter Langston) <ul><li>Algorithmic generation </li></ul><ul><li>“ Riffology” method (Optimized randomness) by Peter Langston </li></ul><ul><li>32 eight-note melody fragments </li></ul><ul><li>Algorithm chooses how fast, how loud, when to omit notes, when to insert rhythmic break </li></ul>
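The “Riffology” idea above can be sketched in a few lines: a pool of short melody fragments, plus weighted random decisions about tempo, volume, and omission. This is a hypothetical illustration of the approach, not Langston’s actual algorithm; all names and probabilities are invented.

```python
import random

# Hypothetical sketch of a "Riffology"-style selector: pick the next melody
# fragment from a pool, then make performance decisions about how to play it.
FRAGMENTS = [f"riff_{i}" for i in range(32)]  # 32 eight-note melody fragments

def next_phrase(rng: random.Random) -> dict:
    """Choose a fragment plus performance decisions (illustrative weights)."""
    return {
        "fragment": rng.choice(FRAGMENTS),
        "tempo_scale": rng.choice([0.5, 1.0, 2.0]),  # how fast to play it
        "volume": rng.uniform(0.5, 1.0),             # how loud
        "omit_notes": rng.random() < 0.2,            # sometimes thin it out
        "insert_break": rng.random() < 0.1,          # occasional rhythmic break
    }

phrase = next_phrase(random.Random(42))
```

Because each decision is randomized but drawn from a curated pool, the output stays stylistically coherent while never repeating exactly — the “optimized randomness” the slide describes.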
  16. 16. Working With Constraints: Commodore 64 Track 1 (e.g. “Level_One”) Instrument 1 (envelope, waveform, effect filters, etc.) Instrument 2 (envelope, waveform, effect filters, etc.) Rob Hubbard’s Module Format Pattern 1 (sequence of notes) Pattern 2 (sequence of notes) Pattern 3 (sequence of notes)
  17. 17. Combining modules (in MIDI) with control statements MIDI and the Creation of iMUSE Land, Michael Z. and Peter N. McConnell. Method and Apparatus for Dynamically Composing Music and Sound Effect Using a Computer Entertainment System . US Patent No. 5,315,057. 24 May, 1994.
  18. 18. Super Mario World (Nintendo 1991) (Koji Kondo) Musical layering technique Mario jumps on Yoshi & gets extra layer of music (SNES).
  19. 19. Legend of Zelda: Ocarina of Time (Nintendo 1999) (Koji Kondo) (N64) <ul><li>Proximity-based algorithms control cross-fades </li></ul>
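A proximity-based cross-fade can be sketched as two gains derived from the player’s distance to a sound source. This is a generic illustration, not the actual N64 engine; the `radius` parameter and linear curve are assumptions.

```python
def proximity_gains(distance: float, radius: float = 10.0) -> tuple[float, float]:
    """Linear proximity cross-fade: the near cue fades in as the player
    approaches a source, while the far/ambient cue fades out.
    `radius` is the distance at which the near cue is silent (illustrative)."""
    t = max(0.0, min(1.0, distance / radius))  # 0 = at source, 1 = at/past radius
    near_gain = 1.0 - t
    far_gain = t
    return near_gain, far_gain
```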
  20. 20. State-of-the-Art Today <ul><li>7.1 to 8.1 surround sound </li></ul><ul><li>Combination of synth with orchestra, choir </li></ul><ul><li>At least 512 channels of sound </li></ul><ul><li>God of War (Gerard Marino, Sony 2006) </li></ul>
  21. 21. Techniques we can borrow from Film <ul><li>Spotting a game </li></ul><ul><li>Emotional reinforcement </li></ul><ul><li>Use of sound FX libraries </li></ul><ul><li>Field recording techniques, Foley </li></ul><ul><li>Mixing of cutscenes/cinematic sequences </li></ul><ul><li>Using multichannel surround* </li></ul><ul><li>Certain techniques from, for instance, ride films—e.g. prominent sub-woofer, etc.—“physicality” of sound to shock and awe </li></ul>
  22. 22. <ul><li>Mix Goals (shared with linear media) </li></ul><ul><li>Balance: among and between elements—gamers can’t understand dialogue through music, explosions, etc. </li></ul><ul><li>Intelligibility </li></ul><ul><li>Believability: a mix that is not realistic and emotionally effective reduces the illusion; characters, for instance, suddenly come in much more clearly, or too loud, etc. </li></ul><ul><li>Effective communication: what is the priority in the game right now > focus the listener’s attention </li></ul><ul><li>Priorities </li></ul><ul><li>Gameplay objectives </li></ul><ul><li>Emotional delivery: audio tells the player how to feel. </li></ul><ul><li>Avoid distractions </li></ul><ul><li>Avoid competition for the player’s attention </li></ul>
  23. 23. Where games differ from Film <ul><li>Linearity Vs. Non-Linearity/ unpredictability! </li></ul><ul><li>Interactivity with player/multiplayer </li></ul><ul><li>No “production” sound </li></ul><ul><li>Temporality (length) </li></ul><ul><li>Localization </li></ul><ul><li>Budgets </li></ul><ul><li>Delivery methods/technology </li></ul><ul><li>Listening environment </li></ul><ul><li>Mixing, dynamic range and the concept of “post” </li></ul>
  24. 24. What is Interactive Audio? <ul><li>Sound events occur in direct reaction to a player’s movements. The player triggers the cue, and can repeatedly activate it, such as by making a character jump up and down. </li></ul>
  25. 25. Interactive Audio: Footsteps Enemy music cue
  26. 26. What is adaptive audio? <ul><li>“Adaptive” audio generally refers to sound that reacts to transformations in the gameplay environment—such as a change from day to night set by a game’s timer mechanism. Adaptive audio is not directly triggered by the player. </li></ul>
  27. 27. Adaptive Audio Change in music day to night Wolf, crow
  29. 29. <ul><li>Problem: transitions from cue to cue </li></ul><ul><ul><li>No “post” = unpredictable timings </li></ul></ul><ul><ul><li>Abrupt jumps are jarring </li></ul></ul>
  30. 30. Solution: transitions: Cross fade
  31. 31. Cross Fades
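A cross-fade is just a pair of gain curves. Equal-power (cosine/sine) curves are a common choice because a plain linear fade dips in perceived loudness mid-transition. A minimal sketch:

```python
import math

def equal_power_crossfade(t: float) -> tuple[float, float]:
    """Gains for (outgoing, incoming) cues at fade position t in [0, 1].
    Equal-power curves keep out**2 + in**2 == 1, so perceived loudness
    stays roughly constant across the transition."""
    out_gain = math.cos(t * math.pi / 2)
    in_gain = math.sin(t * math.pi / 2)
    return out_gain, in_gain
```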
  32. 32. Other solution: The Stab/SFX <ul><li>Sound effect (e.g. Explosion, gunshots, etc.) or stab (quick shock chord) used to cover transition time </li></ul>
  33. 33. Solution: Variable structure <ul><li>Create “transition matrix”: chart out all possible directions for each sequence, and create transitions for marker points/jumps </li></ul>
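A transition matrix can be represented as a simple lookup table: for each (from, to) pair the composer authors a short transitional unit, and playback inserts it at the next marker point. Cue and bridge names here are hypothetical.

```python
# Hypothetical transition matrix: one authored bridge per (from_cue, to_cue) pair.
TRANSITIONS = {
    ("explore", "combat"): "trans_explore_to_combat",
    ("combat", "explore"): "trans_combat_to_explore",
    ("explore", "boss"):   "trans_explore_to_boss",
}

def queue_next(current: str, target: str) -> list[str]:
    """Return the cue sequence to schedule at the next marker/jump point.
    If no bridge was authored for this pair, jump straight to the target."""
    bridge = TRANSITIONS.get((current, target))
    return [bridge, target] if bridge else [target]
```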
  34. 34. Parameter Based Music: Parameters <ul><li>Number/action of non-playing characters </li></ul><ul><li>Number/action of playing characters </li></ul><ul><li>Actions </li></ul><ul><li>Locations (place, time of day, etc.) </li></ul><ul><li>Scripted or unscripted events </li></ul><ul><li>Player health or enemy health </li></ul><ul><li>Difficulty </li></ul><ul><li>Timing </li></ul><ul><li>Player properties (skills, endurance) </li></ul><ul><li>Bonus objects </li></ul><ul><li>Movement (speed, direction, rhythm) </li></ul><ul><li>“ Camera” angle </li></ul>The transition matrix approach and the creation of transitional units
  35. 35. Example: Parameter-Based Music <ul><li>No One Lives Forever (Guy Whitmore 2000) </li></ul><ul><li>Six standard music states are based on number of NPC enemies: </li></ul><ul><li>Silence </li></ul><ul><li>Super ambient </li></ul><ul><li>Ambient </li></ul><ul><li>Suspense/sneak </li></ul><ul><li>Action/combat 1 </li></ul><ul><li>Action/combat 2 </li></ul>
  36. 36. Example: No One Lives Forever <ul><li>Earth Orbit: Ambush theme starts in music state 5 (combat 1), </li></ul><ul><li>transitions to music state 2 (ambient: in elevator), </li></ul><ul><li>then transitions to music state 6 (combat 2) </li></ul>
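The state selection above could be sketched as a mapping from game parameters to one of the six music states. The thresholds below are invented for illustration; the slide only says states are based on the number of NPC enemies.

```python
# Illustrative mapping from game parameters to a NOLF-style music state.
# Thresholds and the "alerted" flag are assumptions, not the game's actual logic.
STATES = ["silence", "super_ambient", "ambient",
          "suspense_sneak", "combat_1", "combat_2"]

def music_state(enemy_count: int, alerted: bool) -> str:
    if enemy_count == 0:
        return "super_ambient" if alerted else "silence"
    if not alerted:
        return "suspense_sneak" if enemy_count >= 2 else "ambient"
    return "combat_2" if enemy_count >= 3 else "combat_1"
```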
  37. 37. <ul><li>Problems: Multi-player interactivity </li></ul><ul><li>Unscripted events! </li></ul><ul><li>parameter-based music is difficult: </li></ul><ul><ul><li>Music cannot be tied to specific events or locations </li></ul></ul><ul><ul><li>Music cannot be tied to specific parameters </li></ul></ul><ul><ul><li>To some extent, this is a general audio problem in MMOGs—as audio is as much about emotion as realism. </li></ul></ul>
  39. 39. Problems with Game Sound Effects <ul><li>Ambience: Loops are boring. Loops are boring. Loops are boring. </li></ul><ul><li>Brain picks out patterns. </li></ul><ul><li>Users get bored with hearing same sounds BUT sound designers can’t possibly record enough variations of sounds (time, budget) </li></ul><ul><li>Users need a new experience every time they play the game (promised by LucasArts’ Euphoria technology) </li></ul><ul><li>Audio not responding to physics </li></ul>
  40. 40. Solution: Granular Synthesis
  41. 41. Granular Synthesis Examples <ul><li>Crowd </li></ul><ul><li>Tennis </li></ul><ul><li>Speech </li></ul>Crowd and speech examples borrowed from Leonard Paul at Vancouver Film School
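The granular idea can be shown with a minimal overlap-add sketch: short windowed grains are copied from random offsets in a source recording, so a short asset yields non-repeating ambience. This is a toy illustration (names and parameters assumed), not a production granular engine.

```python
import math
import random

def granulate(source: list[float], grain_len: int, hop: int,
              n_grains: int, rng: random.Random) -> list[float]:
    """Minimal granular synthesis: overlap-add windowed grains taken from
    random offsets in `source`. Randomized offsets break up audible looping."""
    out = [0.0] * (hop * (n_grains - 1) + grain_len)
    # Hann window smooths each grain's edges so overlaps don't click.
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
              for i in range(grain_len)]
    for g in range(n_grains):
        offset = rng.randrange(len(source) - grain_len)
        start = g * hop
        for i in range(grain_len):
            out[start + i] += source[offset + i] * window[i]
    return out

# Toy source: one second of a 440 Hz sine at 8 kHz.
src = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
result = granulate(src, grain_len=256, hop=128, n_grains=20, rng=random.Random(1))
```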
  42. 42. Granular: Remaining Open Questions <ul><li>What elements in a sound effect can be varied while still maintaining the “ meaning ” of the sound? </li></ul><ul><li>How little of a sound can be changed to alter perception? </li></ul><ul><li>How can we create AI systems that are aware of these potential meanings, and make real-time adjustments to sounds in a game? </li></ul><ul><li>How to develop an “ audio physics engine ”: e.g. footsteps change based on how much player is carrying, etc. </li></ul>
  44. 44. Mixing Problems <ul><li>Too many sounds </li></ul><ul><li>Poor dynamic range </li></ul><ul><li>Poor variation </li></ul><ul><li>Unpredictable timings </li></ul><ul><li>No “post production” mixing: results in a “muddy” clash of sounds </li></ul>
  45. 45. Dynamic range … in a popular film … in a popular game Graphics adapted from those supplied by Rob Bridgett of Swordfish Studios .
  46. 46. Dynamic Range: Problems <ul><li>Competition with wildly varying levels of other games: reference levels are not effective for games—no widespread adoption of standards </li></ul><ul><li>The producer/game designer tends to always want it “louder” </li></ul><ul><li>The listening environment in games has more competition for attention—a larger “noise floor” means more competition in the acoustic environment, which restricts dynamic range </li></ul><ul><li>Listener fatigue </li></ul>
  47. 47. Mixing: Dynamic Range Solutions <ul><li>Reduce the number of sounds </li></ul><ul><li>Variable volume </li></ul><ul><li>Evangelize the need for good dynamic range </li></ul><ul><li>Use what standards you can find (e.g. Xbox boot sound) </li></ul><ul><li>Be aware of possible noise-makers </li></ul><ul><ul><li>Non-game sounds: e.g. Windows Messenger pops up </li></ul></ul>
  48. 48. Variable volume
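The variable-volume idea can be sketched as per-playback randomization: each time a repeating sound fires, its volume (and, commonly, pitch) is nudged within a small range so identical assets don’t sound identical. Ranges below are illustrative, not from any specific engine.

```python
import random

def playback_params(base_volume: float, rng: random.Random) -> dict:
    """Randomize each playback of a repeated sound slightly.
    The 0.85-1.0 volume range and +/-0.5 semitone range are assumptions."""
    return {
        "volume": base_volume * rng.uniform(0.85, 1.0),
        "pitch_semitones": rng.uniform(-0.5, 0.5),
    }
```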
  49. 49. Solutions: Listener fatigue <ul><li>Learn how to use silence! </li></ul><ul><li>Noise floors—what is loudest background noise in environment? </li></ul><ul><li>Balance impact opportunity with intelligibility </li></ul><ul><li>Audio ‘breaths’ (overall/in specific frequencies—lose some sound e.g. bass to give a rest in particular frequency ranges) </li></ul><ul><li>Timeouts or gradual ducking for some elements (if it’s not important, time it out) </li></ul><ul><li>VARIABILITY! </li></ul>
  50. 50. Mixing Problem: Dynamic Range/ Variation <ul><li>Change DSP rather than just volume to change audio </li></ul><ul><ul><li>E.g. reverb to make it softer, dreamier </li></ul></ul><ul><ul><li>Phasing to create a “dazed” effect </li></ul></ul><ul><ul><li>Overdrive to make it more aggressive, etc. </li></ul></ul>
  51. 51. Solution: Location-Based Run-Time Mixing <ul><li>Real-time DSP to adjust sound </li></ul><ul><li>E.g. bottle drop on hard floor of kitchen or in next carpeted room </li></ul><ul><li>Factor in 5.1 surround to adjust real-time panning </li></ul><ul><li>REQUIREMENT : audio engine to pass parameters from game and from player back and forth to engine. </li></ul>
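The bottle-drop example above amounts to choosing DSP parameters from the room the emitter is in. A minimal table-driven sketch (room names and values are illustrative assumptions):

```python
# Location-based run-time mixing sketch: per-room DSP presets.
ROOM_DSP = {
    "kitchen":  {"reverb_decay_s": 0.8, "lowpass_hz": 18000},  # hard floor: live
    "carpeted": {"reverb_decay_s": 0.3, "lowpass_hz": 12000},  # absorbent: damped
}

def dsp_for(room: str) -> dict:
    """Look up the preset for a room, falling back to a neutral default."""
    return ROOM_DSP.get(room, {"reverb_decay_s": 0.5, "lowpass_hz": 16000})
```

In a real engine these parameters would be passed to the audio engine each frame alongside listener position, per the REQUIREMENT bullet above.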
  52. 52. Problem: Mixing: Unpredictability, Variability
  53. 53. Problem: Ambience repetition <ul><li>No “production sound” is recorded: ambience is assembled from scratch. </li></ul><ul><li>Repetition of certain aspects can be detected by the brain: patterns are easy to discern and take away from immersion </li></ul>
  54. 54. Solutions: Real-time Weighted Mixing <ul><li>Weighted permutations </li></ul><ul><ul><li>Predict which sounds can recur without making obvious. </li></ul></ul><ul><li>Example: </li></ul><ul><ul><li>Dialogue, Sound FX A. Sound FX B, player sounds, music, ambience </li></ul></ul><ul><ul><li>If dialogue = “run!”, set parameter to 1 </li></ul></ul><ul><ul><li>If gunshot is coming towards us, set parameter to 2 </li></ul></ul><ul><ul><li>If no action, fade out music and raise ambience </li></ul></ul><ul><ul><li>REQUIREMENT : “intelligent” Engine to predict and set weighting </li></ul></ul>
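The weighting example in the slide can be sketched as a mapping from the current game event to per-bus gain targets. The event names, bus names, and gain values here are all hypothetical stand-ins for what an “intelligent” engine would compute.

```python
# Weighted-mixing sketch following the slide's example:
# the current highest-priority event decides which buses to emphasize.
def weighted_mix(event: str) -> dict[str, float]:
    """Return per-bus gain targets for a few illustrative game events."""
    if event == "dialogue_run":       # dialogue says "run!" -> weight 1 (highest)
        return {"dialogue": 1.0, "music": 0.4, "ambience": 0.3, "sfx": 0.6}
    if event == "incoming_gunshot":   # incoming threat -> weight 2
        return {"dialogue": 0.8, "music": 0.5, "ambience": 0.3, "sfx": 1.0}
    # no action: fade music out and raise ambience, as the slide suggests
    return {"dialogue": 1.0, "music": 0.2, "ambience": 0.8, "sfx": 0.5}
```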
  55. 55. The big problem: Logjams <ul><li>Little definition for which elements are acoustically ‘most important’ </li></ul><ul><li>Too much in similar frequency ranges </li></ul><ul><li>Intelligibility vs. impact </li></ul>
  56. 56. Logjam solutions <ul><li>Control frequencies of sounds </li></ul><ul><li>Assign priorities in code </li></ul><ul><ul><li>E.g. dialogue = priority “1” </li></ul></ul><ul><ul><ul><li>Ambience = priority “4” </li></ul></ul></ul><ul><ul><ul><li>Gunshot = priority “2” </li></ul></ul></ul><ul><ul><ul><li>Music = priority “3” </li></ul></ul></ul><ul><ul><ul><li>Footsteps = priority “5” </li></ul></ul></ul><ul><li>Real-time ducking: Use notch filters to cut out frequencies that conflict (must organize in advance!) </li></ul>
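Assigning priorities in code, as the slide describes, can be as simple as a lookup table plus a cull that drops the lowest-priority requests when too many sounds fire at once. The voice-limit mechanism is a common engine pattern, added here as an assumed illustration.

```python
# Priority scheme from the slide (1 = most important).
PRIORITY = {"dialogue": 1, "gunshot": 2, "music": 3, "ambience": 4, "footsteps": 5}

def cull(requests: list[str], max_voices: int) -> list[str]:
    """Keep at most `max_voices` sounds, preferring higher priority
    (lower number). Everything else is dropped or ducked."""
    return sorted(requests, key=lambda s: PRIORITY[s])[:max_voices]
```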
  57. 57. Steps to organization <ul><li>Group sounds into multi-level categories: e.g. first-person sounds, third-person, which are first-person weapon, third-person dialogue, etc. </li></ul><ul><li>Enforce RMS/peak levels (level limits) per category of sounds—e.g. enemies can never get above -10dB, first person sounds never above -15dB, etc. </li></ul><ul><li>Balance categories of sounds amongst themselves—within each category—check to maintain balance level-wise for dialogue, etc.+ choose reference piece </li></ul>
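Enforcing per-category level limits can be sketched as a ceiling check, using the example ceilings the slide gives (-10 dB for enemies, -15 dB for first-person sounds). The function shape and the default are assumptions.

```python
# Per-category dBFS ceilings taken from the slide's examples.
CATEGORY_CEILING_DB = {"enemy": -10.0, "first_person": -15.0}

def check_level(category: str, peak_db: float) -> float:
    """Clamp a sound's peak level to its category ceiling; return the
    level that would actually be applied (0 dB default for unknowns)."""
    ceiling = CATEGORY_CEILING_DB.get(category, 0.0)
    return min(peak_db, ceiling)
```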
  58. 58. Steps to organization <ul><li>Prioritize Sound Groups by Function </li></ul><ul><ul><li>E.g. Goal-relevant sounds, Instructive dialogue, Sounds that impact the player, Enemy weapon fire, Sounds that provide environment or emotional support, Musical score, Ambience, etc. </li></ul></ul><ul><li>Dynamic Priorities (Per-Sound) </li></ul><ul><ul><li>E.g. Distance / distance ranking, Total attenuation, Actual current peak/RMS, Number of playing instances, Game-specific priority schemes </li></ul></ul><ul><li>  </li></ul>
  59. 59. Steps to organization: the Mix <ul><li>Ducking </li></ul><ul><ul><li>Basic: Notification-driven </li></ul></ul><ul><ul><li>Duck when new higher priority sound starts </li></ul></ul><ul><ul><li>Duration/speed to ducked state </li></ul></ul><ul><ul><li>Target volume/percentage </li></ul></ul><ul><ul><li>Advanced: Peak/RMS-driven </li></ul></ul><ul><ul><li>Duck/unduck based on current monitored levels of higher priority category (attack/release) </li></ul></ul><ul><li>  </li></ul>
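The basic notification-driven ducking above can be expressed as a ramp: when a higher-priority sound starts, lower-priority buses move from full gain toward a target over a fixed duck time. A linear ramp is the simplest case (real engines often use attack/release curves).

```python
def duck_gain(t: float, duck_time: float, target: float) -> float:
    """Gain of a ducked bus `t` seconds after the duck notification:
    linear ramp from 1.0 at t=0 down to `target` at t >= duck_time."""
    if t >= duck_time:
        return target
    return 1.0 + (target - 1.0) * (t / duck_time)
```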
  60. 60. The mix: culling/muting <ul><li>Culling/Muting </li></ul><ul><ul><li>Alternative to simple ducking </li></ul></ul><ul><ul><li>Thins out sound field, Brings more focus to new sounds </li></ul></ul><ul><ul><li>Stop or mute using release envelopes/fades </li></ul></ul><ul><ul><li>Bandwidth concerns (mute) </li></ul></ul><ul><ul><li>Restart accuracy (stop) </li></ul></ul>
  61. 61. The Mix; EQ and Filtering <ul><ul><li>Per-category critical frequency range(s) </li></ul></ul><ul><ul><li>Can use offline or realtime analysis </li></ul></ul><ul><ul><li>Notch out these frequencies for other categories (duration for smoothing) </li></ul></ul><ul><li>‘ Blur’ lower priority sounds </li></ul><ul><li>Compression/Limiting </li></ul><ul><ul><li>Can combine attenuation with compression/limiting </li></ul></ul><ul><ul><li>Keeps low-pri sounds from falling off too significantly or obviously </li></ul></ul><ul><ul><li>Can conflict with ducking desire </li></ul></ul>
  62. 62. MIXING: Surround <ul><li>Surround as a game tool: the player can interact with/depend on it </li></ul><ul><li>HRTF: where is the player’s character? Where are they facing, etc.? </li></ul><ul><li>Remember the CHARACTER may be changing position, but the PLAYER may not be! How do you adapt to this? </li></ul>
  63. 63. A final point on mixing <ul><li>MIXING IS MORE THAN REALISM. </li></ul><ul><ul><li>What are the psychological aspects of mixing? </li></ul></ul><ul><ul><li>The most effective mix is not always the most realistic. </li></ul></ul>
  64. 64. Audio Middleware
  65. 65. Wwise <ul><li>Prototype the integration of audio in a game simulation before the game is even finished. </li></ul><ul><li>Environmental effects can be rendered in real time, and occlusion and obstruction effects can be managed within the software, mixing up to four simultaneous environments per object. </li></ul><ul><li>Sound prioritization for real-time mixing is also included, as is randomization of elements such as pitch or volume, to enhance realism. </li></ul><ul><li>Real-time game parameter controls can also be set, to adjust sound properties based on parameter value changes in a game </li></ul><ul><li>Validate profiles (how expensive is your audio!?) </li></ul>
  67. 67. Problem: technological constraints <ul><li>MOBILE AUDIO: </li></ul><ul><ul><li>Trying to write songs and create sounds in a few kB for constrained technology such as mobile phones </li></ul></ul>
  68. 68. Cheese Racer <ul><li>Muting and unmuting multiple tracks in a MIDI file to produce various mixes. </li></ul><ul><ul><li>2 drum loop samples, </li></ul></ul><ul><ul><li>three tones: bass, melody and chords </li></ul></ul><ul><li>In the code for the first level, ten different combinations of tracks: percussion + bass, percussion + bass + melody, bass + melody + chords, etc. </li></ul>
  69. 69. (cont..) <ul><li>Notice how the mix changes every time the mouse gets a piece of cheese. </li></ul><ul><li>Music loop 40 seconds. 10 different combinations: therefore, each level has more than 6 minutes of different music mixes. “Because the mix changes depending on gameplay, the music will never play exactly the same way twice, thereby increasing variation and decreasing &quot;ear fatigue.&quot;” </li></ul><ul><li>In fact, there are two more sets of tracks playing two more styles of music, using the same tempo and percussion tracks as the first level. </li></ul><ul><li>Entire game contains almost 20 minutes of various music mixes, using only 68K of compressed sample and MIDI data. </li></ul>
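The track-mute technique above can be sketched as a table of which tracks are audible per mix. The combo list below is illustrative (the slide says the first level uses ten combinations); track names are hypothetical.

```python
# One MIDI file, five tracks; each "mix" is just a set of unmuted tracks.
TRACKS = ["drums_a", "drums_b", "bass", "melody", "chords"]
COMBOS = [
    {"drums_a", "bass"},
    {"drums_a", "bass", "melody"},
    {"bass", "melody", "chords"},
]

def mute_flags(combo_index: int) -> dict[str, bool]:
    """True = track muted for this mix. Called each time the mouse
    gets a piece of cheese to switch to the next combination."""
    audible = COMBOS[combo_index % len(COMBOS)]
    return {t: t not in audible for t in TRACKS}
```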
  70. 70. Solutions: technological constraints: <ul><li>Playing a single sample at different pitches to produce variation. </li></ul><ul><li>The &quot;pitch it up, play it down&quot; technique for saving space. </li></ul>
  71. 71. Sampling to save size <ul><li>This game contains only a single &quot;trumpet fall&quot; sound effect </li></ul><ul><li>modify the playback sample rate so that each time you pick up a piece of cheese, the sound is played at a different pitch </li></ul><ul><li>randomly vary the pitch during gameplay, so there's no repeating pattern to it </li></ul><ul><li>The sound tends to mask the transition from one mix to another, helping to create a more seamless audio experience. </li></ul><ul><li>same kind of pitch-shifting effect applied to car horn beeps: (fig 1) </li></ul>
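Varying the playback sample rate amounts to resampling: reading through the stored sample faster raises the pitch (and shortens the sound), reading slower lowers it. A nearest-neighbour sketch of the idea:

```python
def play_at_rate(sample: list[float], rate: float) -> list[float]:
    """'Pitch it up, play it down': resample one stored sound at a new rate.
    rate 2.0 plays an octave up (half the length); 0.5 an octave down."""
    n_out = int(len(sample) / rate)
    return [sample[min(int(i * rate), len(sample) - 1)] for i in range(n_out)]
```

Randomizing `rate` per playback gives the non-repeating “trumpet fall” effect described above from a single stored asset.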