Montréal company Audiokinetic is the publisher of a popular game audio middleware called Wwise. In this article, I present instructions and sample assets meant to guide a Unity developer through the process of integrating Wwise into their game project. The software versions used throughout this article are Unity 5.1.3f1 and Wwise v2015.1.1 running on OS X 10.10.4.

Preview

You can see the gameplay and hear the final audio product of this article in the video below.

Overview

This article touches on the following areas of Wwise, giving a small taste of what is possible with the software:

  • Wwise Editor views and terminology
  • Events
  • Game Syncs (RTPCs, States)
  • Interactive Music
  • Mixing with Audio Busses

Download the audio assets for this article: Audio assets.zip (6.7 MB)

These instructions make use of a modified version of Unity’s sample project called “Survival Shooter”, available for free on the Asset Store. It’s an endless shooter scene with three enemy types, one weapon type, and some sound effects and music – which we are going to replace with our own using Wwise. Let’s dive right in…

Survival Shooter game

First Steps – Making a sound with Wwise

In the Unity project’s root folder (which contains Assets), create and save a new Wwise project. Make sure that the appropriate platform is selected, in this case Mac. The newly created project will launch into the Designer layout (F5).

Creating a new Wwise project

Navigate to the Audio tab of the Project Explorer view. By default, this is in the top left of the Wwise Authoring editor. Right-click on the object called “Default Work Unit” in the Actor-Mixer Hierarchy and choose “Import Audio Files…”. The Actor-Mixer Hierarchy is a structure within Wwise that contains all of the sound effects for the game project. Choose to add the rifle_shot.wav file from the sample assets, keeping its Object Type/Action as Sound SFX. Tip: A useful keyboard shortcut to import files is Shift + I.

Importing an audio file

This is the only file that we’ll use for the sound of the gun firing. With the rifle_shot SFX object selected, you can audition the sound by tapping on the spacebar. The Transport Control located at the bottom of the Editor will reflect the name of the selected object in its title bar. You’ll also notice that upon selecting the SFX object, the Editor updates to show its Sound Property Editor and the Contents Editor views. Let’s make use of the Sound Property Editor to give the rifle sound a bit of random variation!

With the rifle_shot object still selected, click on the small circular icon under the Pitch property, which opens the Randomizer view. Enable the Randomizer, and set the Min/Max values to -150 and 150, respectively. These values are in cents – there are 100 cents between adjacent semitones in the chromatic scale, so we are effectively giving the Pitch property of rifle_shot a +/- 1.5 semitone range in which it can be randomized. This will give some variation to what could otherwise be an annoyingly repetitive sound effect.

Adding a little randomness

Audition the rifle_shot SFX object again several times to hear the random changes in pitch. Congratulations, you have just learned how to use a basic building block of sound effects in Wwise – the SFX object!

Events: Connecting Unity and Wwise

We are now able to play a sound within Wwise, but how will it know to play that rifle_shot SFX object when the player is actually firing their weapon in the Survival Shooter game? The answer is Events, which is one way that the game can cause things to happen within the Wwise project. We will now add an Event called PlayerShoot, which will get called by the game code whenever the player’s weapon is fired.

Switch to the Events tab in the Project Explorer view. Right-click on the Default Work Unit in the Events folder and choose New Child > Play. Name the newly created event “PlayerShoot”. Note: It is important that the string representing the Event name is exactly as shown – we’ll be calling this Event by name from the code of the Survival Shooter game.

Making an Event

You may have noticed that the PlayerShoot Event has been automatically added to the Event Viewer view immediately below the Project Explorer. You can click on the PlayerShoot Event in either view to open the Event Editor view. By choosing to create a Play Event in the last step, Wwise has automatically added a Play action to the Event Actions list for us; however, it is missing an Object reference – it currently doesn’t know what it should play when this Event gets called. Click Browse… to select the rifle_shot SFX object from the Actor-Mixer Hierarchy.

A complete Event!

In the same way that you auditioned the rifle_shot SFX object, you can also audition the PlayerShoot Event by selecting it in the Event Viewer and tapping the spacebar. Now that we know how to create Events and add Actions that target Objects, we are almost ready to call this Event from the game code. We just need to do one more very important thing in Wwise…

Generating SoundBanks

SoundBanks are the containers for everything that you will create in Wwise, including the SFX objects themselves, Events, and more. It is the SoundBank that will ultimately be loaded by the game, so it is important to generate a new version of your SoundBank whenever the Wwise project is changed or updated. You can have multiple SoundBanks for a single game (typically each game level will have its own SoundBank), but since Survival Shooter only contains a single scene we will only ever need one.

Switch to the SoundBanks tab in the Project Explorer view. Right-clicking on the Default Work Unit in the SoundBanks folder will allow you to choose New Child > SoundBank. Go ahead and call it “MainSoundBank”.

Switch the Wwise Layout to SoundBank by selecting it from the Layouts drop-down menu or by tapping F7. Wwise will update its views to reflect the new work context.

Generating a SoundBank

You should be able to see the MainSoundBank object as a child of the Default Work Unit in the SoundBank Manager view. Drag and drop the Actor-Mixer Hierarchy folder from the Audio tab and the Events folder from the Events tab onto the MainSoundBank object in the SoundBank Manager view. Make sure that the SoundBank, its Platform, and Languages are selected in the SoundBank Manager view, and click Generate.

Adding Wwise to our Unity project

Switch over to Unity and open the Level 01 scene if you haven’t already. We are going to import the Unity Wwise Integration (WwiseUnityIntegration_v2015.1.1_Mac.unitypackage) as a Custom package.

Installing Wwise into our game

If all goes well, you should see the Wwise Setup screen appear. Notice how it has already located the path of our Wwise project that we saved in the game’s root directory. We’re going to keep the default settings, which means the SoundBank path should remain “Audio/GeneratedSoundBanks” and both checkboxes should remain selected – Wwise is going to automatically add some game objects to the Survival Shooter scene for us.

Click “Start Installation”.

Upon completion, you should see a new tab appear in Unity called Wwise Picker. This gives us a high-level view of the Wwise project structure and everything that we have created in it so far.

Wwise Picker

If you look in the Scene Hierarchy, you’ll notice a new game object called “WwiseGlobal”, which contains two script components that handle initialization and termination of the Wwise session for the scene. We’ll leave this object as-is.

Ak Audio Listener

If you inspect the Main Camera object, you’ll also find that Wwise has replaced Unity’s Audio Listener with its own variant – Ak Audio Listener – and given it a Listener ID of 0. If you play the Unity scene now, you’ll only hear silence. We need to load our SoundBank and call the PlayerShoot Event from the game code in order to hear the rifle_shot SFX object.

Loading a SoundBank in Unity

Create an empty game object in the scene and name it “LoadSoundBank”. Attach the script component AkBank from the Wwise subfolder and choose “MainSoundBank” as the Bank Name, leaving the other properties as-is.

Loading the SoundBank in Unity

Open up the PlayerShooting.cs script in MonoDevelop. This is where we’ll add the call to the PlayerShoot Event that we just created. Locate the call to Shoot() on line 49 of this script. This is the function that gets called each time the weapon is fired, and is therefore an appropriate place to hook into the PlayerShoot Event.

Add the following code below Shoot():

AkSoundEngine.PostEvent("PlayerShoot", this.gameObject);

The PostEvent method expects two arguments: the name of the Event being called as a string, and a relevant game object.
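In context, the call sits immediately after Shoot() inside the script’s firing logic. Here is a sketch of what the surrounding method looks like – note that the timer and timeBetweenBullets member names are assumptions based on the Survival Shooter tutorial code and may differ in your version of the project:

```csharp
// Inside PlayerShooting.cs (sketch; timer and timeBetweenBullets are
// assumed member names from the Survival Shooter tutorial project).
void Update ()
{
    timer += Time.deltaTime;

    if (Input.GetButton ("Fire1") && timer >= timeBetweenBullets)
    {
        Shoot ();

        // Tell Wwise to run the PlayerShoot Event, using this game
        // object as the sound emitter.
        AkSoundEngine.PostEvent ("PlayerShoot", this.gameObject);
    }
}
```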

Triggering an Event with AkSoundEngine

Go ahead and play the scene again, and you should be able to hear the rifle_shot SFX object play with subtle variations in pitch each time the player fires their weapon.

The Beauty of Wwise

Let’s step back and reflect on what we’ve just done. By using a single line of game code to call the PlayerShoot Event, we have effectively decoupled the job of the audio programmer (us) from that of the game programmer. This means that as long as the Event names are agreed upon, the game programmer can keep on programming the game logic while the audio team can focus on developing the best audio experience in parallel. Say we wanted to change the audio file used for the rifle sound, or perhaps layer several sounds together, or even add another Play action to the PlayerShoot Event which could be used to play the sound of the gun reloading after each shot (after a short delay). This can all be done now without any further input from the game programmer. The audio programmer would simply need to regenerate the SoundBanks with any changes or new elements, and the game audio will update accordingly.

Consider also the relatively simple process of randomizing the pitch of the rifle_shot sound, and what would be required to achieve a similar effect in C# game code. It would take several lines of code to generate a random number and assign it as the AudioSource’s pitch, and if we wanted to try a different audio file we would have to bother the game programmer with the new file so they could load it into the project assets and assign it to the AudioSource itself. Hopefully you can appreciate the development time that this saves for the entire team.
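To make that comparison concrete, here is roughly what the same +/- 1.5 semitone randomization might look like in plain Unity C# without Wwise. This is a hypothetical sketch – the PlainUnityShot class and its fields are my own – and note that AudioSource.pitch is a playback-rate multiplier, not a value in cents, so a conversion is needed:

```csharp
using UnityEngine;

// Hypothetical sketch of the non-Wwise approach, for comparison only.
public class PlainUnityShot : MonoBehaviour
{
    public AudioSource source;   // assigned in the Inspector
    public AudioClip rifleShot;  // swapping files means editing assets here

    public void PlayShot ()
    {
        // +/- 150 cents converted to a playback-rate multiplier:
        // rate = 2^(cents / 1200)
        float cents = Random.Range (-150f, 150f);
        source.pitch = Mathf.Pow (2f, cents / 1200f);
        source.PlayOneShot (rifleShot);
    }
}
```

Every tweak to this behaviour means a code change and a rebuild, whereas in Wwise the same experiment is a property edit and a SoundBank regeneration.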

Making more sounds…

Let’s switch back to our Wwise project, and change the Layout to Designer by tapping F5. Switch to the Audio tab of the Project Explorer.

We will now add some sounds that will be played when the player takes damage. To do this, we are going to use a Wwise object called a Random/Sequence Container. This object will contain multiple SFX objects of various grunt sounds, and will randomly select and play one each time the player character takes damage. Import (Shift + I) the folder called “damage sounds”. The Wwise importer will automatically create a Random/Sequence Container as the parent object when importing a folder containing multiple audio files.

Creating a Random/Sequence Container in Wwise

With the “damage sounds” object selected, let’s turn our focus to its Property Editor. We can add some random pitch variation to the entire container using the Wwise Randomizer again. This time, let’s set the pitch variation to be strictly positive – we’ll assign a value of 300 cents, so that the grunt sounds have a playful chipmunk-like quality to them. We also want to make sure that the Play Type is set to Random Shuffle, and since we don’t want to hear the same grunt sounds consecutively, we can tell Wwise to “Avoid repeating the last 2 played”. We should also double check to make sure that the Play Mode property is set to Step rather than Continuous.

Random Container properties

Using the same approach as the PlayerShoot Event, we’ll create a new Event called PlayerDamage with an Action that calls Play on the “damage sounds” Container object.

Creating the PlayerDamage Event

Creating the Enemy SFX objects…

Next, we’ll give the three enemy types some sounds of their own. With the Default Work Unit of the Actor-Mixer Hierarchy selected again, import (Shift + I) the audio files named bunny.wav, bear.wav, and hellephant.wav. These will be the primary sounds for the three enemy types.

If you audition these three new SFX objects, you’ll notice that they’re all variations on a monster growl, and reflect the physical size of the enemy – the bunny sound is almost a breathy whisper, while the hellephant sound is longer, deeper, louder, and feels like a much larger animal. The bear sound falls somewhere in between the bunny and hellephant accordingly.

We want these sounds to loop indefinitely. Each SFX object has a Loop setting in the Property Editor which can be enabled, but rather than doing this for each individual sound we are going to use Wwise’s Multi-Editor tool to save time. Select all three of the enemy SFX objects in the Actor-Mixer Hierarchy and tap Ctrl + M to open the Multi-Editor window. Enable “Is Looping Enabled” under Audio > General Settings > Loop and click OK. The default setting is to loop an Infinite number of times, which is suitable for our needs.

Multi-Editor

Since we are looping these sounds, we should make sure that the loop is seamless in the sense that the start and endpoints of the loop have the same volume level. If we don’t do this, there will be an audible click/pop as the loop ends at one volume level and starts again at a different level. For each enemy sound, open up its Source Editor by double clicking the waveform icon beside its name in the Contents Editor view.

Creating Fades in the Sound Editor

The Source Editor lets you view the waveform of the audio file and adjust Trim points and Fade values. We are interested in adjusting the Fade values, represented by the blue triangles at the top of the waveform view. Dragging these will adjust the amount of time that the audio takes to fade in to full volume, and fade out to being inaudible. You’ll notice the waveform magnitude adjust itself according to these fade values.

Intro to ShareSets – Attenuations

EDIT (December 6th, 2015): The team at Audiokinetic has been kind enough to reach out to me and offer a greatly simplified approach to handling volume attenuation for each of the enemy sounds. Rather than calculating the distance between the player and each enemy in code and then passing those values to Wwise as RTPCs each frame, we are going to let Wwise handle the distance calculation for us!

The first step is to create 3 new Play Events, one for each of the enemy Sound SFX objects. I’ve named them PlayZombear, PlayZombunny, and PlayHellephant.

Creating new Events for the enemy Sound SFX.

Then, we’ll navigate to the ShareSets tab of the Project Explorer and create 3 new Attenuations, one for each enemy. The “Max distance” parameter will probably need to be reduced – I found 15 to be an appropriate value. Feel free to adjust the curve shape to taste. For example, it might be nice to have the Hellephant audible from a further distance since it is a much larger and noisier beast. The Zombunny and Zombear’s Attenuation curves could also be made to ramp up to full volume at a much closer distance if you wanted them to sneak up on the player character.

Attenuations for each enemy type

Next, we need to associate these Attenuation curves with their respective Sound SFX objects. For each of the 3 enemy Sound SFX objects, navigate to the Sound Property Editor’s “Positioning” tab. Make sure that the “3D” setting is enabled, change the “Mode” field to “Use ShareSets”, and select the relevant Attenuation in the 3D settings.

Assigning Attenuation curves to Sound SFX objects

Once the 3 enemy Sound SFX objects have been assigned an Attenuation, we’ll regenerate the SoundBank and then switch back to Unity.

Navigate to the Unity project’s Prefabs folder, and for each of the enemy prefabs, add a Wwise Ak Event component script to the prefab game object. Choose the appropriate Event Name from the 3 that we created earlier.
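If you would rather trigger these loops from code than from the Ak Event component, an equivalent approach is to post the Event when the enemy spawns. This is a sketch – the EnemyLoopSound script and its field are hypothetical additions, not part of the tutorial project:

```csharp
using UnityEngine;

// Hypothetical alternative to the Ak Event component.
public class EnemyLoopSound : MonoBehaviour
{
    // Set to "PlayZombunny", "PlayZombear", or "PlayHellephant" per prefab.
    public string playEventName;

    void Start ()
    {
        // Start the looping growl on this enemy; the Attenuation ShareSet
        // assigned in Wwise handles distance-based volume automatically.
        AkSoundEngine.PostEvent (playEventName, this.gameObject);
    }
}
```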

Adding Ak Event components to the enemy prefab game objects

We also need to hook into our PlayerDamage Event that we created earlier. Open up PlayerHealth.cs, and add the following code inside the TakeDamage() method:

AkSoundEngine.PostEvent("PlayerDamage", this.gameObject);
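For context, the call can go anywhere inside TakeDamage(); a sketch follows, with the method body abbreviated and the health bookkeeping line standing in for the tutorial’s existing code:

```csharp
// Inside PlayerHealth.cs (sketch; existing method body abbreviated).
public void TakeDamage (int amount)
{
    currentHealth -= amount;   // existing health bookkeeping

    // Play a random grunt from the "damage sounds" container.
    AkSoundEngine.PostEvent ("PlayerDamage", this.gameObject);

    // ... existing damage flash and death check continue here ...
}
```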

Save and play the scene. You should now hear the three enemy sounds grow louder as they approach the player character – Wwise computes the distance between the Ak Audio Listener and the Game Object that posts each Event – and a different grunt sound each time the player character takes damage!

Interactive Music

Wouldn’t it be great if we could add some music to the Survival Shooter game that changes based on the player’s actions? We can do this by taking advantage of another Game Sync type called States.

For simplicity’s sake, we’ll just use two player states to transition between two different segments of music – a simple string ensemble phrase for when the player is stationary, and a more complex string ensemble phrase for when the player is moving. The music ticks along at 124 BPM in 4/4 time, and is in the key of Bb minor. Wwise gives you the ability to transition between these separate states in a musical way: you can trigger States to change on the beat or bar of a musical phrase rather than relying on a simple crossfade.

Switch back to Wwise and change the Layout to Interactive Music by tapping F10. Then, switch to the Game Syncs tab in the Project Explorer view.

We are going to create two new States (Idle, Movement). First, create a new State Group by right-clicking on the Default Work Unit in the States folder, and name it “MainStateGroup”. Then create two State child objects within this group and call them “Idle” and “Movement”.

Creating our States

Switch to the Audio tab of the Project Explorer view, and right-click on the Default Work Unit of the Interactive Music Hierarchy folder. Create a new Music Switch Container, and call it “SwitchContainer”. Then, right-click on the newly created SwitchContainer, and create two child objects of type Music Playlist. Name these “Music Playlist Idle” and “Music Playlist Movement”.

Creating some Interactive Music playlists in Wwise

We now need to associate the States that we created with the SwitchContainer object. Click on the SwitchContainer object and select the States tab in the Music Switch Container Property Editor. Click the button labelled “Add >>” and add the MainStateGroup that we created.

Associating States with the Music Switch Container

Next, switch to the Transitions tab. There should already be a default transition visible, so select it. You’ll notice two areas below the selected transition called “Source” and “Destination”. We want nice and clean musical transitions, so under “Source”, we’ll select Exit source at “Next Beat”. We can also add a Fade-out with a 1 second Fade time, an Offset of 0.5 seconds, and a logarithmic curve. You should end up with something that looks like this:

Adjusting the Interactive Music transition properties

We can also add a Fade-in with similar values in the Destination section. For the Destination, we will also change Sync to “Same Time as Playing Segment”. Next, we’ll need to add the MainStateGroup to the Music Switch Container Association Editor by clicking the button labelled “>>”. Finally, we’ll need to drag the two Music Playlist child objects from the Interactive Music Hierarchy to their respective States within the Music Switch Container Association Editor. This is a crucial step and one that I found easy to forget.

Music Switch Container

Now, switch to the General Settings of the Music Switch Container Property Editor. Update the Tempo to 124 in the Time Settings. The Music Playlist children of the SwitchContainer will automatically update their Tempo properties to reflect the change in the parent object.

Now we can add the music files to the Wwise project. This is most easily done by dragging and dropping the correct files on to the Music Playlist objects, which will automatically create a new Music Segment containing a Music Track representing the actual music file. However, you can still add these children manually by right-clicking on the SwitchContainer and creating a new Music Segment child and then creating a Music Track child of the Music Segment.

Drag and drop “StringPhraseIdle.wav” on top of Music Playlist Idle, then drag and drop “StringPhraseMovement.wav” on top of Music Playlist Movement.

Adding our music files

Once the Music Segment children have been created, you still need to add them to the Music Playlist Editor. With the Music Playlist Idle object selected, drag and drop the “StringPhraseIdle” Music Segment to the Music Playlist Editor. Change the Loop Count on the Group to be Infinite by clicking the down arrow once. Repeat this process for the “Movement” Music Playlist.

Drag and drop the Music Segment to the appropriate Playlist Editor object

At this point, you can audition the interactive music score by selecting the SwitchContainer object, tapping play, and switching between the two states in the Transport Control view. Notice how the music changes on the beat!

One more thing…

We want this music to start when the game starts, so we’ll need to add a Play action to the StartGame Event, which will cause the SwitchContainer to start playing. Switch back to the Designer Layout by tapping F5, and add the appropriate Action to the StartGame Event. It should look like this now:

Adding a Play action on the SwitchContainer object in the StartGame Event

You’ll need to add the Interactive Music Hierarchy folder to the SoundBank. Regenerate your SoundBank and switch back to Unity.

We need to add some code that will check if the player character is moving or not. A simple way to do this is to add the following code to the Move() method of PlayerMovement.cs:

if (h != 0 || v != 0)
{
    AkSoundEngine.SetState("MainStateGroup", "Movement");
}
else
{
    AkSoundEngine.SetState("MainStateGroup", "Idle");
}

The variables h and v represent the horizontal and vertical input axis values. If either is non-zero, the player character is moving, and we update the State accordingly.
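Because Move() runs every frame, this posts a SetState call every frame even when nothing has changed. A slightly tidier sketch only touches the State Group on actual transitions – note that the lastMoving field and the helper method are hypothetical additions of mine, not part of the tutorial script:

```csharp
// Inside PlayerMovement.cs (sketch; lastMoving and UpdateMusicState
// are hypothetical additions, not part of the tutorial script).
bool lastMoving = false;

void UpdateMusicState (float h, float v)
{
    bool moving = (h != 0f || v != 0f);
    if (moving != lastMoving)
    {
        // Only post a State change when movement starts or stops.
        AkSoundEngine.SetState ("MainStateGroup",
                                moving ? "Movement" : "Idle");
        lastMoving = moving;
    }
}
```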

If you play the scene now, you should be able to hear the music change according to the player movements!

Audio busses & mixing: Side chaining/Ducking

Now that we’ve added some music to the scene, you may find that it gets a bit too busy sonically when the player is firing their weapon. A possible solution is to attenuate the music volume while the player fires the weapon. This is referred to as side chaining, or ducking. Wwise makes it very easy to achieve this effect through creative use of audio busses.

Create two Audio busses as children of the Master Audio Bus in the Master-Mixer Hierarchy. Name them “Music Bus” and “Weapon Bus”.

Adding audio busses

We’ll need to route the rifle_shot SFX object and the SwitchContainer object to their respective audio busses. You can do this by changing the Output Bus property in the Sound Property Editor and the Music Switch Container Property Editor, respectively.

Configuring Output Bus properties

Configuring Output Bus properties

Finally, we’ll add the actual side chaining by selecting the Weapon Bus from the Master-Mixer Hierarchy and inserting the Music Bus in its Auto-ducking property. We can also set the target volume of the Auto-ducking: I chose -9 dB, which means that the Music Bus volume will attenuate by 9 dB when a signal passes through the Weapon Bus. You can also set the time that the side chaining takes to reach the target volume, as well as an attenuation curve shape. You should wind up with something like this:

Side chaining (Auto-ducking) in Wwise

Regenerate your SoundBank and play the scene in Unity. You should now be able to hear the music volume get turned down while the player fires the weapon, and return to full volume when the weapon is not in use.

Conclusion

This article serves as a quick introduction to Wwise integration with a Unity project and touches on a small portion of Wwise functionality. While the steps outlined above work within the Unity Editor, the SoundBanks that we generated would have to be moved to Unity’s Streaming Assets folder when deploying your finished project to your platform of choice. Thank you for reading!