
Web Audio and 3D Soundscapes: Implementation

This post is part of a series called HTML5 Web Audio and 3D Soundscapes.
Web Audio and 3D Soundscapes: Introduction

In this tutorial we will wrap Web Audio in a simple API that focuses on playing sounds within a 3D coordinate space, and can be used for immersive interactive applications including, but not limited to, 3D games.

This tutorial is the second in a two-part series. If you have not read the first tutorial yet, you should do so before continuing, because it introduces the various Web Audio elements we will be using here.


Before we get started, here's a small demonstration that uses the simplified API that we will be covering in this tutorial. Sounds (represented by the white squares) are randomly positioned and played in a 3D coordinate space using the head-related transfer function (HRTF) that Web Audio provides for us.

The source files for the demonstration are attached to this tutorial.


Because the simplified API (AudioPlayer) has already been created for this tutorial and is available for download, what we are going to do here is take a broad look at the AudioPlayer API and the code that powers it.



The AudioPlayer class contains our simplified API and is exposed on the window object alongside the standard Web Audio classes if, and only if, Web Audio is supported by the web browser. This means we should check for the existence of the class before we attempt to use it.

(We could have tried to create a new AudioPlayer object within a try...catch statement, but a simple conditional check works perfectly well.)
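As a sketch of that conditional check (createPlayer and the injected global object are illustrative names, not the tutorial's actual source; in a browser you would pass window):

```javascript
// Minimal sketch of the feature check described above. `createPlayer` and
// the injected `global` object are illustrative names only.
function createPlayer(global) {
  if (typeof global.AudioPlayer === "function") {
    return new global.AudioPlayer(); // Web Audio is supported
  }
  return null; // AudioPlayer was not exposed, so Web Audio is unavailable
}
```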

Behind the scenes, audioPlayer creates a new AudioContext object and a new GainNode object for us, and connects the GainNode object to the destination node exposed by the AudioContext object.

When sounds are created and played they will be connected to the m_gain node, which allows us to control the volume (amplitude) of all the sounds easily.
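That setup can be sketched as follows. The initAudio helper and the injected constructor are assumptions made here for illustration; createGain(), connect(), and destination are the standard Web Audio API.

```javascript
// Sketch of the graph audioPlayer builds on construction. `initAudio` is an
// illustrative name; m_context and m_gain mirror the member names mentioned
// in this tutorial.
function initAudio(AudioContextCtor) {
  var m_context = new AudioContextCtor();
  var m_gain = m_context.createGain();   // master gain node for all sounds
  m_gain.connect(m_context.destination); // gain -> speakers
  return { m_context: m_context, m_gain: m_gain };
}
```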

The audioPlayer also configures the audio listener, exposed by m_context, so it matches the common 3D coordinate system used with WebGL. The positive z axis points at the viewer (in other words, it points out of the 2D screen), the positive y axis points up, and the positive x axis points to the right.

The listener's position is fixed at the origin (0, 0, 0); it always sits at the centre of the audio coordinate system.
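The listener configuration described above can be sketched like this; configureListener is an illustrative helper, while setPosition and setOrientation are the AudioListener methods available at the time of writing.

```javascript
// Sketch of the listener setup audioPlayer performs; `listener` is the
// AudioListener exposed by the AudioContext (context.listener).
function configureListener(listener) {
  listener.setPosition(0, 0, 0);     // the listener sits at the origin
  listener.setOrientation(0, 0, -1,  // "front" vector: into the screen
                          0, 1, 0);  // "up" vector: the positive y axis
}
```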

Loading Sounds

Before we can create or play any sounds, we need to load the sound files; luckily, audioPlayer takes care of all the hard work for us. It exposes a load(...) function that we can use to load the sounds, plus three event handlers that allow us to keep track of the load progress.

The set of sound formats that are supported is browser dependent. For example, Chrome and Firefox support OGG Vorbis but Internet Explorer doesn't. All three browsers support MP3, which is handy, but the problem with MP3 is the lack of seamless sound looping—the MP3 format is simply not designed for it. However, OGG Vorbis is, and can loop sounds perfectly.

If the load(...) function is called multiple times, audioPlayer pushes the requests into a queue and loads them sequentially. When all of the queued sounds have been loaded (and decoded), the onloadcomplete event handler is called.
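A minimal sketch of that sequential queue follows. SoundQueue and loadFile are illustrative stand-ins for audioPlayer's internals, not its actual source: loadFile represents the request-and-decode step and invokes its callback when the file is ready.

```javascript
// Sequential load queue: one file in flight at a time, and a completion
// callback once the queue drains.
function SoundQueue(loadFile, onloadcomplete) {
  var queue = [];
  var busy = false;
  function next() {
    if (queue.length === 0) {
      busy = false;
      onloadcomplete(); // every queued file has been loaded and decoded
      return;
    }
    loadFile(queue.shift(), next); // load one file, then move to the next
  }
  this.load = function (path) {
    queue.push(path);
    if (!busy) {
      busy = true;
      next();
    }
  };
}
```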

Behind the scenes, audioPlayer uses a single XMLHttpRequest object to load the sounds. The responseType of the request is set to "arraybuffer", and when the file has loaded the array buffer is sent to m_context for decoding.

If the loading and decoding of a file is successful, audioPlayer will either load the next file in the queue (if the queue is not empty) or let us know that all the files have been loaded.
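The single-file load step can be sketched like this. The loadSound helper and its injected parameters are assumptions for illustration; XMLHttpRequest's open/send/responseType and AudioContext's decodeAudioData are the real APIs involved.

```javascript
// Sketch of one request-and-decode step. `xhr` is an XMLHttpRequest and
// `context` an AudioContext; both are injected so the logic is easy to test.
function loadSound(xhr, context, url, onReady, onError) {
  xhr.open("GET", url, true);
  xhr.responseType = "arraybuffer"; // we want raw bytes for decoding
  xhr.onload = function () {
    // hand the encoded bytes to Web Audio for decoding
    context.decodeAudioData(xhr.response, onReady, onError);
  };
  xhr.send();
}
```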

Creating Sounds

Now that we have loaded some sound files we can create and play our sounds. We first need to tell audioPlayer to create the sounds, and this is done by using the create(...) function exposed by audioPlayer.

We are free to create as many sounds as we need even if we have only loaded a single sound file.

The sound file path passed to the create(...) function simply tells audioPlayer which file the created sound should use. If the specified sound file has not been loaded when the create(...) function is called, a runtime error will be thrown.
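Internally, create(...) amounts to a lookup plus a new sound descriptor. The sketch below is illustrative (createSound, the buffers map, and the descriptor fields are assumed names, not the tutorial's source), but it shows the behaviour described above, including the runtime error for an unloaded file.

```javascript
// Sketch of what create(...) does: look up the decoded buffer for the given
// path and mint a fresh sound descriptor. `buffers` is assumed to map file
// paths to decoded AudioBuffer objects.
function createSound(buffers, path) {
  if (!(path in buffers)) {
    throw new Error("Sound file not loaded: " + path);
  }
  return { buffer: buffers[path], x: 0, y: 0, z: 0, playing: false };
}
```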

Playing Sounds

When we have created one or more sounds, we are free to play those sounds whenever we need to. To play a sound, we use the aptly named play(...) function exposed by audioPlayer.

We can also pass a Boolean to the play(...) function to indicate whether the sound should loop. If the Boolean is true, the sound will loop continuously until it is stopped.

To stop a sound, we can use the stop(...) function.

The isPlaying(...) function lets us know whether a sound is currently playing.

Behind the scenes, the audioPlayer has to do a surprising amount of work to get a sound to play, due to the modular nature of Web Audio. Whenever a sound needs to be played, audioPlayer has to create new AudioBufferSourceNode and PannerNode objects, configure and connect them, and then connect the sound to the m_gain node. Thankfully, Web Audio is highly optimized, so the creation and configuration of new audio nodes rarely causes any noticeable overhead.
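That per-play wiring can be sketched like this. The startSound helper and the sound descriptor are illustrative assumptions; createBufferSource(), createPanner(), and the node connections are the standard Web Audio API.

```javascript
// Sketch of the node graph built for each play(...) call. `context` is an
// AudioContext, `gain` the shared master GainNode, and `sound` a descriptor
// holding the decoded buffer and position.
function startSound(context, gain, sound, loop) {
  var source = context.createBufferSource(); // plays the decoded buffer
  var panner = context.createPanner();       // positions it in 3D space
  source.buffer = sound.buffer;
  source.loop = !!loop;
  panner.setPosition(sound.x, sound.y, sound.z);
  source.connect(panner);  // source -> panner -> master gain -> destination
  panner.connect(gain);
  source.start(0);
  return source;           // kept around so stop(...) can call source.stop(0)
}
```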

Playing sounds is obviously useful, but the purpose of audioPlayer is to play sounds within a 3D coordinate system, so we should probably set the sound positions before playing them. audioPlayer exposes a few functions that allow us to do just that.

Positioning Sounds

  • The setX(...) and getX(...) functions exposed by audioPlayer can be used to set and get the position of a sound along the coordinate system's x axis.
  • The setY(...) and getY(...) functions can be used to set and get the position of a sound along the coordinate system's y axis.
  • The setZ(...) and getZ(...) functions can be used to set and get the position of a sound along the coordinate system's z axis.
  • Finally, the helpful setPosition(...) function can be used to set the position of a sound along the coordinate system's x, y, and z axes in a single call.
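A sketch of the combined setter follows; the function, the sound descriptor, and its field names are assumptions made here for illustration, while setPosition on the panner node is the real (at the time of writing) PannerNode method.

```javascript
// Sketch of setPosition(...): store the coordinates on the sound descriptor
// and, if the sound is currently playing, keep its live panner node in sync.
function setPosition(sound, x, y, z) {
  sound.x = x;
  sound.y = y;
  sound.z = z;
  if (sound.panner) {
    sound.panner.setPosition(x, y, z); // update the live node, if any
  }
}
```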

The farther a sound is from the centre of the coordinate system, the quieter it will be. Beyond a distance of 10000 (Web Audio's default maximum distance) a sound is effectively inaudible.


Master Volume

We can control the global (master) volume of the sounds by using the setVolume(...) and getVolume(...) functions exposed by audioPlayer.

The setVolume(...) function also has a second parameter that can be used to fade the volume over a period of time. For example, to fade the volume to zero over a two-second period, we would pass 0.0 as the target volume and 2.0 as the fade time.

The tutorial demo takes advantage of this to fade in the sounds smoothly.

Behind the scenes, the audioPlayer simply tells the m_gain node to linearly change the gain value whenever the volume needs to be changed.

audioPlayer enforces a minimum fade time of 0.01 seconds, to ensure that steep changes in volume don't cause any audible clicks or pops.
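Putting those two details together, the fade can be sketched like this. The fadeVolume helper is an illustrative name, while the AudioParam methods on gain.gain are the standard Web Audio API; the 0.01-second floor is the minimum fade time mentioned above.

```javascript
// Sketch of the linear ramp audioPlayer schedules on m_gain whenever the
// master volume changes.
function fadeVolume(context, gainNode, volume, time) {
  var duration = Math.max(time || 0, 0.01); // enforce the minimum fade time
  var now = context.currentTime;
  gainNode.gain.cancelScheduledValues(now);
  gainNode.gain.setValueAtTime(gainNode.gain.value, now); // anchor the ramp
  gainNode.gain.linearRampToValueAtTime(volume, now + duration);
}
```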


Conclusion

In this tutorial, we took a look at one way to wrap Web Audio in a simple API that focuses on playing sounds within a 3D coordinate space for use in (among other applications) 3D games.

Due to the modular nature of Web Audio, programs that use Web Audio can get complex pretty quickly, so I hope this tutorial has been of some use to you. When you understand how Web Audio works, and how powerful it is, I'm sure you will have a lot of fun with it.

Don't forget the AudioPlayer and demonstration source files are available on GitHub and ready for download. The source code is commented fairly well so it's worth taking the time to have a quick look at it.

If you have any feedback or questions, please feel free to post a comment below.

