
Noise: Creating a Synthesizer for Retro Sound Effects - Core Engine

This post is part of a series called Noise: Creating a Synthesizer for Retro Sound Effects.
Noise: Creating a Synthesizer for Retro Sound Effects - Introduction
Noise: Creating a Synthesizer for Retro Sound Effects - Audio Processors

This is the second in a series of tutorials in which we will create a synthesizer based audio engine that can generate sounds for retro-styled games. The audio engine will generate all of the sounds at runtime without the need for any external dependencies such as MP3 files or WAV files. The end result will be a working library that can be dropped effortlessly into your games.

If you have not already read the first tutorial in this series, you should do that before continuing.

The programming language used in this tutorial is ActionScript 3.0 but the techniques and concepts used can easily be translated into any other programming language that provides a low-level sound API.

You should make sure you have Flash Player 11.4 or higher installed for your browser if you want to use the interactive examples in this tutorial.

Audio Engine Demo

By the end of this tutorial all of the core code required for the audio engine will have been completed. The following is a simple demonstration of the audio engine in action.

Only one sound is being played in that demonstration, but its frequency is being randomised along with its release time. The sound also has a modulator attached to it that modulates the sound's amplitude to produce a tremolo effect, and the frequency of the modulator is also being randomised.

AudioWaveform Class

The first class that we will create will simply hold constant values for the waveforms that the audio engine will use to generate the audible sounds.

Start by creating a new class package called noise, and then add the following class to that package:
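A minimal version of the class only needs one constant per waveform; the exact values used here are illustrative, they just need to be distinct:

```actionscript
package noise {
    public final class AudioWaveform {
        // one constant for each waveform the audio engine supports
        static public const PULSE:int = 0;
        static public const SAWTOOTH:int = 1;
        static public const SINE:int = 2;
        static public const TRIANGLE:int = 3;
    }
}
```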

We will also add a static public method to the class that can be used to validate a waveform value. The method will return true or false to indicate whether or not the waveform value is valid.
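A simple comparison against the four constants is all that is needed (the method name isValid() is an illustrative choice):

```actionscript
static public function isValid( waveform:int ):Boolean {
    if( waveform == PULSE || waveform == SAWTOOTH
     || waveform == SINE  || waveform == TRIANGLE ) {
        return true;
    }
    return false;
}
```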

Finally, we should prevent the class from being instantiated because there is no reason for anyone to create instances of this class. We can do this within the class constructor:
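Throwing an exception from the constructor is enough to prevent instantiation:

```actionscript
public function AudioWaveform() {
    // this class only holds static constants, so instantiation is an error
    throw new Error( "AudioWaveform class cannot be instantiated" );
}
```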

This class is now complete.

Preventing enum-style classes, all-static classes, and singleton classes from being instantiated directly is good practice, because there is no reason to create instances of these types of class. Some programming languages, such as Java, enforce this automatically for certain class types, but currently in ActionScript 3.0 we need to enforce this behaviour manually within the class constructor.

Audio Class

Next on the list is the Audio class. This class is similar in nature to the native ActionScript 3.0 Sound class: every audio engine sound will be represented by an Audio class instance.

Add the following barebones class to the noise package:
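For now the class is just an empty shell:

```actionscript
package noise {
    public class Audio {
        public function Audio() {}
    }
}
```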

The first things that need to be added to the class are properties that will tell the audio engine how to generate the sound wave whenever the sound is played. These properties include the type of waveform used by the sound, the frequency and amplitude of the waveform, the duration of the sound, and its release time (how quickly it fades out). All of these properties will be private and accessed via getters/setters:
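Something along these lines will do; the default values shown here are illustrative choices, not requirements:

```actionscript
private var m_waveform:int = AudioWaveform.PULSE;
private var m_frequency:Number = 100.0; // hertz
private var m_amplitude:Number = 0.5;   // 0.0 - 1.0
private var m_duration:Number = 0.2;    // seconds
private var m_release:Number = 0.2;     // seconds
```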

As you can see, we have set a sensible default value for each property. The amplitude is a value in the range 0.0 to 1.0, the frequency is in hertz, and the duration and release times are in seconds.

We also need to add two more private properties for the modulators that can be attached to the sound; again these properties will be accessed via getters/setters:
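Both properties default to null, meaning no modulator is attached:

```actionscript
private var m_frequencyModulator:AudioModulator = null;
private var m_amplitudeModulator:AudioModulator = null;
```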

Finally, the Audio class will contain a few internal properties that will only be accessed by the AudioEngine class (we will create that class shortly). These properties do not need to be hidden behind getters/setters:
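Marking them internal keeps them visible only to the other classes in the noise package:

```actionscript
internal var position:Number = 0.0; // playback position, in seconds
internal var playing:Boolean = false;
internal var releasing:Boolean = false;
internal var samples:Vector.<Number> = null; // cached waveform samples
```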

The position is in seconds; it allows the AudioEngine class to keep track of the sound's position while the sound is playing, which is needed to calculate the waveform samples for the sound. The playing and releasing properties tell the AudioEngine what state the sound is in, and the samples property is a reference to the cached waveform samples that the sound is using. The use of these properties will become clear when we create the AudioEngine class.

To finish the Audio class we need to add the getters/setters:
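The following sketch shows one reasonable implementation; the clamping ranges in the setters are illustrative choices:

```actionscript
public function get waveform():int {
    return m_waveform;
}
public function set waveform( value:int ):void {
    // fall back to a pulse wave if the value is not a valid waveform
    m_waveform = AudioWaveform.isValid( value ) ? value : AudioWaveform.PULSE;
}

[Inline]
public final function get frequency():Number {
    return m_frequency;
}
public function set frequency( value:Number ):void {
    // clamp the frequency to an audible range
    m_frequency = value < 20.0 ? 20.0 : ( value > 20000.0 ? 20000.0 : value );
}

[Inline]
public final function get amplitude():Number {
    return m_amplitude;
}
public function set amplitude( value:Number ):void {
    m_amplitude = value < 0.0 ? 0.0 : ( value > 1.0 ? 1.0 : value );
}

[Inline]
public final function get duration():Number {
    return m_duration;
}
public function set duration( value:Number ):void {
    m_duration = value < 0.0 ? 0.0 : value;
}

[Inline]
public final function get release():Number {
    return m_release;
}
public function set release( value:Number ):void {
    m_release = value < 0.0 ? 0.0 : value;
}

public function get frequencyModulator():AudioModulator {
    return m_frequencyModulator;
}
public function set frequencyModulator( value:AudioModulator ):void {
    m_frequencyModulator = value;
}

public function get amplitudeModulator():AudioModulator {
    return m_amplitudeModulator;
}
public function set amplitudeModulator( value:AudioModulator ):void {
    m_amplitudeModulator = value;
}
```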

You no doubt noticed the [Inline] metadata tag bound to a few of the getter functions. That metadata tag is a shiny new feature of Adobe's latest ActionScript 3.0 compiler, and it does what it says on the tin: it inlines (expands) the contents of a function. This is extremely useful for optimisation when used sensibly, and generating dynamic audio at runtime is certainly something that requires optimisation.

AudioModulator Class

The purpose of the AudioModulator class is to allow the amplitude and frequency of Audio instances to be modulated to create useful and crazy sound effects. Modulators are similar to Audio instances: they have a waveform, an amplitude, and a frequency, but they don't produce any audible sound themselves; they only modify audible sounds.

First thing first, create the following barebones class in the noise package:
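The barebones class mirrors the Audio class:

```actionscript
package noise {
    public class AudioModulator {
        public function AudioModulator() {}
    }
}
```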

Now let's add the private properties:
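Again, the default values shown here are illustrative:

```actionscript
private var m_waveform:int = AudioWaveform.SINE;
private var m_frequency:Number = 4.0; // hertz
private var m_amplitude:Number = 1.0;
private var m_shift:Number = 0.0;     // 0.0 - 1.0
```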

If you are thinking this looks very similar to the Audio class then you are correct: everything except for the shift property is the same.

To understand what the shift property does, think of one of the basic waveforms that the audio engine is using (pulse, sawtooth, sine, or triangle), and then imagine a vertical line running straight through the waveform at any position you like. The horizontal position of that vertical line is the shift value; it is a value in the range 0.0 to 1.0 that tells the modulator where to begin reading its waveform from, and it can have a profound effect on the modifications the modulator makes to a sound's amplitude or frequency.

As an example, if the modulator was using a sine waveform to modulate the frequency of a sound, and the shift was set at 0.0, the sound's frequency would first rise and then fall due to the curvature of the sine wave. However, if the shift was set at 0.5 the sound's frequency would first fall and then rise.

Anyway, back to the code. The AudioModulator contains one internal method that is only used by the AudioEngine; the method is as follows:
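The sketch below assumes the AudioEngine class (created later in this tutorial) exposes its cached one-hertz waveform samples as internal PULSE, SAWTOOTH, SINE, and TRIANGLE vectors, each 44,100 samples long:

```actionscript
[Inline]
internal final function process( time:Number ):Number {
    var p:int = 0;      // sample position within the cached waveform
    var v:Number = 0.0; // sample value
    // the shift offsets the read position by a fraction of one cycle
    if( m_shift != 0.0 ) {
        time += m_shift / m_frequency;
    }
    // map the time to a sample position within the one-hertz waveform
    p = int( 44100 * m_frequency * time ) % 44100;
    switch( m_waveform ) {
        case AudioWaveform.PULSE:    v = AudioEngine.PULSE[p];    break;
        case AudioWaveform.SAWTOOTH: v = AudioEngine.SAWTOOTH[p]; break;
        case AudioWaveform.SINE:     v = AudioEngine.SINE[p];     break;
        case AudioWaveform.TRIANGLE: v = AudioEngine.TRIANGLE[p]; break;
    }
    // scale the sample by the modulator's amplitude
    return v * m_amplitude;
}
```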

That function is inlined because it is used a lot, and when I say "a lot" I mean 44,100 times a second for each playing sound that has a modulator attached to it (this is where inlining becomes incredibly valuable). The function simply grabs a sound sample from the waveform the modulator is using, adjusts that sample's amplitude, and then returns the result.

To finish the AudioModulator class we need to add the getters/setters:
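These mirror the Audio class getters/setters; the clamping ranges are again illustrative:

```actionscript
public function get waveform():int {
    return m_waveform;
}
public function set waveform( value:int ):void {
    m_waveform = AudioWaveform.isValid( value ) ? value : AudioWaveform.SINE;
}

[Inline]
public final function get frequency():Number {
    return m_frequency;
}
public function set frequency( value:Number ):void {
    m_frequency = value < 0.01 ? 0.01 : ( value > 100.0 ? 100.0 : value );
}

[Inline]
public final function get amplitude():Number {
    return m_amplitude;
}
public function set amplitude( value:Number ):void {
    m_amplitude = value < 0.0 ? 0.0 : value;
}

[Inline]
public final function get shift():Number {
    return m_shift;
}
public function set shift( value:Number ):void {
    m_shift = value < 0.0 ? 0.0 : ( value > 1.0 ? 1.0 : value );
}
```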

And that wraps up the AudioModulator class.

AudioEngine Class

Now for the big one: the AudioEngine class. This is an all-static class that manages pretty much everything related to Audio instances and sound generation.

Let's start with a barebones class in the noise package as usual:
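The imports shown here will be needed shortly for the sound stream:

```actionscript
package noise {
    import flash.events.SampleDataEvent;
    import flash.media.Sound;
    import flash.media.SoundChannel;
    import flash.utils.ByteArray;

    public final class AudioEngine {
        public function AudioEngine() {
            // all-static class, so instantiation is an error
            throw new Error( "AudioEngine class cannot be instantiated" );
        }
    }
}
```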

As mentioned before, all-static classes should not be instantiated, hence the exception that is thrown in the class constructor if someone does try to instantiate the class. The class is also final because there's no reason to extend an all-static class.

The first things that will be added to this class are internal constants. These constants will be used to cache the samples for each of the four waveforms that the audio engine is using. Each cache contains 44,100 samples, which equates to one second of audio, or a single cycle of a one-hertz waveform. This allows the audio engine to produce really clean low-frequency sound waves.

The constants are as follows:
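One fixed-length vector per waveform:

```actionscript
static internal const PULSE:Vector.<Number>    = new Vector.<Number>( 44100 );
static internal const SAWTOOTH:Vector.<Number> = new Vector.<Number>( 44100 );
static internal const SINE:Vector.<Number>     = new Vector.<Number>( 44100 );
static internal const TRIANGLE:Vector.<Number> = new Vector.<Number>( 44100 );
```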

There are also two private constants used by the class:
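2048 samples is the minimum buffer size that the ActionScript 3.0 sound API will accept:

```actionscript
static private const BUFFER_SIZE:int = 2048;         // samples per request
static private const SAMPLE_TIME:Number = 1.0 / 44100.0; // seconds per sample
```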

The BUFFER_SIZE is the number of sound samples that will be passed to the ActionScript 3.0 sound API whenever a request for sound samples is made. This is the smallest number of samples allowed and it results in the lowest possible sound latency. The number of samples could be increased to reduce CPU usage but that would increase the sound latency. The SAMPLE_TIME is the duration of a single sound sample, in seconds.

And now for the private variables:
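The default global amplitude of 0.5 used here is an illustrative choice:

```actionscript
static private var m_position:Number = 0.0;   // sound stream time, in seconds
static private var m_amplitude:Number = 0.5;  // global secondary amplitude
static private var m_soundStream:Sound = null;
static private var m_soundChannel:SoundChannel = null;
static private var m_audioList:Vector.<Audio> = new Vector.<Audio>();
static private var m_sampleList:Vector.<Number> = new Vector.<Number>( BUFFER_SIZE );
```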

  • The m_position is used to keep track of the sound stream time, in seconds.
  • The m_amplitude is a global secondary amplitude for all of the Audio instances that are playing.
  • The m_soundStream and m_soundChannel shouldn't need any explanation.
  • The m_audioList contains references to any Audio instances that are playing.
  • The m_sampleList is a temporary buffer used to store sound samples when they are requested by the ActionScript 3.0 sound API.

Now, we need to initialize the class. There are numerous ways of doing this but I prefer something nice and simple, a static class constructor:
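In ActionScript 3.0, code placed directly inside the class body runs once when the class is first initialised, which gives us a static class constructor. The waveform formulas below are one straightforward way to generate a single cycle of each wave; treat them as a sketch:

```actionscript
// static class constructor
{
    var i:int = 0;
    var n:int = 44100;
    var p:Number = 0.0;
    // generate and cache a single one-hertz cycle of each waveform
    while( i < n ) {
        p = i / n; // normalised position within the cycle, 0.0 - 1.0
        PULSE[i]    = p < 0.5 ? 1.0 : -1.0;
        SAWTOOTH[i] = p < 0.5 ? p * 2.0 : p * 2.0 - 2.0;
        SINE[i]     = Math.sin( p * 2.0 * Math.PI );
        TRIANGLE[i] = p < 0.25 ? p * 4.0
                    : p < 0.75 ? 2.0 - p * 4.0
                    : p * 4.0 - 4.0;
        i++;
    }
    // create and start the sound stream; it runs for the lifetime of the app
    m_soundStream = new Sound();
    m_soundStream.addEventListener( SampleDataEvent.SAMPLE_DATA, onSampleData );
    m_soundChannel = m_soundStream.play();
}
```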

If you have read the previous tutorial in this series then you will probably see what's happening in that code: the samples for each of the four waveforms are being generated and cached, and this only happens once. The sound stream is also being instantiated and started and will run continuously until the app is terminated.

The AudioEngine class has three public methods that are used to play and stop Audio instances:
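A sketch of the three methods; the method names and the allowRelease behaviour in stop() are assumptions about how the engine lets sounds fade out rather than cutting them dead:

```actionscript
static public function play( audio:Audio ):void {
    if( audio.playing == false ) {
        m_audioList.push( audio );
    }
    // reset the sound's state
    audio.position = 0.0;
    audio.playing = true;
    audio.releasing = false;
    // cache a reference to the waveform samples the sound will use
    switch( audio.waveform ) {
        case AudioWaveform.PULSE:    audio.samples = PULSE;    break;
        case AudioWaveform.SAWTOOTH: audio.samples = SAWTOOTH; break;
        case AudioWaveform.SINE:     audio.samples = SINE;     break;
        case AudioWaveform.TRIANGLE: audio.samples = TRIANGLE; break;
    }
}

static public function stop( audio:Audio, allowRelease:Boolean = true ):void {
    if( audio.playing == false ) {
        return;
    }
    if( allowRelease ) {
        // skip to the end of the sound and let its release phase play out
        audio.position = audio.duration;
        audio.releasing = true;
        return;
    }
    audio.playing = false;
    audio.releasing = false;
}

static public function stopAll( allowRelease:Boolean = true ):void {
    var i:int = m_audioList.length;
    while( i-- > 0 ) {
        stop( m_audioList[i], allowRelease );
    }
}
```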

And here come the main audio processing methods, both of which are private:
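The first of the two is the SAMPLE_DATA event handler. A sketch:

```actionscript
static private function onSampleData( event:SampleDataEvent ):void {
    var i:int = 0;
    var s:Number = 0.0;
    var b:ByteArray = event.data;
    if( m_soundChannel == null ) {
        // play() has not returned yet, so output silence for this buffer
        while( i < BUFFER_SIZE ) {
            b.writeFloat( 0.0 ); // left channel
            b.writeFloat( 0.0 ); // right channel
            i++;
        }
        return;
    }
    // generate the samples for the sounds that are playing
    generateSamples();
    while( i < BUFFER_SIZE ) {
        s = m_sampleList[i] * m_amplitude;
        b.writeFloat( s ); // left channel
        b.writeFloat( s ); // right channel
        m_sampleList[i] = 0.0; // clear the buffer slot for reuse
        i++;
    }
    m_position += SAMPLE_TIME * BUFFER_SIZE;
}
```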

So, in the first if statement we are checking if the m_soundChannel is still null, and we need to do that because the SAMPLE_DATA event is dispatched as soon as the m_soundStream.play() method is invoked, and before the method gets a chance to return a SoundChannel instance.

The while loop rolls through the sound samples that have been requested by m_soundStream and writes them to the provided ByteArray instance. The sound samples are generated by the following method:
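A sketch of that method; it loops through the playing sounds, applies any attached modulators, applies the release fade, and mixes the resulting samples into m_sampleList:

```actionscript
static private function generateSamples():void {
    var i:int = m_audioList.length;
    var j:int = 0;
    var t:Number = 0.0; // stream time for the current sample
    var f:Number = 0.0; // modulated frequency
    var a:Number = 0.0; // modulated amplitude
    var audio:Audio = null;
    while( i-- > 0 ) {
        audio = m_audioList[i];
        j = 0;
        while( j < BUFFER_SIZE ) {
            t = m_position + j * SAMPLE_TIME;
            f = audio.frequency;
            a = audio.amplitude;
            // apply the modulators, if any are attached
            if( audio.frequencyModulator != null ) {
                f += audio.frequencyModulator.process( t );
            }
            if( audio.amplitudeModulator != null ) {
                a += audio.amplitudeModulator.process( t );
            }
            if( f < 0.0 ) { f = 0.0; } // guard against negative frequencies
            // fade the sound out during its release phase
            if( audio.position >= audio.duration ) {
                audio.releasing = true;
                a *= 1.0 - ( audio.position - audio.duration ) / audio.release;
            }
            // grab a sample from the sound's cached waveform and mix it in
            m_sampleList[j] += audio.samples[ int( 44100 * f * audio.position ) % 44100 ] * a;
            audio.position += SAMPLE_TIME;
            // stop the sound once its release phase has ended
            if( audio.position >= audio.duration + audio.release ) {
                audio.playing = false;
                break;
            }
            j++;
        }
        // remove finished sounds from the list
        if( audio.playing == false ) {
            m_audioList.splice( i, 1 );
        }
    }
}
```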

Finally, to finish things off, we need to add the getter/setter for the private m_amplitude variable:
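The setter clamps the value to the 0.0 to 1.0 range:

```actionscript
static public function get amplitude():Number {
    return m_amplitude;
}
static public function set amplitude( value:Number ):void {
    m_amplitude = value < 0.0 ? 0.0 : ( value > 1.0 ? 1.0 : value );
}
```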

And now I need a break!

Coming Up...

In the third and final tutorial in this series we will be adding audio processors to the audio engine. These will allow us to push all of the generated sound samples through processing units such as hard limiters and delays. We will also take a look at all of the code to see if anything can be optimised.

All of the source code for this tutorial series will be made available with the next tutorial.
