==The mixer: Theory of Sound Generation==
===The Buffer===
The sound engine uses a circular buffer, sometimes called a [http://en.wikipedia.org/wiki/Circular_buffer ring buffer]. It is a byte array in RAM logically segmented into two parts: one half plays while the other half is mixed. Each half contains exactly as many samples as there are scan lines in a video field (two interlaced fields make a frame), in this case 262. ''Update:'' there is now an option, "-DMIXER=1", to mix during the HSYNC period, which does not require this buffer and therefore saves RAM.
 
So at 15.7 kHz (the NTSC line rate), during the HSYNC pulse a byte is read from the circular buffer and output to the sound port (by means of [http://en.wikipedia.org/wiki/Pulse-width_modulation PWM]). HSYNC pulses occur non-stop, even during blanking intervals.
 
During each VSYNC (once, at the beginning of each field), the first operation performed is mixing the music for the next field. Naturally, the music mixer code is in assembler for optimal speed. A whole field's worth of music is mixed in one shot, and all four channels are mixed simultaneously without resorting to a temporary 16-bit signed buffer (not enough RAM!). This implies *all* registers are used during mixing.
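
To make the scheme concrete, here is a minimal C sketch of the split ring buffer. All names here are made up for illustration; the real kernel code is assembler:

 #include <stdint.h>
 
 #define SAMPLES_PER_FIELD 262
 
 static int8_t       mix_buf[SAMPLES_PER_FIELD*2]; // the circular buffer in RAM
 static unsigned int play_pos;                     // advanced once per HSYNC
 
 extern void pwm_out(int8_t sample);               // placeholder: recenters to
                                                   // unsigned for the PWM timer
 extern void mix_field(int8_t *dst, int n);        // placeholder 4-channel mixer
 
 // HSYNC interrupt, ~15.7 kHz: output one byte to the sound port.
 void hsync_output(void){
 	pwm_out(mix_buf[play_pos]);
 	play_pos=(play_pos+1)%(SAMPLES_PER_FIELD*2);
 }
 
 // VSYNC, once per field: mix the half that is NOT currently playing.
 void vsync_mix(void){
 	if(play_pos<SAMPLES_PER_FIELD)
 		mix_field(mix_buf+SAMPLES_PER_FIELD,SAMPLES_PER_FIELD); // playing 1st half
 	else
 		mix_field(mix_buf,SAMPLES_PER_FIELD);                   // playing 2nd half
 }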


===Sound Generation===
The engine uses a table made of short, repeating waveforms for the first 3 channels. Each wave is exactly 256 samples long (8-bit signed) and is force-aligned to a 256-byte boundary in ROM. Because of this, we only need an 8-bit pointer for the waveform's position. The position wraps automatically, effectively giving "free-running oscillators".
 
Using a tool like CoolEdit, it's easy to create waveforms that can vary from a simple square wave to a sine or filtered triangle.
 
The 4th channel is a noise channel and is based on a switchable 7/15-bit [http://en.wikipedia.org/wiki/LFSR LFSR]. The 7-bit mode sounds more metallic because the bit pattern repeats every 127 samples. The 15-bit mode sounds much more like white noise.
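
As an illustration, a switchable 7/15-bit LFSR can be modeled in C as below. The tap positions here are just one maximal-length choice and may differ from the kernel's:

 #include <stdint.h>
 
 static uint16_t lfsr=1;   // must never be zero
 static uint8_t  mode7=0;  // 1 = 7-bit (metallic), 0 = 15-bit (white-ish)
 
 int8_t noise_sample(void){
 	// Feedback: x^7+x^6+1 (period 127) or x^15+x^14+1 (period 32767)
 	uint16_t bit = mode7 ? ((lfsr>>6)^(lfsr>>5))&1
 	                     : ((lfsr>>14)^(lfsr>>13))&1;
 	lfsr = (uint16_t)((lfsr<<1)|bit) & (mode7 ? 0x7F : 0x7FFF);
 	return (lfsr&1) ? 127 : -128; // map the output bit to a signed 8-bit sample
 }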


Pitch is handled like in most MOD players out there, with a "step table". This ROM table consists of pre-calculated 8:8 fixed-point values that represent the input sample's (i.e. the 256-byte wave's) pointer increment per output sample (into the mix buffer). There is one fixed-point word per note, for a total of 127 notes. The wavetable is composed of 256-byte waves and each wave models exactly one "sound cycle"; for a triangle wave, it would contain: /\/ .
 
Let's say the mixing rate is 8 kHz and we want to play a C5. We look in the step table at note 48 (C5); its pre-calculated stepping at that mixing rate is 1.000, so for each output sample we increment the input pointer by exactly one. Now say we have a C6, an octave higher (so double the frequency). The stepping for this note will be 2.000, meaning that for each output sample we increment the input by two samples, effectively skipping one of them. You get the idea. Note that for high steppings a lot of samples are skipped, which, combined with wrapping, introduces aliasing. That can be somewhat minimized by using slowly rising/ending waves.
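
In C, the 8:8 stepping boils down to the following sketch (names invented; the real mixer keeps all of this in registers):

 #include <stdint.h>
 #include <avr/pgmspace.h>
 
 extern const int8_t   wave[256] PROGMEM;       // one 256-byte "sound cycle"
 extern const uint16_t step_table[127] PROGMEM; // 8:8 step for each note
 
 static uint16_t pos;  // 8:8 position: the high byte indexes the wave
 
 int8_t next_sample(uint8_t note){
 	pos += pgm_read_word(&step_table[note]);     // C6's 2.000 step is 0x0200
 	return (int8_t)pgm_read_byte(&wave[pos>>8]); // 8-bit index: free wrap
 }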


===Mixing Procedure===
The mixing procedure, for each of the 262 samples to mix, goes as follows (a C sketch of the loop follows the list):
* (For each channel) The next sample pointer is incremented according to the appropriate note's step (8:8 fixed point)
* (For each channel) The next sample is read from the wave table
* (For each channel) The sample is multiplied by its volume and added to a 16-bit signed accumulator
* The accumulator is divided by 2 and clipped back to 8 bits
* The final value is stored in the mix buffer
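
A C model of that loop (the mix_field placeholder from the earlier sketch) could look like this. It is illustrative only: the volume scaling detail is an assumption, and the kernel's version is hand-tuned assembler:

 #include <stdint.h>
 
 struct Channel { uint16_t pos, step; uint8_t volume; };
 extern struct Channel chan[4];
 extern int8_t read_source(uint8_t ch, uint8_t idx); // wavetable or LFSR
 
 void mix_field(int8_t *dst, int n){
 	for(int i=0; i<n; i++){
 		int16_t acc=0;
 		for(uint8_t ch=0; ch<4; ch++){
 			chan[ch].pos += chan[ch].step;              // 8:8 fixed-point step
 			int8_t s = read_source(ch, chan[ch].pos>>8);
 			acc += ((int16_t)s * chan[ch].volume) >> 8; // scale by volume
 		}
 		acc >>= 1;                                       // divide by 2
 		if(acc>127) acc=127; else if(acc<-128) acc=-128; // clip to 8 bits
 		dst[i] = (int8_t)acc;                            // store in mix buffer
 	}
 }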


===Mixing on the Fly===
Note that it would be possible to mix on the fly, not requiring the circular buffer at all. There are basically two reasons why mixing isn't (currently) done on the fly:
* When rendering a field (especially in mode 2), all cycles are taken. At the end of each scanline there's a short period of time where the electron beam returns to the beginning of the next line: the horizontal blanking. The problem is that there aren't enough cycles left when rendering a field, especially considering the setup required (loading all registers, etc.). That said, simple video modes like mode 1 have plenty of slack, so on-the-fly mixing could be possible.


* The rendering phase is coded in assembler because it requires "cycle-perfect" code; each clock cycle is important or the image will shear. Code like the mixer and music engine is best kept outside the rendering path to allow easy modifications or optimizations. Currently they are executed right after the VSYNC "cycle-perfect" code is done. This lasts for about 25 scanlines, after which control is returned to the main program for a bunch of other lines, then rendering begins again.


==Music Replayer==
The music replayer sits on top of the mixer and was designed to play [http://en.wikipedia.org/wiki/Musical_Instrument_Digital_Interface MIDI] streams. MIDI is a very compact and space-efficient format for music. It is made of a continuous stream of events, each of which is separated in time by a number of 'ticks' (a delta-time value).
 
Events can be notes, tempo changes, modulation changes, etc., and can be associated with a specific channel or be global (like tempo events). Currently only the following types of events are supported; any other events, like NOTE OFF, will be filtered out by the conversion tool:
 
* Meta 0x2f : End of song
* Meta 0x06 : Marker (for the start and end of a loop; only two values are accepted, "S" for start of loop and "E" for end of loop)
* 0x90      : Note on
* 0xB0      : Controllers (Volume, Expression, Tremolo Level and Tremolo Rate)
* 0xC0      : Program Change (Patch change)
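
For illustration, consuming such a delta-time stream looks roughly like this. This is generic MIDI-style parsing with made-up names and an assumed byte layout (delta, status, data bytes, delta, ...), not the kernel's actual replayer:

 #include <stdint.h>
 
 static const uint8_t *song; // set to the stream's start when a song begins
 static uint8_t wait;        // ticks left before the next event
 static uint8_t playing;     // set by a StartSong-like call
 
 void replayer_tick(void){   // called once per tick
 	if(!playing) return;
 	if(wait){ wait--; return; }
 	for(;;){
 		uint8_t status=*song++;
 		if(status==0xFF){ playing=0; return; } // meta: treat as end of song
 		switch(status&0xF0){
 			case 0x90: song+=2; break;  // note on: note, velocity
 			case 0xB0: song+=2; break;  // controller: number, value
 			case 0xC0: song+=1; break;  // program change: patch number
 		}
 		wait=*song++;               // delta-time before the next event
 		if(wait){ wait--; return; } // a delta of 0 means "same tick"
 	}
 }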


The music engine also supports direct input through a MIDI port. To enable this code, the MIDI interface must be built and support added to the kernel using the MIDI_IN=1 compile option.


===MIDI Converter===
MIDI files can be converted to the Uzebox format using a console Java-based conversion tool. A DOS batch file makes it easier to call the tool. Since the conversion tool is Java based, you need to install the JRE (version 7+) and have its /bin directory on your system's PATH.


 $ java -cp ~/uzebox/tools/JavaTools/dist/uzetools.jar org.uzebox.tools.converters.midi.MidiConvert -h
 Uzebox (tm) MIDI converter 1.1
 (c)2009 Alec Bourque. This tool is released under the GNU GPL V3.
 
 usage: midiconv [options] inputfile outputfile
 Converts a MIDI song in format 0 or 1 to a Uzebox MIDI stream outputted as
 a C include file.
  -d         Prints debug info.
  -e <arg>   Force a loop end (specified in tick). Any existing loop end in
             the input will be discarded.
  -f <arg>   Speed correction factor (double). Defaults to 30.0
  -h         Prints this screen.
  -no1       Include note off events for channel 1
  -no2       Include note off events for channel 2
  -no3       Include note off events for channel 3
  -no4       Include note off events for channel 4
  -no5       Include note off events for channel 5
  -s <arg>   Force a loop start (specified in tick). Any existing loop
             start in the input will be discarded.
  -v <arg>   variable name used in the include file. Defaults to 'midisong'
 Ex: midiconv -s32 -vmy_song -ls200 -le22340 c:\mysong.mid c:\mysong.inc


'''Notes''':
* Ticks per quarter note should ideally be 120.
* Since the converter strips out note-off events to save space, you'll end up with "stuck" notes if your instruments don't have fade-out envelopes. There are three possible solutions:
*# Ensure all your patches include a fade-out or note-cut command.
*# Add notes with zero volume at the very end of the song.
*# In the conversion tool, keep note-off events by using the -no1, -no2, -no3, -no4, or -no5 switches.


===Using the Player===
When the conversion process is complete, it is pretty easy to play songs. Simply initialize the engine using:
  InitMusicPlayer(myPatches);
Then start the song using:
  StartSong(mySong);
Stop/pause the song using:
  StopSong();
And resume where you last stopped with:
  ResumeSong();
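
Putting it all together, a minimal program could look like this. This is a sketch: myPatches and mySong are assumed to come from your converted include files, and the file names here are made up:

 #include <uzebox.h>
 #include "data/patches.inc"  // defines myPatches (name assumed)
 #include "data/mysong.inc"   // defines mySong, per midiconv's -v switch
 
 int main(){
 	InitMusicPlayer(myPatches);
 	StartSong(mySong);
 	while(1){
 		WaitVsync(1);    // the kernel mixes the music once per field
 	}
 }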


==Patch, Instruments & FXs==
The music replayer engine works with the concept of "patches". Patches are sequences of commands that define how your notes or sound effects evolve over time. There are already a couple of patches, made for Megatris, regrouped in an include file. Add this include to your program file:
  #include "data/patches.inc"


Look into it, you will see something like this:

 //FX: "Echo Droplet"
 const char patch01[] PROGMEM ={ 
 0,
 0,PC_ENV_SPEED,-12,
 5,PC_NOTE_UP,12, 
 5,PC_NOTE_DOWN,12,
 5,PC_NOTE_UP,12, 
 5,PC_NOTE_DOWN,12,
 5,PC_NOTE_CUT,0,
 0,PATCH_END
 };


This is called a command stream. The first byte is the sound type: 0 for wavetable sounds, 1 for noise channel sounds. (From beta3 onwards, this byte is removed; more on that later.) The rest is a sequence of [[Patch Commands Defined|commands]]. Commands are made of 3 bytes: the first one is a time delta, in frames (frames occur at 1/60 of a second), to wait until the command is executed; the second byte is the command type; and the last byte is the command value. So in this example we have a wavetable sound (0); then, when the sound is triggered (time zero), the volume envelope decay speed is set to -12. On each frame afterwards, -12 will automatically be subtracted from the sound's volume. Then, after a wait of 5 frames, the sound's pitch is raised by 12 semitones (one octave). Then we wait again for 5 frames, then lower the sound's pitch by an octave. And so on, until the last command, which must be a PATCH_END command (no value byte for this one). Have a look at the .h files for all the possible commands.
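
As an exercise, here is a hypothetical new patch built only from the commands shown above (beta2-style, with the leading sound type byte): a short chirp that rises while it fades.

 //FX: "rising chirp" (hypothetical example)
 const char patchChirp[] PROGMEM ={ 
 0,                    // sound type: 0 = wavetable
 0,PC_ENV_SPEED,-10,   // at trigger time, fade 10 volume units per frame
 2,PC_NOTE_UP,3,       // every 2 frames, raise the pitch 3 semitones
 2,PC_NOTE_UP,3, 
 2,PC_NOTE_UP,3,
 2,PC_NOTE_UP,3, 
 4,PC_NOTE_CUT,0,      // then silence the channel
 0,PATCH_END           // no value byte for PATCH_END
 };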


From beta3, the patch system has been tweaked to support a PCM channel, which allows playing samples of arbitrary length. The sound type byte has been removed from the command stream and moved into a special array of structs.
 const struct PatchStruct patches[] PROGMEM = {
 {0,NULL,patch00,0,0},
 {0,NULL,patch01,0,0},
 {0,NULL,patch02,0,0},
 {0,NULL,patch03,0,0},
 {1,NULL,patch04,0,0},
 ...
 };

Let's interpret one entry:
 {1,NULL,patch04,0,0}
* First parameter (1) is the sound type; in this case it is to be played on the noise channel (0=wave, 1=noise, 2=PCM)
* Second parameter (NULL) is a pointer to the PCM data, if it were a PCM patch
* Third parameter (patch04) is the patch's command stream pointer
* Fourth parameter (0) is the loop start position for PCM samples
* Fifth parameter (0) is the loop end position for PCM samples
'''Note for PCM:''' A bit like the old wavetable-based sound cards (e.g. the AWE32 and GUS), looping is always on internally. If you do not want looping, you have to set both the loop start and loop end parameters to the sample size, effectively looping on the same byte at the end of the sample.
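
For example, a one-shot (non-looping) PCM patch entry could be declared as below. The sample name, its size, and the minimal command stream are all made up for illustration:

 // Hypothetical 8-bit signed PCM data, e.g. produced by a converter tool
 extern const char mySample[] PROGMEM;
 #define MYSAMPLE_SIZE 1234          // the sample's length in bytes (made up)
 
 const char patchPcm[] PROGMEM ={ 0,PATCH_END }; // minimal command stream
 
 const struct PatchStruct patches[] PROGMEM = {
 ...
 // type 2 = PCM; loop start == loop end == size, so it plays once
 {2,mySample,patchPcm,MYSAMPLE_SIZE,MYSAMPLE_SIZE},
 };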


So, now let's play some sounds! Add this line to your main() function:
  InitMusicPlayer(patches);

And *then* you can trigger fxs anywhere in your code. If you look in patches.inc, patch 19 is the "t-spin" fx from Megatris:

  TriggerFx(19,0xff,true);

In this case:
* 19 is the patch number
* 0xff is the volume
* true is the 'retrig' attribute. It basically means that if another TriggerFx() call is made for the same patch *before* the previous one has finished playing, it will be re-triggered right away on the same channel instead of starting another simultaneous instance of the sound on another channel (determined by the voice-stealing "algorithm").


Creating new patches is really trial and error. We suggest you make an empty project to create new patches; it will compile and flash faster.
All patch commands are described '''[[Patch Commands Defined|here]]'''.
