
Primary and Secondary Sound Buffers

The DirectSound object that represents the sound card itself has a single primary buffer. The primary buffer represents the mixing hardware (or software) on the card and processes all the time, like a little conveyor belt. Manual primary buffer mixing is very advanced, and luckily you don't have to do it. DirectSound takes care of the primary buffer for you as long as you don't set the cooperation level to the highest priority. In addition, you don't need to create a primary buffer because DirectSound creates one for you, as long as you set the cooperation level to one of the lower levels, such as DSSCL_NORMAL.

The only drawback is that the primary buffer will be set to 22 KHz, 8-bit stereo. If you want 16-bit sound or a higher playback rate, you'll have to set the cooperation level to at least DSSCL_PRIORITY and then set a new data format for the primary buffer. But for now, just use the default because it makes life much easier.
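
If you do decide to go the DSSCL_PRIORITY route later, the sequence looks roughly like the following sketch. This isn't code from this chapter, just an outline of the idea: lpds is assumed to be your already-created DirectSound object, and main_window_handle is assumed to be your application's window handle.

// bump the cooperation level up so the primary buffer format can change
if (FAILED(lpds->SetCooperativeLevel(main_window_handle, DSSCL_PRIORITY)))
   { /* error */ }

// describe the primary buffer -- no size and no format in the description
DSBUFFERDESC dsbd_prim;
memset(&dsbd_prim, 0, sizeof(DSBUFFERDESC));
dsbd_prim.dwSize  = sizeof(DSBUFFERDESC);
dsbd_prim.dwFlags = DSBCAPS_PRIMARYBUFFER;

LPDIRECTSOUNDBUFFER lpdsprimary = NULL;
if (FAILED(lpds->CreateSoundBuffer(&dsbd_prim, &lpdsprimary, NULL)))
   { /* error */ }

// now ask for 16-bit, 44.1 KHz stereo output
WAVEFORMATEX wfx;
memset(&wfx, 0, sizeof(WAVEFORMATEX));
wfx.wFormatTag      = WAVE_FORMAT_PCM;
wfx.nChannels       = 2;
wfx.nSamplesPerSec  = 44100;
wfx.wBitsPerSample  = 16;
wfx.nBlockAlign     = (wfx.nChannels * wfx.wBitsPerSample) / 8;
wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

if (FAILED(lpdsprimary->SetFormat(&wfx)))
   { /* error */ }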

Working with Secondary Buffers

Secondary buffers represent the actual sounds that you want to play. They can be any size you want, as long as you have the memory to hold them. However, the SRAM on the sound card can only hold so much sound data, so be careful how many sounds you request to be stored on the card itself. On the other hand, sounds stored in sound-card memory take much less processing power to play, so keep that in mind.
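
If you want to play it safe, you can ask DirectSound how much hardware memory is actually free before requesting on-card storage. Here's a minimal sketch; lpds is assumed to be your DirectSound object, and size_of_my_sound is just a placeholder for the buffer size you're about to request.

// query the card's capabilities
DSCAPS dscaps;
memset(&dscaps, 0, sizeof(DSCAPS));
dscaps.dwSize = sizeof(DSCAPS); // standard DirectX structure size field

if (FAILED(lpds->GetCaps(&dscaps)))
   { /* error */ }

// dwFreeHwMemBytes holds the amount of free sound memory on the card
if (dscaps.dwFreeHwMemBytes < size_of_my_sound)
   { /* fall back to system memory (DSBCAPS_LOCSOFTWARE) */ }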

Now there are two kinds of secondary buffers—static and streaming. Static sound buffers are sounds that you plan to keep around and play over and over. These are good candidates for SRAM or system memory. Streaming sound buffers are a little different. Imagine that you want to play an entire CD with DirectSound. I don't think you have enough system RAM or SRAM to store all 650MB of audio data in memory, so you'd have to read the data in chunks and stream it out to a DirectSound buffer. This is what streaming buffers are for. You continually feed them with new sound data as they are playing. Sound tricky? Take a look at Figure 10.10.

Figure 10.10. Streaming audio data.


In general, all secondary sound buffers, whether static or streaming, can be written to. However, because it's possible that a sound will be playing while you're writing to it, DirectSound uses a scheme to take this into consideration: circular buffering. This means that each sound is stored in a circular data array that is continually read at one point by the play cursor and written at another point (slightly ahead of the play cursor) by the write cursor. Of course, if you don't need to write to your sound buffers as they are playing, you don't have to worry about this, but you will when you're streaming audio.

To facilitate this complex, buffered real-time writing capability, the data access functions for sound buffers might return a memory space that's broken up into two pieces because the data block you're trying to write exists at the end of the buffer and overflows into the beginning of the buffer. The point is, you need to know this fact if you're going to stream audio. However, in most games all this is moot, because as long as you keep all the sound effects to a few seconds each and the musical tracks are all loaded on demand, you can usually fit everything into a few megabytes of RAM. Using 2–4MB of storage for sound in a 32MB+ machine isn't too much of a problem.
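
If you do end up streaming, the basic loop is to check how far the play cursor has advanced, lock the region just past the write cursor, and pour in fresh data. Here's a rough sketch of one refill pass; it assumes lpdsbuffer is a streaming secondary buffer you've already created, CHUNK_SIZE is whatever refill size you've chosen, and Fill_Next_Chunk() is a hypothetical routine that copies in the next block of audio data.

DWORD play_cursor, write_cursor;

// see where the buffer is currently playing -- in a real loop you'd
// use this to decide whether it's time to refill yet
if (FAILED(lpdsbuffer->GetCurrentPosition(&play_cursor, &write_cursor)))
   { /* error */ }

UCHAR *ptr1, *ptr2;  // the (possibly) two pieces of the locked region
DWORD len1, len2;

// lock CHUNK_SIZE bytes starting at the write cursor
if (SUCCEEDED(lpdsbuffer->Lock(0, CHUNK_SIZE,
              (void **)&ptr1, &len1,
              (void **)&ptr2, &len2,
              DSBLOCK_FROMWRITECURSOR)))
   {
   Fill_Next_Chunk(ptr1, len1);           // first piece
   if (ptr2) Fill_Next_Chunk(ptr2, len2); // wraparound piece, if any

   lpdsbuffer->Unlock(ptr1, len1, ptr2, len2);
   } // end if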

Creating Secondary Sound Buffers

To create a secondary sound buffer, you must make a call to CreateSoundBuffer() with the proper parameters. If successful, the function creates a sound buffer, initializes it, and returns an interface pointer to it of this type:

LPDIRECTSOUNDBUFFER lpdsbuffer; // a directsound buffer

However, before you make the call to CreateSoundBuffer(), you must set up a DirectSoundBuffer description structure, which is similar to a DirectDrawSurface description. The description structure is of the type DSBUFFERDESC and is shown here:

typedef struct
{
DWORD  dwSize;        // size of this structure
DWORD  dwFlags;       // control flags
DWORD  dwBufferBytes; // size of the sound buffer in bytes
DWORD  dwReserved;    // unused
LPWAVEFORMATEX  lpwfxFormat; // the wave format
}  DSBUFFERDESC, *LPDSBUFFERDESC;

The dwSize field is the standard DirectX structure size, dwBufferBytes is how big you want the buffer to be in bytes, and dwReserved is unused. The only fields of real interest are dwFlags and lpwfxFormat. dwFlags contains the creation flags of the sound buffer. Take a look at Table 10.3, which contains a partial list of the more basic flag settings.

Table 10.3. DirectSound Secondary Buffer Creation Flags

DSBCAPS_CTRLALL - The buffer must have all control capabilities.
DSBCAPS_CTRLDEFAULT - The buffer should have the default control options. This is the same as specifying the DSBCAPS_CTRLPAN, DSBCAPS_CTRLVOLUME, and DSBCAPS_CTRLFREQUENCY flags.
DSBCAPS_CTRLFREQUENCY - The buffer must have frequency control capability.
DSBCAPS_CTRLPAN - The buffer must have pan control capability.
DSBCAPS_CTRLVOLUME - The buffer must have volume control capability.
DSBCAPS_STATIC - Indicates that the buffer will be used for static sound data. Most of the time you'll create these buffers in hardware memory if possible.
DSBCAPS_LOCHARDWARE - Use hardware mixing and hardware memory for this sound buffer, if available.
DSBCAPS_LOCSOFTWARE - Forces the buffer to be stored in system memory and use software mixing, even if DSBCAPS_STATIC is specified and hardware resources are available.
DSBCAPS_PRIMARYBUFFER - Indicates that the buffer is a primary sound buffer. Only set this if you want to create a primary buffer and you're a sound god.

In most cases you'll set the flags to DSBCAPS_CTRLDEFAULT | DSBCAPS_STATIC | DSBCAPS_LOCSOFTWARE for default controls, static sound, and system memory, respectively. If you want to use hardware memory, use DSBCAPS_LOCHARDWARE instead of DSBCAPS_LOCSOFTWARE.

NOTE

The more capabilities you give a sound, the more stops (software filters) it has to go through before being heard, which means more processing time. So if you don't need volume, pan, and frequency-shifting ability, forget DSBCAPS_CTRLDEFAULT and request only the capabilities that you absolutely need.
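
For example, if all you'll ever do to a sound is change its volume, a leaner flag setting (using the same dsbd description structure you'll see in the example that follows) might look like this:

// volume control only -- skip the pan and frequency filters
dsbd.dwFlags = DSBCAPS_CTRLVOLUME | DSBCAPS_STATIC | DSBCAPS_LOCSOFTWARE;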


Now let's move on to the WAVEFORMATEX structure. It contains a description of the sound that you want the buffer to represent (it's also a standard Win32 structure). Parameters like playback rate, number of channels (1 for mono, 2 for stereo), bits per sample, and so forth are recorded in this structure. Here it is for your review:

typedef struct
{
WORD  wFormatTag;      // always WAVE_FORMAT_PCM
WORD  nChannels;       // number of audio channels 1 or 2
DWORD nSamplesPerSec;  // samples per second
DWORD nAvgBytesPerSec; // average data rate
WORD  nBlockAlign;     // nchannels * bytes per sample
WORD  wBitsPerSample;  // bits per sample
WORD  cbSize;          // advanced, set to 0
}  WAVEFORMATEX;
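
The only fields that take any thought are nBlockAlign and nAvgBytesPerSec, and both fall out of the others. As a quick sketch, a 22 KHz, 16-bit stereo format would be filled out like this:

WAVEFORMATEX wf;
memset(&wf, 0, sizeof(WAVEFORMATEX));

wf.wFormatTag      = WAVE_FORMAT_PCM;
wf.nChannels       = 2;     // stereo
wf.nSamplesPerSec  = 22050; // 22 KHz
wf.wBitsPerSample  = 16;    // 16-bit samples

// bytes per sample block: channels * (bits per sample / 8) = 2 * 2 = 4
wf.nBlockAlign     = wf.nChannels * (wf.wBitsPerSample / 8);

// data rate: 22050 blocks/sec * 4 bytes/block = 88200 bytes/sec
wf.nAvgBytesPerSec = wf.nSamplesPerSec * wf.nBlockAlign;

wf.cbSize          = 0;     // always 0 for plain PCM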

Simple enough. Basically, WAVEFORMATEX contains the description of the sound. In addition, you need to set up one of these as part of DSBUFFERDESC. Let's see how to do that, beginning with the prototype of the CreateSoundBuffer() function:

HRESULT CreateSoundBuffer(
 LPCDSBUFFERDESC lpcDSBuffDesc,   // ptr to DSBUFFERDESC
 LPLPDIRECTSOUNDBUFFER lplpDSBuff,// ptr to sound buffer
 IUnknown FAR *pUnkOuter);        // always NULL

And here's an example of creating a secondary DirectSound buffer at 11KHz mono 8-bit with enough storage for two seconds:

// ptr to the directsound buffer we're about to create
LPDIRECTSOUNDBUFFER lpdsbuffer;

DSBUFFERDESC dsbd;   // directsound buffer description
WAVEFORMATEX  pcmwf; // holds the format description

// set up the format data structure
memset(&pcmwf, 0, sizeof(WAVEFORMATEX));
pcmwf.wFormatTag     = WAVE_FORMAT_PCM; // always need this
pcmwf.nChannels      = 1; // MONO, so channels = 1
pcmwf.nSamplesPerSec = 11025; // sample rate 11khz
pcmwf.nBlockAlign    = 1; // total data per sample block: here
                          // 1 channel * 1 byte per sample = 1 byte;
                          // if it were stereo it would be 2, and
                          // 16-bit stereo would be 4

pcmwf.nAvgBytesPerSec =
                 pcmwf.nSamplesPerSec * pcmwf.nBlockAlign;

pcmwf.wBitsPerSample = 8; // 8 bits per sample
pcmwf.cbSize         = 0; // always 0
// set up the directsound buffer description
memset(&dsbd,0,sizeof(DSBUFFERDESC));
dsbd.dwSize = sizeof(DSBUFFERDESC);
dsbd.dwFlags= DSBCAPS_CTRLDEFAULT | DSBCAPS_STATIC |
              DSBCAPS_LOCSOFTWARE ;

dsbd.dwBufferBytes    = 22050; // enough for 2 seconds at
                             // a sample rate of 11025

dsbd.lpwfxFormat    = &pcmwf; // the WAVEFORMATEX struct

// create the buffer
if (FAILED(lpds->CreateSoundBuffer(&dsbd,&lpdsbuffer,NULL)))
   { /* error */ }

If the function call is successful, a new sound buffer is created and returned in lpdsbuffer, ready to be played. The only problem is that there isn't anything in it! You must fill the sound buffer with data yourself. You can do this by reading in sound data stored in a file format such as .VOC, .WAV, or .AU, parsing it, and filling up the buffer, or you can generate the data algorithmically and write it into the buffer yourself as a test. Let's see how to write data into the buffer, and later I'll show you how to read sound files from disk.

Writing Data to Secondary Buffers

As I said, secondary sound buffers are circular in nature, and hence are a little more complex to write to than a standard linear array of data. For example, with DirectDraw surfaces, you just locked the surface memory and wrote to it. (This is only possible because there is a driver living down there that turns nonlinear memory to linear.) DirectSound works in a similar fashion: You lock it, but instead of getting one pointer back, you get two! Therefore, you must write some of your data to the first pointer and the rest to the second. Take a look at the prototype for Lock() to understand what I mean:

HRESULT Lock(
  DWORD dwWriteCursor,    // position of write cursor
  DWORD dwWriteBytes,     // size you want to lock
  LPVOID lplpvAudioPtr1,  // ret ptr to first chunk
  LPDWORD lpdwAudioBytes1,// num bytes in first chunk
  LPVOID lplpvAudioPtr2,  // ret ptr to second chunk
  LPDWORD lpdwAudioBytes2,// num of bytes in second chunk
  DWORD dwFlags);         // how to lock it

If you set dwFlags to DSBLOCK_FROMWRITECURSOR, the buffer will be locked from the current write cursor of the buffer. If you set dwFlags to DSBLOCK_ENTIREBUFFER, the entire buffer will be locked. This is the way to go. Keep it simple.

For example, say you create a sound buffer that has enough storage for 1,000 bytes. When you lock the buffer for writing, you'll get two pointers back along with the length of each memory segment to write to. The first chunk might be 900 bytes long, and the second might be 100 bytes long. The point is that you have to write your first 900 bytes to the first memory region and the second 100 bytes to the second memory region. Take a look at Figure 10.11 to clarify this.

And here's an example of locking the 1,000-byte sound buffer:


UCHAR *audio_ptr_1,  // used to retrieve buffer memory
      *audio_ptr_2;

DWORD audio_length_1, // length of each buffer section
      audio_length_2;

// lock the buffer
if (FAILED(lpdsbuffer->Lock(0,1000,
    (void **)&audio_ptr_1, &audio_length_1,
    (void **)&audio_ptr_2, &audio_length_2,
     DSBLOCK_ENTIREBUFFER )))
   { /* error */ }

Once you've locked the buffer, you're free to write into the memory. The data can be from a file or can be generated algorithmically. When you're done with the sound buffer, you must unlock it with Unlock(). Unlock() takes both pointers and both lengths, like this:

if (FAILED(lpdsbuffer->Unlock(audio_ptr_1,audio_length_1,
                           audio_ptr_2,audio_length_2)))
   { /* problem unlocking */}
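
Putting the lock/unlock pair to work, here's a quick sketch that fills the 22,050-byte, 8-bit buffer created earlier with random noise, just so there's something audible in it. It assumes lpdsbuffer is that buffer; rand() needs <stdlib.h>, and 8-bit PCM samples are unsigned, centered on 128.

UCHAR *audio_ptr_1, *audio_ptr_2;  // the two pieces of the locked region
DWORD audio_length_1, audio_length_2;

// lock the entire buffer
if (SUCCEEDED(lpdsbuffer->Lock(0, 22050,
              (void **)&audio_ptr_1, &audio_length_1,
              (void **)&audio_ptr_2, &audio_length_2,
              DSBLOCK_ENTIREBUFFER)))
   {
   // fill both regions with 8-bit noise
   for (DWORD i = 0; i < audio_length_1; i++)
       audio_ptr_1[i] = rand() % 256;

   for (DWORD i = 0; i < audio_length_2; i++)
       audio_ptr_2[i] = rand() % 256;

   // always unlock when you're done writing
   lpdsbuffer->Unlock(audio_ptr_1, audio_length_1,
                      audio_ptr_2, audio_length_2);
   } // end if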

And as usual, when you're done with the sound buffer, you must destroy it with Release(), like this:

lpdsbuffer->Release();

However, don't destroy a sound buffer until you're completely done with it; otherwise, you'll have to load the sound all over again.

Now let's see how to play sounds with DirectSound.
