Published on Linux DevCenter (http://www.linuxdevcenter.com/)


An Introduction to Linux Audio

by John Littler
08/02/2007

Intro to Programming Linux Audio

Linux has come a long way in the last 10 years. Back then, if you looked through the main audio and music applications on other operating systems, you would have struggled to find comparable, fully developed apps on Linux. Nowadays, while no one would say the job is done, you can point to an assortment of high-quality applications that are getting real jobs done.

Having said that, there's still work to do on existing apps, and the whole future is ahead for those who want to try to get the future started now; which is to say that there is no fundamental law stating that there shall only be sequencers, loopers, and the like. Whether you're designing the future or just having a play with sounds, Linux is a nice place to get started, for practical as well as possibly ideological reasons. The practical reasons have to do with the variety of APIs available to let you get into Linux audio programming in a way that suits your ambitions and skills. Once you've acquired some skills, you could join an existing development team for an app you like, or head off into the wilderness, hacking your own trail.

One side issue here is the business model. This is of interest to anyone hoping that what might start as an enjoyable hobby could turn into a job as well. First of all, there really aren't many jobs in this field, and there still aren't many even if you include all the commercial audio software houses. They do exist, though. Academia is a similarly small area, but another possibility. One thing is for sure: project donations will not pay the rent. Consulting work gained as a result of your project work might, though. And if you come up with the next Big Thing, well, that's a different story.

Before we head into some specifics: if you're unfamiliar with this general area, there are a couple of podcast chats, one with Fernando Lopez-Lescano of CCRMA at Stanford and another with Paul Davis of the Ardour project, that cover a wide range of topics under this heading. The podcasts are available at Technical University Berlin and Mstation.org.

Now, let's have a look at what we're trying to do and the main options available for doing it.

The three main things to do are capturing (recording) audio, replaying it, and altering it. All of this comes under the heading of Digital Signal Processing (DSP). We'll be looking at the first two options: capturing and replaying.

What we want to do is talk to the sound card in the computer, tell it what to do, what sort of arrangement the data should have (bearing in mind the card's capabilities), and then store it somewhere.

This could be broadly shown as the following (from the Paul Davis tutorial on ALSA programming):

      open_the_device();
      set_the_parameters_of_the_device();
      while (!done) {
           /* one or both of these */
           receive_audio_data_from_the_device();
           deliver_audio_data_to_the_device();
      }
      close_the_device();

A look at the ALSA sound card matrix is a good starting point for learning about cards, but in what comes below, you frequently don't need to be that far into the machine (depending on what you want to accomplish).

Now, let's have a look at the ways we might get something happening.

OSS

Open Sound System (OSS), from 4Front Technologies, was the only source of sound card drivers for Linux until 1997 or '98. In those days, there was a free driver set and a commercial set that offered a lot more. Now, OSS drivers are available for free for non-commercial use, and as of June 2007 the drivers are also open source.

ALSA (Advanced Linux Sound Architecture) now provides the kernel drivers, and OSS use is deprecated, but there may be circumstances where OSS is useful, including doing work on an existing OSS application. Hannu Savolainen, the man behind OSS who was originally responsible for Linux sound support, writes a blog that provides a fascinating backstory to all of this. He maintains that lots of developers continue to use OSS because they don't like the ALSA API and don't need its added features (and complication). However, there are easier ways into ALSA, which we'll get to in the next section.

If you've ever seen a jokey reference to cat somefile > /dev/dsp, then you've seen an OSS interface; and, by the way, the results of doing just that with, say, a text file can be extremely ugly.

The OSS API consists of POSIX/Unix system calls: open(), close(), read(), write(), ioctl(), select(), poll(), and mmap().
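
To make that list concrete, here's a minimal capture sketch. This is my own illustration rather than code from the OSS docs; it assumes the usual /dev/dsp device, and for brevity it skips the per-ioctl error checks that the playback example below does properly. The shape is the same throughout OSS programming: open the device (read-only this time), configure it with ioctl(), and pull samples in with read().

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/soundcard.h>

int
main (void)
{
  short buf[1024];
  int fmt = AFMT_S16_NE, channels = 1, speed = 48000;
  int fd;

  /* O_RDONLY because this sketch only captures */
  if ((fd = open ("/dev/dsp", O_RDONLY, 0)) == -1)
    {
      perror ("/dev/dsp");
      exit (-1);
    }

  /* same parameter order as in the playback example below:
     sample format, then channel count, then sample rate */
  ioctl (fd, SNDCTL_DSP_SETFMT, &fmt);
  ioctl (fd, SNDCTL_DSP_CHANNELS, &channels);
  ioctl (fd, SNDCTL_DSP_SPEED, &speed);

  /* pull one buffer of samples from the device */
  if (read (fd, buf, sizeof (buf)) == -1)
    {
      perror ("Audio read");
      exit (-1);
    }

  close (fd);
  return 0;
}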

The following is a simple audio playback program from the OSS documentation that plays a continuous 1 kHz sine wave. The well-documented code first gives an example of synthesis, then goes on to set parameters, open and set up the audio device, and finally write the data to the device.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/soundcard.h>

int fd_out;
int sample_rate = 48000;

static void
write_sinewave (void)
{

/*
This routine is a typical example of an application routine that
produces an audio signal using synthesis. It is actually a very basic
"wave table" algorithm: it uses precomputed sine function values for a
complete cycle of a sine wave, which is much faster than calling the
sin() function once for each sample. In other applications this
routine can simply be replaced by whatever the application needs to do.
*/

  static unsigned int phase = 0;        /* Phase of the sine wave */
  unsigned int p;
  int i;
  short buf[1024];              /* 1024 samples/write is a safe choice */

  int outsz = sizeof (buf) / 2;

  static int sinebuf[48] = {

    0, 4276, 8480, 12539, 16383, 19947, 23169, 25995,
    28377, 30272, 31650, 32486, 32767, 32486, 31650, 30272,
    28377, 25995, 23169, 19947, 16383, 12539, 8480, 4276,
    0, -4276, -8480, -12539, -16383, -19947, -23169, -25995,
    -28377, -30272, -31650, -32486, -32767, -32486, -31650, -30272,
    -28377, -25995, -23169, -19947, -16383, -12539, -8480, -4276
  };

  for (i = 0; i < outsz; i++)
    {

/*
The sinebuf[] table was computed for 48000 Hz. We use simple sample
rate compensation. We must prevent the phase variable from growing
too large, because that would cause arithmetic overflows after a
certain time. This kind of error possibility must be identified when
writing audio programs that could be running for hours or even months
or years without interruption. When computing (say) 192000 samples
each second, the 32-bit integer range can overflow very quickly: the
number of samples played at 192 kHz will cause an overflow after
about 6 hours.
*/

      p = (phase * sample_rate) / 48000;

      phase = (phase + 1) % 4800;
      buf[i] = sinebuf[p % 48];
    }

/*
Proper error checking must be done when using write. It's also
important to report the error code returned by the system.
*/

  if (write (fd_out, buf, sizeof (buf)) != sizeof (buf))
    {
      perror ("Audio write");
      exit (-1);
    }
}

/*
open_audio_device() opens the audio device and initializes it for
the required mode.
*/

static int
open_audio_device (char *name, int mode)
{
  int tmp, fd;

  if ((fd = open (name, mode, 0)) == -1)
    {
      perror (name);
      exit (-1);
    }

/*
Set up the device. Note that it's important to set the sample format,
number of channels, and sample rate exactly in this order. Some
devices depend on the order.
*/

/* Set the sample format */

  tmp = AFMT_S16_NE;            /* Native 16 bits */
  if (ioctl (fd, SNDCTL_DSP_SETFMT, &tmp) == -1)
    {
      perror ("SNDCTL_DSP_SETFMT");
      exit (-1);
    }

  if (tmp != AFMT_S16_NE)
    {
      fprintf (stderr,
               "The device doesn't support the 16 bit sample format.\n");
      exit (-1);
    }

/* Set the number of channels */

  tmp = 1;
  if (ioctl (fd, SNDCTL_DSP_CHANNELS, &tmp) == -1)
    {
      perror ("SNDCTL_DSP_CHANNELS");
      exit (-1);
    }

  if (tmp != 1)
    {
      fprintf (stderr, "The device doesn't support mono mode.\n");
      exit (-1);
    }

/* Set the sample rate */

  sample_rate = 48000;
  if (ioctl (fd, SNDCTL_DSP_SPEED, &sample_rate) == -1)
    {
      perror ("SNDCTL_DSP_SPEED");
      exit (-1);
    }

/*
No need for error checking here, because we automatically adjust the
signal based on the actual sample rate. However, most applications
must check the value of sample_rate and compare it to the requested
rate. Small differences between the rates (10% or less) are normal and
applications should usually tolerate them, but larger differences may
cause annoying pitch problems (the Mickey Mouse effect).
*/

  return fd;
}

int
main (int argc, char *argv[])
{

/*
Use /dev/dsp as the default device, because the system administrator
may select the device using the ossctl program or some other method.
*/

  char *name_out = "/dev/dsp";

/*
It's recommended to provide some method for selecting a device other
than the default. We use a command line argument, but in some cases an
environment variable or a configuration file setting may be better.
*/

  if (argc > 1)
    name_out = argv[1];

/*
It's mandatory to use O_WRONLY in programs that do playback only.
Other modes may cause increased resource (memory) usage in the driver.
They may also prevent other applications from using the same device
for recording at the same time.
*/

  fd_out = open_audio_device (name_out, O_WRONLY);

  while (1)
    write_sinewave ();

  exit (0);
}

Copyright (C) 4Front Technologies, 2002-2004. Released under GPLv2/CDDL.

ALSA

ALSA was started to fill a perceived gap: free, open source sound drivers. In 1998, for example, the free OSS drivers would not let you use full duplex on your sound card (being able to monitor a prerecorded track as well as what you're recording). The OSS drivers at the time also would not address the more sophisticated parts of the higher-end cards, like the RME Hammerfall, which were becoming available. The thinking at the time was that OSS was fine for users who just wanted to play CDs or MP3s, and ALSA was what you needed for serious studio work.

The ALSA API could thus be said to be lower-level than OSS, although both APIs offer generic and specific ways to talk to hardware, and the specific way (mmap in OSS, hw in ALSA) is more verbose and requires detailed knowledge of the card--which makes sense if you're trying, for example, to use specific features of a high-end audio card (to see whether a specific card is supported, see the sound card matrix mentioned earlier). Bear in mind also that these APIs let the machine talk to the sound card via the drivers; the generic way of talking to them isn't magic, it's just addressing information already contained in the drivers.
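
As a small illustration of the generic route (my own sketch, not from the tutorial; the device names are the standard ALSA ones), opening the "default" device goes through ALSA's plug layer, which converts sample formats and rates as needed, whereas "hw:0,0" (card 0, device 0) talks to the hardware directly and accepts only what it natively supports:

#include <stdio.h>
#include <alsa/asoundlib.h>

int
main (void)
{
  snd_pcm_t *handle;
  int err;

  /* generic: the plug layer converts formats and rates for us.
     Swap in "hw:0,0" to address card 0, device 0 directly, in
     which case we must supply exactly what the hardware supports. */
  if ((err = snd_pcm_open (&handle, "default",
                           SND_PCM_STREAM_PLAYBACK, 0)) < 0)
    {
      fprintf (stderr, "cannot open device (%s)\n", snd_strerror (err));
      return 1;
    }

  snd_pcm_close (handle);
  return 0;
}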

Below is an example of a minimal, interrupt-driven program from the Paul Davis tutorial on the ALSA API: "This program opens an audio interface for playback, configures it for stereo, 16-bit, 44.1 kHz, interleaved conventional read/write access. It then waits till the interface is ready for playback data, and delivers random data to it at that time. This design allows your program to be easily ported to systems that rely on a callback-driven mechanism, such as JACK, LADSPA, CoreAudio, VST, and many others."


#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <poll.h>
#include <alsa/asoundlib.h>
              
snd_pcm_t *playback_handle;
short buf[4096];
        
int
playback_callback (snd_pcm_sframes_t nframes)
{
    int err;
        
    printf ("playback callback called with %u frames\n", nframes);
       
    /* ... fill buf with data ... */
        
    if ((err = snd_pcm_writei (playback_handle, buf, nframes)) < 0) {
          fprintf (stderr, "write failed (%s)\n", snd_strerror (err));
    }
        
    return err;
}
              
int
main (int argc, char *argv[])
{
       
    snd_pcm_hw_params_t *hw_params;
    snd_pcm_sw_params_t *sw_params;
    snd_pcm_sframes_t frames_to_deliver;
    unsigned int rate = 44100;  /* set_rate_near() takes a pointer to the rate */
    int nfds;
    int err;
    struct pollfd *pfds;
        
    if ((err = snd_pcm_open (&playback_handle, argv[1], SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
          fprintf (stderr, "cannot open audio device %s (%s)\n", 
                   argv[1],
                   snd_strerror (err));
          exit (1);
    }
                   
    if ((err = snd_pcm_hw_params_malloc (&hw_params)) < 0) {
          fprintf (stderr, "cannot allocate hardware parameter structure (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
                                 
    if ((err = snd_pcm_hw_params_any (playback_handle, hw_params)) < 0) {
          fprintf (stderr, "cannot initialize hardware parameter structure (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
        
    if ((err = snd_pcm_hw_params_set_access (playback_handle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED)) < 0) {
          fprintf (stderr, "cannot set access type (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
        
    if ((err = snd_pcm_hw_params_set_format (playback_handle, hw_params, SND_PCM_FORMAT_S16_LE)) < 0) {
          fprintf (stderr, "cannot set sample format (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
        
    if ((err = snd_pcm_hw_params_set_rate_near (playback_handle, hw_params, &rate, 0)) < 0) {
          fprintf (stderr, "cannot set sample rate (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
        
    if ((err = snd_pcm_hw_params_set_channels (playback_handle, hw_params, 2)) < 0) {
          fprintf (stderr, "cannot set channel count (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
        
    if ((err = snd_pcm_hw_params (playback_handle, hw_params)) < 0) {
          fprintf (stderr, "cannot set parameters (%s)\n",
                   snd_strerror (err));
    exit (1);
    }
        
    snd_pcm_hw_params_free (hw_params);
        
    /* tell ALSA to wake us up whenever 4096 or more frames
       of playback data can be delivered. Also, tell
       ALSA that we'll start the device ourselves.
    */
        
    if ((err = snd_pcm_sw_params_malloc (&sw_params)) < 0) {
          fprintf (stderr, "cannot allocate software parameters structure (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
    if ((err = snd_pcm_sw_params_current (playback_handle, sw_params)) < 0) {
          fprintf (stderr, "cannot initialize software parameters structure (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
    if ((err = snd_pcm_sw_params_set_avail_min (playback_handle, sw_params, 4096)) < 0) {
          fprintf (stderr, "cannot set minimum available count (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
    if ((err = snd_pcm_sw_params_set_start_threshold (playback_handle, sw_params, 0U)) < 0) {
          fprintf (stderr, "cannot set start mode (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
    if ((err = snd_pcm_sw_params (playback_handle, sw_params)) < 0) {
          fprintf (stderr, "cannot set software parameters (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
        
    /* the interface will interrupt the kernel every 4096 frames,
       and ALSA will wake up this program very soon after that.
    */
        
    if ((err = snd_pcm_prepare (playback_handle)) < 0) {
          fprintf (stderr, "cannot prepare audio interface for use (%s)\n",
                   snd_strerror (err));
          exit (1);
    }
       
    while (1) {
        
          /* wait till the interface is ready for data, or 1 
             second has elapsed.
          */
        
          if ((err = snd_pcm_wait (playback_handle, 1000)) < 0) {
                  fprintf (stderr, "poll failed (%s)\n", strerror (errno));
                  break;
          }                  
        
          /* find out how much space is available for playback 
             data 
          */
        
          if ((frames_to_deliver = snd_pcm_avail_update (playback_handle)) < 0) {
                  if (frames_to_deliver == -EPIPE) {
                          fprintf (stderr, "an xrun occured\n");
                          break;
                  } else {
                          fprintf (stderr, "unknown ALSA avail update return value (%d)\n", 
                                   frames_to_deliver);
                          break;
                  }
          }
        
          frames_to_deliver = frames_to_deliver > 4096 ? 4096 : frames_to_deliver;
      
          /* deliver the data */
        
          if (playback_callback (frames_to_deliver) != frames_to_deliver) {
                  fprintf (stderr, "playback callback failed\n");
                  break;
          }
    }
        
    snd_pcm_close (playback_handle);
    exit (0);
}

In the example after this one, Paul goes on to talk about full duplex, combining capture and playback to achieve what we talked about a little earlier. Achieving this by simply combining ordinary capture and playback is, in his opinion, deeply flawed; interrupts are the way to go. However, doing it this way is complex, and he recommends JACK instead.

JACK

If you listened to the podcast chat with Paul Davis, you'll have caught me asking something like, "What do you think is the best way into Linux audio programming--ALSA?" and him saying that, actually, JACK would usually be the best way to go, because it offers a higher level of abstraction, so dealing with the sound card is easier and requires less code.

The JACK Audio Connection Kit was put together by Paul Davis and others from the Linux Audio Dev mailing list. The primary ideas are to allow audio apps on POSIX-compliant operating systems (e.g., Linux, BSD, Mac OS X) to exchange data while running, and to provide a higher level of abstraction so that developers can concentrate on the core functionality of their programs. An added advantage is that porting from Linux to Mac OS X is trivial if you're using X11 or writing a command line app.

Here is a simple client that shows the basic features of JACK. The typical steps, as the example below walks through, are to connect to the JACK server as a client, register input and output ports, give the server a process callback to invoke whenever there is work to do, activate the client, and finally connect the ports to physical inputs and outputs.

The main interface is jack.h, which is the only JACK header included in the following example (from this LXR project page on SourceForge). Needless to say, a thorough study of this, in conjunction with existing code (code reuse!), is a good place to start.

/** @file simple_client.c
  *
  * @brief This is very simple client that demonstrates the basic
  * features of JACK as they would be used by many applications.
  */
   
  #include <stdio.h>
  #include <errno.h>
  #include <unistd.h>
  #include <stdlib.h>
  #include <string.h>
  #include <jack/jack.h>
  
  jack_port_t *input_port;
  jack_port_t *output_port;
  
  /**
   * The process callback for this JACK application.
   * It is called by JACK at the appropriate times.
   */
  int
  process (jack_nframes_t nframes, void *arg)
  {
       jack_default_audio_sample_t *out = (jack_default_audio_sample_t *) jack_port_get_buffer (output_port, nframes);
       jack_default_audio_sample_t *in = (jack_default_audio_sample_t *) jack_port_get_buffer (input_port, nframes);
  
       memcpy (out, in, sizeof (jack_default_audio_sample_t) * nframes);
          
       return 0;      
  }
  
  /**
   * This is the shutdown callback for this JACK application.
   * It is called by JACK if the server ever shuts down or
   * decides to disconnect the client.
   */
  void
  jack_shutdown (void *arg)
  {
  
       exit (1);
  }
  
  int
  main (int argc, char *argv[])
  {
       jack_client_t *client;
       const char **ports;
  
       if (argc < 2) {
               fprintf (stderr, "usage: jack_simple_client \n");
               return 1;
       }
  
       /* try to become a client of the JACK server */
  
       if ((client = jack_client_new (argv[1])) == 0) {
               fprintf (stderr, "jack server not running?\n");
               return 1;
       }
  
       /* tell the JACK server to call `process()' whenever
          there is work to be done.
       */
  
       jack_set_process_callback (client, process, 0);
  
       /* tell the JACK server to call `jack_shutdown()' if
          it ever shuts down, either entirely, or if it
          just decides to stop calling us.
       */
  
       jack_on_shutdown (client, jack_shutdown, 0);
  
       /* display the current sample rate. 
       */
  
       printf ("engine sample rate: %" PRIu32 "\n",
               jack_get_sample_rate (client));
  
       /* create two ports */
  
       input_port = jack_port_register (client, "input", JACK_DEFAULT_AUDIO_TYPE, JackPortIsInput, 0);
       output_port = jack_port_register (client, "output", JACK_DEFAULT_AUDIO_TYPE, JackPortIsOutput, 0);
  
       /* tell the JACK server that we are ready to roll */
  
       if (jack_activate (client)) {
               fprintf (stderr, "cannot activate client");
               return 1;
       }
  
       /* connect the ports. Note: you can't do this before
          the client is activated, because we can't allow
          connections to be made to clients that aren't
          running.
       */
  
       if ((ports = jack_get_ports (client, NULL, NULL, JackPortIsPhysical|JackPortIsOutput)) == NULL) {
              fprintf(stderr, "Cannot find any physical capture ports\n");
              exit(1);
       }
 
       if (jack_connect (client, ports[0], jack_port_name (input_port))) {
              fprintf (stderr, "cannot connect input ports\n");
       }
 
       free (ports);
         
       if ((ports = jack_get_ports (client, NULL, NULL, JackPortIsPhysical|JackPortIsInput)) == NULL) {
              fprintf(stderr, "Cannot find any physical playback ports\n");
              exit(1);
       }
 
       if (jack_connect (client, jack_port_name (output_port), ports[0])) {
              fprintf (stderr, "cannot connect output ports\n");
       }
 
       free (ports);
 
       /* Since this is just a toy, run for a few seconds, 
       then finish */
 
       sleep (10);
       jack_client_close (client);
       exit (0);
 }
 

Python: PyAudio and PyGst with Gstreamer

Even higher level than JACK is the idea of using the Python bindings for Gstreamer, or the PyAudio bindings for PortAudio, a cross-platform audio library.
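
Since PyAudio is a thin wrapper around PortAudio's C API, it may help to see the shape of the underlying library first. The following is my own sketch of PortAudio's blocking-mode interface, not code from the PyAudio docs; a real program should check every returned PaError rather than bailing out silently.

#include <string.h>
#include "portaudio.h"

#define FRAMES_PER_BUFFER 256

int
main (void)
{
  PaStream *stream;
  float buf[FRAMES_PER_BUFFER];          /* one buffer of mono silence */
  int i;

  memset (buf, 0, sizeof (buf));

  if (Pa_Initialize () != paNoError)
    return 1;

  /* 0 input channels, 1 output channel, 32-bit float samples at
     44100 Hz; a NULL callback selects the blocking read/write API */
  if (Pa_OpenDefaultStream (&stream, 0, 1, paFloat32, 44100,
                            FRAMES_PER_BUFFER, NULL, NULL) != paNoError)
    return 1;

  Pa_StartStream (stream);

  for (i = 0; i < 100; i++)              /* about 0.6 seconds of silence */
    Pa_WriteStream (stream, buf, FRAMES_PER_BUFFER);

  Pa_StopStream (stream);
  Pa_CloseStream (stream);
  Pa_Terminate ();
  return 0;
}

The NULL callback argument is what selects the blocking read/write API; passing a callback function instead gives you the interrupt-style design we saw with ALSA and JACK.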

Here is an example from the PyAudio docs. We import the audio library, the wave module (which lets us write the WAV file), and sys. Then we do what we've done in every example: set the format of the audio and open the audio device. We open a stream, grab the samples, and then save them.

""" Record a few seconds of audio and save to a WAVE file. """
import pyaudio
import wave
import sys

chunk = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
RECORD_SECONDS = 5
WAVE_OUTPUT_FILENAME = "output.wav"

p = pyaudio.PyAudio()

stream = p.open(format = FORMAT,
                channels = CHANNELS,
                rate = RATE,
                input = True,
                frames_per_buffer = chunk)

print "* recording"
all = []
for i in range(0, RATE / chunk * RECORD_SECONDS):
    data = stream.read(chunk)
    all.append(data)
print "* done recording"

stream.close()
p.terminate()

# write data to WAVE file
data = ''.join(all)
wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
wf.setnchannels(CHANNELS)
wf.setsampwidth(p.get_sample_size(FORMAT))
wf.setframerate(RATE)
wf.writeframes(data)
wf.close()

Note how minimal this code is. There is no GUI, of course, so that helps as well.

A good introduction to Gstreamer is Jono Bacon's article that describes the workings of Gstreamer and the PyGst bindings. Gstreamer offers quite a lot of creative possibilities and, unlike ESD, offers the possibility of having more than one audio stream playing at one time, amongst many other features.

Below is an example of code from the PyGst docs of a simple audio player. We grab the sys, windowing, and pygst stuff, set up the window and the player, and away we go. Note that some implementations, particularly for small devices, are missing crucial bits; for example, I was interested in doing a command line version for a Nokia 770 to play loops and wasn't able to.

#!/usr/bin/env python

import sys, os, os.path
import pygtk, gtk, gobject
import pygst
pygst.require("0.10")
import gst

class GTK_Main:
        
        def __init__(self):
             window = gtk.Window(gtk.WINDOW_TOPLEVEL)
             window.set_title("Audio-Player")
             window.set_default_size(400, 300)
             window.connect("destroy", gtk.main_quit, "WM destroy")
             vbox = gtk.VBox()
             window.add(vbox)
             self.entry = gtk.Entry()
             vbox.pack_start(self.entry, False, True)
             self.button = gtk.Button("Start")
             self.button.connect("clicked", self.start_stop)
             vbox.add(self.button)
             window.show_all()
                
             self.player = gst.element_factory_make("playbin", "player")
             fakesink = gst.element_factory_make('fakesink', "my-fakesink")
             self.player.set_property("video-sink", fakesink)
             bus = self.player.get_bus()
             bus.add_signal_watch()
             bus.connect('message', self.on_message)
                
        def start_stop(self, w):
             if self.button.get_label() == "Start":
                   filepath = self.entry.get_text()
                   if os.path.exists(filepath):
                         self.button.set_label("Stop")
                         self.player.set_property('uri', "file://" + filepath)
                                self.player.set_state(gst.STATE_PLAYING)
             else:
                   self.player.set_state(gst.STATE_NULL)
                   self.button.set_label("Start")
                                                
        def on_message(self, bus, message):
             t = message.type
             if t == gst.MESSAGE_EOS:
                   self.player.set_state(gst.STATE_NULL)
                   self.button.set_label("Start")
             elif t == gst.MESSAGE_ERROR:
                   self.player.set_state(gst.STATE_NULL)
                   self.button.set_label("Start")

gtk.gdk.threads_init()
GTK_Main()
gtk.main()

Here we actually have a GUI, but you can see the amount of code needed is still relatively small compared to what would be required with some lower-level language.

Conclusion

Which way you might go to get into Linux Audio depends on what you'd like to achieve. If you're planning to just have some fun while learning something, then wherever you start will be OK. If the fun leads to ideas, then you'll soon figure out whether you need to be deeper into the machine or whether you just need to be sketching on the surface. Just bear in mind that the higher the level you're on, the more you're bound by what other people think about how things should be. This is mostly good in terms of ease and implementation time, but could be bad in terms of constricted horizons.

A good mailing list to join if you get serious is the Linux Audio Dev list, which describes itself like this: "The Linux Audio Developers (LAD) list is dedicated to sound architecture and application development for the Linux Operating System. With its proven stability and scalability, it is a perfect foundation for the handling and processing of large amounts of audio data. Our goal is to encourage widespread code re-use and cooperation, and to provide a common forum for all audio related software projects and an exchange point for a number of other special-interest mailing lists."

The archives of this list can also be useful for answering questions, as can using the likes of krugle.com or Google's Codesearch.

At this point, you might say to yourself, well, actually, what I really want to do is make noises or non-standard kinds of music. In that case you could have a look at Csound or Pure Data. You could also look into the synthesis and DSP side of programming, which is something we haven't looked at in this article.

John Littler is chief gopher for Mstation.org.

