Digital Audio

Music synthesis was performed on a multi-DSP board built from Analog Devices SHARC DSPs; with six SHARC DSPs, synthesis of 128 or more voices was feasible. At the time, a multi-DSP architecture was a compelling choice for music synthesis at 128 voices or more.

The DSPs shared common external memory (SRAM and, as was typical then, DRAM).

3 SHARC version

4 SHARC version

6 SHARC version

The boards listed above were used to perform digital audio synthesis.
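The dynamic load distribution referred to in the references below can be illustrated with a small sketch: each new voice is assigned to the least-loaded DSP. The class and function names here (`Dsp`, `assign_voice`) and the load model (load = active voice count) are hypothetical, purely for illustration; they are not from any real API or from the boards described above.

```python
# Hypothetical sketch of dynamic voice-to-DSP load distribution,
# in the spirit of a 6-SHARC board synthesizing 128 voices.

class Dsp:
    def __init__(self, dsp_id):
        self.dsp_id = dsp_id
        self.voices = []          # voice ids currently rendered on this DSP

    def load(self):
        return len(self.voices)   # toy model: load = active voice count

def assign_voice(dsps, voice_id):
    """Place a new voice on the least-loaded DSP (greedy balancing)."""
    target = min(dsps, key=lambda d: d.load())
    target.voices.append(voice_id)
    return target.dsp_id

dsps = [Dsp(i) for i in range(6)]          # six SHARC DSPs
for voice in range(128):                   # 128-voice synthesis
    assign_voice(dsps, voice)

# Greedy balancing keeps the per-DSP voice counts within one of each other.
print([d.load() for d in dsps])
```

With equal per-voice cost this greedy policy degenerates to round-robin; its advantage appears when voices have unequal rendering costs, since new voices always land on the currently lightest DSP.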

Vercoe, B.L., Haidar, M., Kitamura, H., Jayakumar, S.: Multiprocessor Csound: Audio-Pro with Multiple DSPs and Dynamic Load Distribution. In: Conference on Parallel and Distributed Processing Techniques and Applications, Las Vegas (2003).

Vercoe, B.: Audio-Pro with Multiple DSPs and Dynamic Load Distribution. BT Technology Journal 22(4), 180–186 (2004).

MPEG-4 SAOL: Compiler and Sound Synthesis Machine Implementation


Jayakumar Singaram, Rudy Lauwereins and Geert Deconinck



Music composition has always been highly personal to individual composers, and for a long time there was no universal tool for representing a composition. From the early eighties, the Musical Instrument Digital Interface (MIDI) became a popular format in the keyboard industry, and the same format came to be used to represent music compositions. Although a MIDI byte stream is well defined, the rendering of a given MIDI byte stream was not well defined until the emergence of SoundFont technology. The tools available to music composers did not take advantage of modern computing infrastructure. In fact, popular audio formats such as MP3 and AAC (Advanced Audio Coding) are not composition tools; on the contrary, they enabled piracy to such an extent that composers grew nervous. For more than three decades, music synthesis has been a research topic in many music schools. In particular, the MIT Media Lab is a prime place where the concept of a language for music composition was established. Several versions of such languages were developed at the MIT Media Lab before the Structured Audio Orchestra Language (SAOL) became part of the MPEG-4 standard. Although MPEG-4 SAOL provides a language structure for music composition, it does not provide a well-defined standard for how to perform rendering. This motivates, and leaves scope for, new tools that perform efficient rendering. In this direction there are contributions from various research groups, but these contributions do not provide a clean methodology for separating orchestra compilation from rendering. This separation is critical for real-time performance during music synthesis. In this research effort, we have designed and implemented a compiler for SAOL. In addition, we have designed and implemented a Sound Synthesis Machine (SSM). With the introduction of these new tools for SAOL-based music composition and rendering, we expect the music-composer community to use advanced silicon technologies such as digital signal processors or reconfigurable processors.
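The compile/render separation argued for above can be sketched in miniature: an orchestra description is compiled once into a flat instruction list, and the real-time renderer only interprets that list per output sample, with no parsing on the audio path. Everything here (the two-opcode mini-language, `compile_orchestra`, `render`) is invented for illustration and is vastly simpler than SAOL; it shows the separation of concerns, not the actual compiler or SSM design.

```python
import math

# Toy illustration of separating "orchestra compilation" (one-time, offline)
# from "rendering" (per-sample, real-time). The opcode set is hypothetical.

def compile_orchestra(src):
    """One-time pass: turn a textual orchestra into executable opcodes."""
    program = []
    for line in src.strip().splitlines():
        op, *args = line.split()
        if op == "osc":                       # "osc <freq_hz> <amplitude>"
            program.append(("osc", float(args[0]), float(args[1])))
        elif op == "gain":                    # "gain <factor>"
            program.append(("gain", float(args[0])))
        else:
            raise ValueError(f"unknown opcode: {op}")
    return program

def render(program, n_samples, sample_rate=44100):
    """Real-time pass: interpret the compiled program; no parsing here."""
    out = []
    for n in range(n_samples):
        sample = 0.0
        for instr in program:
            if instr[0] == "osc":
                _, freq, amp = instr
                sample += amp * math.sin(2 * math.pi * freq * n / sample_rate)
            elif instr[0] == "gain":
                sample *= instr[1]
        out.append(sample)
    return out

program = compile_orchestra("osc 440 0.5\ngain 0.8")   # compile once
block = render(program, 64)                            # render repeatedly
print(len(block))
```

Because all string handling and validation happen in `compile_orchestra`, the inner rendering loop touches only pre-decoded tuples, which is the property that makes real-time execution on a DSP-class target plausible.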