Introduction

Sound synthesis and sound design

Music has brought pleasure and entertainment to mankind throughout the whole of history. Each person is by nature equipped with one of the most elaborate and emotional musical instruments: the human voice. Whenever people feel good, music seems to fit the occasion, and it is considered quite natural to hum or sing a song. Musical instruments have brought their own moods to music, and at the current moment in human evolution there is an enormous variety of musical instruments available. The twentieth century has seen the development of a range of new and exciting electronic musical instruments. These electronic instruments are very flexible: they can produce a wide range of timbres and can be amplified to whatever loudness level sounds best for the occasion. Most of these electronic instruments are played by a keyboard, but in essence the keyboard can be replaced by any electromechanical device that is able to transform a movement caused by a human interaction into an electrical signal that can drive the sound generating core of the electronic instrument.

All sorts of technical and scientific developments have helped to create electronic instruments and the human interfaces to play them. Still, music is an art and not really a hard science, although music and sound have long been the subject of scientific research. An important realization is that science cannot really explain why much music is such a pleasure to listen to and such a joy to make. Which is not a bad thing, as probably no one is waiting for science to take the fun out of music by applying formalized rules and templates to what is also subject to ‘feel’. So, although this book covers techniques that lean heavily on scientific research, the application of these techniques will in general be aimed at creating fun. There are a lot of professionals working with sound and even more people who make music for their personal enjoyment. Mastery of sound synthesis is valuable to all of them. Still, it won’t be easy to please everyone with one single book, as some people will be more interested in how things work and others might want practical examples that just work. The aim of this book is that it can at least be used as a practical guide in workshops and courses in electronic music, covering some essential basics that are needed to operate the equipment used in sound synthesis in a way that makes sense. Additionally it can be used to explore techniques to find out how they can help in the development of one’s own musical style.

Sound synthesis is the art of creating sounds by suitable electronic means, using either analog or digital electronic devices. Sound design is the art of creating particular sounds using sound synthesis techniques. The definition of sound design as used here might be confusing to some, as the name sound design is also used for the discipline within industrial design that occupies itself with how mass produced objects should sound. Examples are how the sound of cars or ladyshaves is ‘designed’ to sound pleasing while in use, which of course has nothing at all to do with music or sound synthesizers. This book puts the emphasis on the various synthesis techniques for musical purposes and how to set up sound synthesizers to create a large range of characteristic musical sounds. The art of musical sound design is left to the artist.

Psychoacoustics

Most scientific research has been concentrated on what is named psychoacoustics, which is basically the research on how all sorts of sonic phenomena are perceived by the human mind. It should never be forgotten that the human mind is the final link in any audio chain. Meaning that the most important property of any artificial sound is ‘how it sounds’, no matter how complex or simple it is to create that artificial sound. This ‘how it sounds’ is basically equivalent to how the sound is actually perceived in the human mind. The ultimate mastery of sound synthesis is to be able to create sounds that sound good to the ear. Those sounds don’t necessarily have to be made with complex techniques or equipment that is difficult to understand, the basic idea is that when it sounds good it simply sounds good. And if it doesn’t there is still some work to be done. Anyway, whatever makes a sound sound good to the ear is valid.

From a psychological point of view sound is a manifestation in the human awareness. This means that when a sound is heard it is exclusively the perception itself that manifests in the human mind. All that is involved in making music will eventually induce this perception and the nature of the perception will fill part of the human awareness. What happens in the brain is not really part of the synthesis process itself, but the synthesis process should take into account that the human brain acts like a filter that molds the perception into a form that depends on the condition of the human mind. E.g. one must be in the mood for music to enjoy it fully. Matters like personal taste, fatigue, the social surroundings, etc., will all influence the enjoyment of music. Another and more general factor is how the brain itself processes the incoming auditory information on a ‘raw data’ level. The original function of hearing is not to enjoy music but to gather information from the immediate surroundings. Sounds will draw the attention to things happening around us, enabling the human mind to e.g. detect danger. This process works on a half-conscious level, meaning that the attention is drawn before the mind can start to think about it. This mechanism was useful in prehistoric times to warn of immediate dangers like hungry ferocious animals sneaking up from behind. In modern times it is still functional; e.g. when driving a car all sorts of sounds enter the mind at a half-conscious level and cause immediate reactions to avoid dangerous situations. In detecting danger through hearing the sense of space and distance is very important. A soft rustling sound that is very close can mean a more immediate danger than a low roaring sound heard at a long distance. However, another type of soft rustling sound might actually give a comfortable feel. So, a very important property of a particular sound is how it focuses the attention and what sort of sense it will in general introduce in the human mind, again taking into account the state and surroundings a person is in. As this process of focusing happens before one can even think about it, it can be stated that each sound itself has a property that defines how it will by default focus the attention. The wondrous thing about the human mind is that it can focus on so many different sounds and immediately give them some meaning in a vast range of settings.

Happy accidents

There is still a lot of unexplored territory in sound synthesis, as there is such a broad range of flexible sound synthesis techniques available. Creating artificial sounds by electronic means often leads to unexpected results. Some results sound very good and others very bad, while many will be somewhere in between. Happy accidents in sound synthesis are quite rewarding, as they can be immediately explored musically and lead to new forms or compositions. It is not a bad thing to be inspired by some weird sound and try to weave a musical pattern around it. In fact, this is a valid musical improvisation technique. To be able to reproduce the happy accident later it is quite important to be able to detect when such an accident happens and to quickly grasp the nature of the accident. This requires experience; when starting to use synthesis techniques happy accidents will often happen but be quickly gone, leaving one wondering why it sounded so good and how that came about. When experience starts to give more grip on what is happening, the nature of happy accidents gets understood more quickly and eventually becomes a new technique that can be used at will. This gives a lot of fun, so much that experimentation and electronic improvisation can become quite addictive. Still, music is often a mix of many different and sometimes delicate sounds and it is always important to judge a sound on how it works out in a musical arrangement.

Technology and sound design

Research on the various technical ways that specific sounds can be generated and processed by electronic means, sometimes referred to as sonology, has provided the musician and composer with many new musically useful techniques and helped to develop new electronic musical instruments that are now taken for granted in today’s music. These electronic instruments employing sound synthesis techniques have become known as sound synthesizers or synths. Sometimes the instrument exists as computer software only, in which case the instrument is named a softsynth. Application of sound synthesis techniques to create sounds for musical purposes has become known as sound design, a form of art where musical sounds are created and built from the ground up with the purpose of being used in some musical way. Sound design covers the whole process of creating the sounds to play with or to use in compositions: design refers to the creative process as a whole and synthesis refers to the more technical side of that process. Let’s take as an example the design of a hornlike sound to be played on an electronic keyboard. To create such a sound, the sound designing artist can choose from several available tools and techniques. What makes sound design an art is that the ear is always the final judge, although a lot of knowledge can be used to initially set up the sound. The last tweaks on the sound must be done by ear and not according to scientific rules. In the end the only rule that applies is whether the sound sounds good to the ear and has the right feel.

The name synthesizer refers to several classes of electronic musical instruments, classes that can be based on totally different technical concepts. The popular notion of a synthesizer is that of a musical instrument with lots of flickering lights, knobs and buttons. This romantic image is perhaps caused by the association with the imagery of science fiction in the fifties and sixties of the twentieth century. There is also some vague notion of ‘the typical synthesizer sound’, but on closer inspection this type of sound might as well have been made by an electric guitar or an acoustic recording immersed in an array of spatial sound effects. In fact, there is no such thing as ‘the typical synthesizer sound’, sound synthesizers can produce such a huge number of totally different sounds that not one of them can distinctly characterise ‘the sound of the synthesizer’.

Types of synthesizers

As said, in this book sound synthesis literally means the process of creating musical sounds using a dedicated sound synthesizer, provided this synthesizer has all the necessary tools to offer dynamic and detailed control of the created sounds. The most flexible type of synthesizer to use for this purpose is definitely the modular synthesizer. Today’s modular synthesizers appear in three instances: the traditional analog modular, the digital modular based on DSP techniques and the modular softsynth running as a software-only application on a personal computer. The last two instances are commonly referred to as virtual modular synthesizers, as they emulate to some extent the traditional analog modular synthesizer. All three instances have their little sonic advantages and disadvantages, but the synthesis techniques themselves are basically the same on all three. Analog modular synthesizers are really a collection of small and independently working devices, named modules, housed in one single cabinet. These modules can be freely reconfigured and reconnected to suit any musical need. This freedom offers endless sonic possibilities; some of the produced sounds are great while others might sound like nothing at all. There is a similarity with the palette of a painter: although there might be paint in many colours on the palette, that doesn’t yet say anything about the final painting. The art of painting is how to paint a picture with the available paint by mixing the right colours from the basic colours on the palette. The technique of painting is obviously a part of the art of painting, but for a person looking at the finished picture, the palette and brushes the painter has used are in general totally irrelevant. Still, for the painter these are quite essential, simply as they define what the painter can and cannot do. It is exactly the same with a musician using a modular synthesizer: the artist has to learn to interpret and use the possibilities of the instrument to be able to put it to a musical use. Additionally, a sound that sounds very bad in one musical context can sound great in another musical context.

All techniques discussed later in this book will to some extent be possible on the three instances of the modular synthesizer mentioned earlier, provided the necessary modules are present in the system. Most digital modular systems have the advantage that if an extra module is needed it can be instantly created as a new instance in the software. In contrast, on the analog modular it is necessary to go to the shop and buy the extra module. Still, the feel of working with an analog modular is highly valued and many musicians are still willing to pay vast sums of money for a traditional analog modular system.

The fun with any modular synthesizer is that everything is allowed; there are no rules about what to do or not to do with a sound synthesizer. Instead, there is the complete freedom to connect the modules in whatever way one feels like. Experimenting with less obvious connections is definitely part of the fun. The range of possible sounds is endless, there will always be new sounds left to be discovered and musically explored.

Short history of electronic musical instruments

Nineteenth century

Before a new technique is developed it is necessary that the underlying physical principles are discovered and examined first. The nineteenth century was a time when there was the social freedom to question the nature of natural phenomena, including the physical nature of sound. E.g. the first attempts to understand why equally pitched sounds can sound completely different took place in the nineteenth century. In 1822 the scientist Jean Baptiste Joseph Fourier published a study about how wave phenomena like soundwaves can be mathematically described and analysed by series of harmonically related sine and cosine functions. This mathematical method would become known as the Fourier transform. The method is used in 1863 by Hermann Ludwig Ferdinand von Helmholtz in his research on sound and acoustics. Helmholtz proves with an experiment that all pitched sounds are made up of a number of sinewaves with certain pitch relations, named harmonics. The Helmholtz experiment can isolate a single harmonic sinewave by a simple device that will become known as the Helmholtz resonator, in its most simple form a hollow glass ball with a little hole. The air in the ball’s cavity can resonate at a certain pitch, the pitch depending on the dimensions of the ball. Helmholtz’ study shows that the resonator can convert the kinetic energy of the vibrating air into heat. When a harmonic component in a sound is equal to the resonant frequency of the resonator, the resonator will damp the loudness level of that harmonic component by converting the sound energy of the harmonic into heat in the cavity of the ball, which causes the temperature of the ball to rise. Helmholtz noticed that this experiment also resulted in a change in timbre of the sound. So, this experiment also proved that the timbre of a sound depends on the relationship between the loudness levels of the harmonic components that are present in the sound. Using modern digital measuring devices the loudness levels of these harmonic components can be calculated by taking a sample of one cycle of the waveform and then applying the Fourier transform to the sample. This principle is the foundation for a technique named additive synthesis, a method where any conceivable sound can be synthesized by separately generating all the necessary harmonic components and mixing them together in certain volume ratios. Another popular technique that relies heavily on the Fourier transform is convolution. This convolution technique makes it possible to superimpose characteristics of one sound on another sound. Convolution needs to do an enormous amount of calculations, but by using the Fourier math the amount of necessary calculations can be dramatically reduced. It is interesting to note that techniques like convolution, which have only become practical because of the advent of fast computers, often have their roots a long, long time ago.
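
As a small illustration of the additive principle, the following sketch sums a handful of harmonically related sinewaves at chosen volume ratios; the samplerate, fundamental frequency and amplitude values are only assumptions made for the example.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (an assumed value for the example)

def additive_tone(fundamental, amplitudes, duration=1.0):
    """Build a tone by summing harmonically related sinewaves.
    amplitudes[n] is the volume ratio of the (n+1)-th harmonic."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    tone = np.zeros_like(t)
    for n, amp in enumerate(amplitudes, start=1):
        tone += amp * np.sin(2 * np.pi * n * fundamental * t)
    return tone / max(1.0, np.max(np.abs(tone)))  # normalize to avoid clipping

# A hypothetical 'hollow' timbre: only odd harmonics, rolling off in level.
tone = additive_tone(220.0, [1.0, 0.0, 0.33, 0.0, 0.2, 0.0, 0.14])
```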

First half of the twentieth century

Musical instruments reflect to a certain extent the technological level of the culture using the instrument. Up to the beginning of the twentieth century it is mainly the materials wood, metal, ivory, leather, ceramics, etc., that are used to build musical instruments. It is no surprise that when electronics becomes a common technology in the twentieth century it is used extensively in new types of musical instruments. The development of electronic musical instruments goes hand in hand with the refinements of electronic technology, spanning a period of over a hundred years. In the year 1906 Lee DeForest invents the triode vacuumtube, which he names the Audion. This device is capable of amplifying electrical signals, enabling the design of ‘active’ electronic devices like the audio amplifier and the radio. The oscillator circuits and filters that are used in radio technology inspire the Russian inventor Lev Termen in the early twenties to invent a completely new type of musical instrument, the Theremin. The instrument is fully electronic, without any mechanical parts used to generate sound. The Theremin is played by moving the hands towards two antennas. One antenna controls the pitch while the other controls the volume of the sound. The way pitches are generated is based on what is named the superheterodyne principle, a technique where two radio frequencies are mixed, resulting in signals that contain the difference and sum of the original frequencies. Termen chooses the radio frequencies in such a way that the resulting difference frequency is within the human hearing range. Detuning one of the original frequencies by waving a hand near an antenna results in a gliding pitch change. The Theremin is a very difficult instrument to master and only few musicians dare to play it. One of the mysterious aspects of the instrument is that during play it is not touched by the musician, which at the time added much to its futuristic image. In the year 1929 the American inventor Laurens Hammond starts to develop an organ based on tonewheels. The very stable electromotor he invented earlier is used to rotate the tonewheels in a precisely controlled manner. Tonewheels had already been used by e.g. Thaddeus Cahill in his Telharmonium, built around the year 1900. But the Telharmonium was gigantic in size, as it was constructed of big electricity generators that occupied a complete building. Hammond uses vacuum tubes as amplifiers, enabling him to build his organ in a much more manageable size. After the Hammond tonewheel organ is brought to the market in 1935 it immediately starts to play an important role in popular music. The big difference with the Theremin is that the Hammond organ can be readily played by anyone knowing how to play a piano or organ keyboard, so there is an immediate market for the instrument. The tonewheels generate sinewaves that are mixed in certain ratios, making the Hammond organ an example of an electronic musical instrument based on the principles of additive synthesis. As the pitches which the organ can produce depend on both mechanical and electronic devices, this class of instruments is named electromechanical instruments. Later on, in the year 1939, Hammond develops the Novachord, a version of his organ where the tonewheels are replaced by electrical circuits, making it a completely electronic organ.
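
To make the heterodyne idea concrete, the little sketch below mixes two inaudibly high frequencies and obtains an audible difference tone; the frequencies and the simulation rate are arbitrary assumptions for the example.

```python
import numpy as np

RATE = 1_000_000  # 1 MHz simulation rate, high enough for the radio frequencies below
t = np.arange(RATE) / RATE  # one second of time values

f_fixed = 170_000.0    # fixed oscillator (example value)
f_detuned = 170_440.0  # oscillator shifted by the hand near the antenna

# Mixing (multiplying) the two signals yields components at the sum
# (340 440 Hz, far above hearing) and the difference (440 Hz, audible).
mixed = np.sin(2 * np.pi * f_fixed * t) * np.sin(2 * np.pi * f_detuned * t)
# A low-pass filter would then keep only the audible 440 Hz difference tone.
```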

The first half of the twentieth century sees the invention of more electronic and electromechanical musical instruments, some with more appeal to the public than others. The main property that all these instruments have in common is that they are all meant to be played live by a musician.

The fifties and sixties

Around the year 1950 the taperecorder becomes available. And although the taperecorder is not perceived as a musical instrument, its invention soon turns out to be a very important event in the history of music, as the taperecorder offers the ability to manipulate recordings in a way that was inconceivable before. Tapes can easily be played back faster, slower or in reverse. These manipulations change the original timbre of the recorded sounds in a dramatic way. New pitches can be created by changing the playback speed and subsequently recording on a second taperecorder. Manipulations by changing playback speed had already been done before using wire recorders and gramophones, e.g. by Edgar Varèse, but using a taperecorder turned out to be more practical. However, the really new thing the taperecorder offered was the possibility to splice the tape into parts and assemble these parts in a different order. With this splicing technique a composer is able to assemble a melodic composition from snippets of sounds by splicing the tape, making overdubs and rerecording at different speeds. This made the taperecorder immediately the central component in the recording studio.

Also new was that the whole setup in the recording studio became like one new instrument for composers, offering them a totally new concept for composing. In contrast, before 1950 virtually all music was composed to be played live by musicians. Recordings on gramophone had to be done in one single take for the whole orchestra at once. After 1950 recordings on tape can be made in different places at different moments in time and be manipulated and assembled later in the studio. Many composers readily understood the new possibilities and started to experiment with this new medium. This resulted in new musical genres like tape compositions and electronic music.

The recorded source material to be manipulated can be recordings of literally anything. Like the sounds everyday objects make when hit, bowed, scratched, crushed, crashed, etc. Another source of sounds are electronic laboratory instruments normally used for measurements in electronic circuits, like tone generators, noise generators and audio filters. When material is rerecorded on a second taperecorder the sound can be manipulated during the transfer. Manipulations like audio filtering, distortion, amplitude modulation and the addition of echo or reverberation, can drastically change the colour of the timbre and add spatial characteristics to the sounds. These manipulations were named treatments and would soon become more and more important in the composing process. Although a treatment is the actual manipulation done to a sound, the ‘box’ that did the manipulation was referred to as treatment as well.

The typical fifties experimental recording studio consists of a big table with two or more taperecorders and a tape splicing device. Microphones are present to do acoustic recordings. Next to these are a mixing desk and a collection of tonegenerators and treatments. Equipment didn’t necessarily have to be in the same room. In the beginning of the fifties, for example, the WDR broadcasting company in Cologne, Germany, where the composer Stockhausen did a lot of his work, didn’t have tonegenerators in its studio for electronic music. In fact, an engineer had to go to the laboratory department on another floor and route the output of a tonegenerator to the audio cabling system that ran through the building. In the recording studio the signal could be picked up from this cabling system. Then, the taperecorder operator and the tonegenerator operator had to communicate instructions to each other over an internal telephone line. It was tricky to interconnect the tonegenerators and treatments, as these were often not designed for this use. Signal levels could differ considerably, resulting in excessive noise, unwanted distortions or even the premature death of a piece of equipment. In the WDR studio composers were not even allowed to operate the equipment directly, perhaps because the management feared liability issues in case of premature death of the composer by electrocution. Instead a special composer’s assistant was appointed to operate the equipment under the direction of the composer.

It soon became clear that there was a need for standardization of signal levels. Another need that arose was to be able to remotely control the functions on the taperecorders and treatments. At the time using electrical voltages seemed the best way to do this, resulting in what would become known as voltage control. The first use of voltage control is to toggle relays that start and stop the taperecorders or distribute the audio signals. The actual value of the controlling voltage didn’t matter as long as it had enough power to toggle the relay. Some time later voltages are used to directly control the loudness contour of the audio by using lamps and light sensitive resistors or photocells. In this case the actual voltage level does matter, as it directly represents the actual volume. In the late fifties the notion arises that every function like pitch or timbre could be made to depend on the actual level of a voltage, meaning that any musical property can be expressed by a voltage of a certain level. An additional advantage of voltage control is that the controlling voltages can be processed before being used. Many treatments that could be used on audio signals could be used on the controlling signals as well. Other possible treatments on control voltages are similar to the processes found on the analog computers of that time, like adding, subtracting, offsetting and multiplying of voltage levels. Voltage control turned out to be especially useful in serialist composing techniques. In the period between 1960 and 1965 transistors start to replace vacuumtubes in electronic circuitry. The transistor makes voltage control much easier to implement and by the year 1965 voltage controlled equipment seems as much part of the electronic studio as the taperecorder.

The modular synthesizer

There is a clear link between the collection of equipment surrounding the taperecorders in the early experimental electronic studios and the first sound synthesizers. Around 1965 the equipment is redesigned to be assembled into single standardized systems, with as many functions controlled by voltage levels as is technically feasible. Influential electronics designers and manufacturers in this period are Don Buchla and Robert Moog. The Moog systems become known to the public as synthesizers. Although Buchla initially opposes the name synthesizer and names his own system the Buchla Box, the word synthesizer soon becomes the generic name for Moog and Buchla systems and similar systems from other manufacturers.

Splicing tape is a tedious process and there was a clear need for a technique that could replace parts of the tapesplicing process. This leads to the development of a device named a sequencer. This is a box that can generate a short sequence of individually programmable voltage values. The time that a voltage is available is named a step and can have a fixed or variable length in time. After programming the voltage values the sequence can be started by hand to ‘step’ through the sequence, or it can be set to loop the sequence forever. The voltage values can represent a note sequence, e.g. short arpeggios or programmed melodies, or any other musical events that can be controlled through a control voltage. Raymond Scott, a composer and inventor from New York, had already built a huge sequencing machine in the fifties, which he named the UJT-Relay sequencer. It used oscillators driven by relays to play preprogrammed melodies and rhythms. Scott was a commercial composer and he kept his invention secret for many years, hoping that his machine would give him an advantage over other commercial composers.
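
A minimal software sketch of this stepping behaviour might look as follows; the step voltages, the step time and the send_cv output function are all hypothetical and only serve to illustrate the idea of looping through a programmed sequence of values.

```python
import itertools
import time

sequence = [0.0, 0.25, 0.58, 0.33]   # programmed step voltages (arbitrary example values)
step_time = 0.2                      # length of one step in seconds

def send_cv(voltage):
    """Hypothetical output stage: in a real system this voltage would be
    sent out to the voltage controlled equipment."""
    print(f"step voltage: {voltage:+.2f} V")

# Loop the sequence forever, one programmed voltage per step.
for voltage in itertools.cycle(sequence):
    send_cv(voltage)
    time.sleep(step_time)
```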

Don Buchla’s Music Box, which he developed for the San Francisco Tape Music Center in 1965, included a voltage sequencer module, with the purpose of replacing some steps in the tape splicing process. Bob Moog didn’t incorporate a sequencer in his system until some time later, though Moog was one of the very few people who had seen the sequencer built by Raymond Scott. The story goes that, being a friend of Scott and out of respect for his friend, Moog omitted a sequencer in the first systems he built.

The first generation of synthesizers are referred to as modular synthesizers because all sound generators and treatments are available as independently working modules. Both Moog and Buchla could assemble a system including any number of different modules according to the needs (and budgets) of their customers. The modules in the system can be interconnected with cables that were named patchcords. Using patchcords allows for free routing of signals through the available modules. It also allows for feedback and the use of audio signals as controlling signals. With some fifteen different types of modules in a typical system an enormous range of sounds and sound effects can be made by different interconnections and knobsettings. The total of connections and knobsettings is named a patch. It became custom to draw schematics of the module interconnections and knobsettings to be able to reproduce the same patch later. These schematics were named patchsheets.

Moog used a method of voltage control where the relation between the voltage level and the function is exponential, normalising an increase of exactly 1 Volt to a rise in pitch of exactly one octave, the Volt/Oct norm. This system makes it easy to play the pitch of the sound generators in equally tempered scales. All inputs and outputs of modules use the range of this Volt/Oct normalization, so all signals can be used as controlling signals as well. Controlling a function on a module by a varying signal generated by another module is named modulating. It adds tremendously to the sonic power of synthesizers, enabling the generation of totally new and as yet unheard sounds. Buchla on the other hand used a method of voltage control where the relation between the voltage level and the function is linear, named the Volt/Hertz norm. This system makes it difficult to play the pitch in equally tempered scales, but makes it much easier to play in just tuned scales and make tunings based on harmonic relations that work very well with certain more advanced synthesis techniques. This made the Buchla system more oriented towards sound designers for sound effects and advertisements and the more experimentally minded composers. But no matter the normalization used, voltage control makes it possible to control the synthesizer by literally anything that can produce voltages. This is important to realize as it means that the musician’s interface is in essence not a part of the synthesizer itself; the synthesizer can be connected to a vast range of musicians’ interfaces or electronic or electromechanical sensors. It also allows the synthesizer to be played by other machines, as long as they can produce the necessary controlling voltages in a sensible voltage range. So, the synthesizer can also be played by another synthesizer. This means that a modular synthesizer is in essence an open-ended system with unlimited expansion possibilities. A modular synthesizer also allows for feedback, where the output of a module is used to operate upon its own input, creating a recursive operation upon itself. Proper feedback of processed control voltages allows the synthesizer to compose by itself. To do so the composer ‘feeds the synthesizer a set of rules’ to which the machine has to adhere, and then lets the synthesizer run by itself. These rules can e.g. be embodied in the way feedback is applied.
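
The difference between the two norms is easy to express in a couple of lines; the base frequency and scaling values below are only illustrative assumptions, not the calibration of any particular instrument.

```python
def volt_per_octave_to_freq(cv, base_freq=261.63):
    """Exponential Volt/Oct control: every extra volt doubles the frequency.
    base_freq is the assumed pitch at 0 V (middle C in this example)."""
    return base_freq * 2.0 ** cv

def volt_per_hertz_to_freq(cv, hz_per_volt=261.63):
    """Linear Volt/Hertz control: the frequency is proportional to the voltage."""
    return hz_per_volt * cv

print(volt_per_octave_to_freq(1.0))  # 523.26 Hz: adding 1 V raises the pitch one octave
print(volt_per_hertz_to_freq(2.0))   # 523.26 Hz: doubling the voltage raises it one octave
```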

In the second half of the sixties some performing musicians express their wish to be able to play the synthesizer live. For Bob Moog this is a commercial market he cannot ignore, so the organ keyboard is adapted in a way that it can generate the necessary control signals to enable the synthesizer to be played live. More experimental interfaces are developed, like the ribbon controller, but the keyboard will prove to be the most successful commercially.

The prepatched synthesizer

The modular synthesizer is in essence a studio instrument and was developed as a composer’s tool. It is hard to use on the road, as it is bulky and very sensitive to changes in temperature. The first modular systems didn’t have temperature compensation and needed constant retuning while performing. Repatching to get a different sound is tedious work and very difficult during a live performance. Around 1969 a smaller and portable type of synthesizer appears, the prepatched synthesizer, which is much more a musician oriented performance instrument. It had become clear that a certain type of patch was used many times by keyboardists, and these smaller synthesizers had this patch hardwired internally, hence the name prepatched. This reduced the need for patching cables, as different sounds could easily be created by only throwing a couple of switches and tweaking the knobs. Three instruments from different manufacturers appeared almost at the same time around 1969: the Minimoog by Moog, the ARP2600 by ARP and the British VCS3 by EMS. The Minimoog is completely hardwired internally. The ARP2600 is still partially modular, as patchcords can be used to override the internal interconnections. The VCS3 has no internal hardwiring but instead uses a small pin matrix to make the connections between the small set of modules it houses, so in fact it is still a true modular synthesizer.

These three instruments mark the beginning of a new generation of synthesizers. Very important to the musician is that these synthesizers are in essence monophonic. This might appear a limitation, but in fact it enables keyboard players to play the same type of solos that saxophonists and guitarists play, and so get a bit more into the spotlight on stage. Synthesizers like the Minimoog have added play controllers like pitchbenders and modulation wheels that let the musician bend and modulate notes in ways that allow for very expressive soloing. Another feature is that the sound of these synthesizers has enough power to stand out against other heavily amplified instruments in the typical electric bands of the seventies. These features quickly make this generation of synthesizers very popular amongst keyboard players and the prepatched synthesizer becomes one of the basic instruments in the electric popband. Manufacture of modular systems soon ceases in favour of these portable prepatched synthesizers. Still, the much greater flexibility of modular synthesizers compared to prepatched synthesizers is up to this day highly valued. Using a modular synthesizer these days, no matter if it is analog or digital, is still considered playing in the top league of sound synthesis.

The polysynth and preset synthesizers

Around 1978 the prepatched synthesizer becomes polyphonic: the polysynth. In the first half of the eighties digital techniques and mass production make the polysynth a fully matured, reliable and well-respected musical instrument. The new chip technology enables complete analog modules to be manufactured as single chips, and these chips match each other closely enough to be used in a polyphonic system, where each voice has to match the other voices exactly. Two chip manufacturers supply the synthesizer industry with these chips, Solid State Music and Curtis Electromusic Specialties. Some of their chips, prefixed by the codes SSM or CEM, are still manufactured and available up to today. Well-known polysynths around 1980 are the six voice polyphonic Memorymoog and the five voice polyphonic Prophet-5. The Prophet-5 is built by Sequential Circuits, the company of synthesizer designer Dave Smith.

Digital technology is needed to control a polyphonic system. Digital chips are used to scan the keyboard for chords and to distribute the correct control voltages for a particular key to the modules. There is a crucial difference between the architecture of a polysynth and the monophonic prepatched synthesizer, which by this time becomes known as the monosynth. While on a monosynth the knobs connect directly to the sound generating and modifying circuits, in the polysynth a little computerchip known as a microcontroller is put between the knobs and the sound circuitry. This microcontroller is programmed with the intelligence to measure the control voltages of the knobs and other sources and process them digitally into new values that are distributed to their respective destinations. The source values and their destinations are in fact the patch, and in this way control the final sound. These values and destinations can be stored together in a preset memory connected to the microcontroller and can be recalled as a single entity, named a preset. Recalling a preset takes only a few milliseconds, fast enough to be done while playing. This is an enormous improvement over the patching of cables by hand on a sixties modular synthesizer. On the polysynth of the early eighties digital technology is used only to process the control signals. The microcontroller does not yet do digital soundgeneration or processing of audio signals; sound synthesis itself is still done using analog electronics.
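
In software terms a preset is no more than a named bundle of parameter values that the microcontroller distributes to their destinations when the preset is recalled. The sketch below illustrates that idea; all field names and value ranges are invented for the example.

```python
from dataclasses import dataclass, asdict

@dataclass
class Preset:
    """A preset as a bundle of stored values and their destinations
    (field names and ranges are assumptions for this illustration)."""
    name: str
    osc_pitch_semitones: float
    filter_cutoff: int      # 0..127
    filter_resonance: int   # 0..127
    env_attack_ms: int
    env_release_ms: int

def recall(preset, send_to_voice):
    """Distribute every stored value to its destination in the sound circuitry."""
    for destination, value in asdict(preset).items():
        if destination != "name":
            send_to_voice(destination, value)

brass = Preset("Brass 1", 0.0, 84, 20, 35, 400)
recall(brass, lambda dest, val: print(dest, "=", val))
```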

The multitimbral synthesizer and MIDI

Synthesizers can be used to play different instruments in an arrangement. To do this live several synthesizers are needed, each one set to the sound of one of the instruments in the arrangement. In the first half of the eighties the polyphonic preset synthesizer is adapted in such a way that each voice can play a different instrumental sound. By splitting the keyboard in sections, and assigning each section to a different sounding voice, it is possible to use the instrument in a multitimbral way. It is also possible to stack different sounds upon each other, resulting in very thick symphonic textures. However, there is still only a limited number of voices available on the polysynth, typically four to eight, and with this technique one easily runs out of voices. Connecting polyphonic synthesizers to each other by means of control voltages and patchcords is in practice too complicated to be feasible. For this reason Sequential Circuits developed a digital means of connecting synthesizers to be able to have one synthesizer play several others. More manufacturers, like the Japanese instrument building company Roland, see the sense of this idea and after adding some minor modifications they jointly decide to promote this digital connection as an industry standard, to be used on every new synthesizer. The connection is baptised MIDI, an acronym for Musical Instrument Digital Interface.

MIDI is both a hardware and a software specification. The hardware is simple, very similar to the way printers and telephone modems are connected to computers. But the power is in the software. Through MIDI a synthesizer can send a set of commands to another synthesizer, e.g. a command to play a certain note. This set of commands is named the MIDI Protocol. Each command is assigned to a MIDI channel of which there are sixteen. A synthesizer can be set to react to commands in one specific channel only, or to act on commands received in any of the sixteen channels.

In the MIDI software specification symbols are assigned to possible musical events, each symbol being represented by a short digital code. The specification defines how values can be added to the symbol to send well-formed commands. Technically the command symbol is expressed as a hexadecimal digit. There is a symbol for the pressing of a key, together with a channel number, a value denoting which key is actually pressed and a value denoting the velocity of the keypress. This symbol is paired with another symbol that stands for the release of a key, again with a channel number, a value to identify which key is released and the velocity at which it is released. The number of the channel in which the command should act is embedded with the command symbol in the first part of the command. There are seven commands that can act in a single channel: NoteOn, NoteOff, PolyphonicKeyPressure, ControllerChange, ProgramChange, ChannelPressure and PitchwheelChange. Next to these commands there are commands that act globally and are not tied to a single channel. These global commands have to do with timing information and start/stop control for automated devices like sequencers and recording devices. Additionally there is a SysEx command that can encapsulate manufacturer and model-specific information of variable length. The SysEx command enables the transfer of presets to another synthesizer or to a computer. SysEx also enables the transfer of short samples of sound. It is up to the manufacturer to decide on which commands a specific synthesizer model should react and which commands it can send to other devices. This is written down in a table named the MIDI implementation chart that should be somewhere in the back of the manual of every MIDI equipped synthesizer.
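
As an illustration of how the command symbol and the channel number share the first byte of a command, a NoteOn and a NoteOff message can be assembled as follows; this is only a minimal sketch of the byte layout, not a complete MIDI implementation.

```python
def note_on(channel, key, velocity):
    """Build the three bytes of a MIDI NoteOn command.
    channel: 0..15, key and velocity: 0..127."""
    status = 0x90 | (channel & 0x0F)   # command symbol 9 plus the channel number
    return bytes([status, key & 0x7F, velocity & 0x7F])

def note_off(channel, key, velocity=0):
    """Build the matching NoteOff command (command symbol 8)."""
    status = 0x80 | (channel & 0x0F)
    return bytes([status, key & 0x7F, velocity & 0x7F])

print(note_on(0, 60, 100).hex())   # '903c64': press middle C on channel 1
print(note_off(0, 60).hex())       # '803c00': release the same key
```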

When several MIDI equipped synthesizers are used, only one of them requires a keyboard, as all the other synths can be played by that keyboard through the MIDI cabling. This leads to a new type of synthesizer named a MIDI expander. It is a synthesizer without the keyboard but with a MIDI connection to be able to play it from another synth. Omitting the keyboard makes expanders less expensive than their keyboard equipped versions. Most expanders are rackmountable and the name racksynthesizer becomes another common name for the expander. After the year 1985 many new synthesizers come in both a keyboard and the cheaper rack version.

Another important feature of MIDI is that the MIDI commands themselves can be recorded on a computer. The introduction of MIDI happens around the same time that personal computers become available. Around the year 1984 the computer manufacturer Atari makes a very clever move by including MIDI interface connectors on their new ST series of budget computers. Very soon software programs appear for the Atari computer that allow the user to record, edit and play back MIDI information. This means that an arrangement can be played live into the computer, the software recording only the play information. After recording the play information the arrangement can be heavily edited and new parts added or deleted at will. Intermediate states of the arrangement can be conveniently saved on a floppy disk to be recalled at a later time. Many popular music software programs in use today saw their first implementations on the Atari ST computers.

To control synthesizers that did not yet have MIDI, like the older analog modular synthesizers, devices named MIDI to CV converters are developed. Such devices are capable of converting the incoming stream of commands into one or more control voltages and gate signals that can directly control the analog modules.
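
The core of such a conversion is small: a note number becomes a pitch voltage and the key state becomes a gate voltage. The sketch below assumes the 1 Volt/Oct norm with middle C at 0 V and a 5 V gate, which are common but by no means universal choices.

```python
def midi_note_to_cv(note, gate_on, base_note=60):
    """Convert a MIDI note number and key state into a 1 V/Oct pitch voltage
    and a gate voltage (assumed reference: middle C at 0 V, 5 V gate)."""
    pitch_cv = (note - base_note) / 12.0   # one volt per octave, 1/12 V per semitone
    gate = 5.0 if gate_on else 0.0
    return pitch_cv, gate

print(midi_note_to_cv(72, True))   # (1.0, 5.0): one octave above middle C, gate high
print(midi_note_to_cv(72, False))  # (1.0, 0.0): key released, gate low
```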

Digital sound synthesis techniques

The first steps in this field were taken in 1957 by Max Mathews at Bell Labs in the United States. Mathews had written the program Music I as a ‘socially desirable’ side project next to his official job at Bell Labs. The first rendering of a 17 second long audio file using Music I is said to be the first computer generated sound. Mathews kept on developing his Music software through different versions over many years, having a decisive influence on what is now known as computer music. In the early sixties many universities and research institutes that had access to computers started to experiment with calculating soundwaves directly by computer programs. The technique of generating and manipulating soundwaves in the digital domain is based on the principle of chopping the soundwave into a sequence of very small timeslices, named samples. Every sample becomes in fact a single value that represents the mean value of the sound signal during the short period the sample is pending. The device that can slice and measure the timeslices is named an analog to digital or AD converter. When the rate of slicing is somewhat more than twice the highest pitch perceivable by the human ear, the sequence of samples is perceived as a continuous audio signal, in the same way as in a movie twenty-five still pictures a second appear to project a fluid motion to the human eye. This means that in practice the sound signal must be sampled between forty thousand and fifty thousand times a second. The number of measurements per second is named the samplerate of the digitized sound.
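
A short sketch of this idea: one second of a 440 Hz tone is nothing more than a list of sample values calculated at a fixed samplerate. The samplerate and frequency chosen here are just example values.

```python
import numpy as np

SAMPLE_RATE = 44100   # measurements per second
FREQ = 440.0          # pitch of the generated tone in Hz

# Each sample is one value of the waveform at one point in time.
n = np.arange(SAMPLE_RATE)                              # one second of sample indices
samples = np.sin(2 * np.pi * FREQ * n / SAMPLE_RATE)    # the long row of sample values

# Fed through a DA converter at the same samplerate, this row of numbers
# is perceived as a continuous 440 Hz tone.
```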

Another requirement is a high enough accuracy for the measurement of the mean value of the signal during a single sample period. This accuracy must be somewhere around the noisefloor of the signal to be sampled. The noisefloor is the point where a signal is so low in level that it starts to become indistinguishable from the natural noise present in the analog parts of the signal chain. The accuracy or resolution of digital numbers is determined by the number of bits used to represent the value, the more bits the higher the accuracy, and by whether the values represented by the bits are fixed point or floating point values. In any case, the measurement has to span the whole dynamic range of the signal. In practice the dynamic range is the space between the loudest level that can be recorded without distortion and the noisefloor. In the case of fixed point values there is a simple relation between the number of bits in the digital number representing the value and the dynamic range of the signal: each extra bit will increase the dynamic range by 6 dB. For a professional taperecorder the dynamic range is about 60 dB, which means that at least ten bits of resolution would be needed to represent this range. But there is a bit more to it than this simple assumption: recording tape can be overdriven, causing the tape to saturate. This tape saturation is not really problematic when it happens now and then. In fact, a little tape saturation effect is said to sound good. But when a signal is digitised with an AD converter and there is a peak in the signal that exceeds the measurement range, then there will be an effect named clipping. Clipping sounds awful and must be avoided at all costs during a recording. To reduce the chances of clipping some extra headroom is needed, requiring some extra bits. These days it is common to use 24 bit converters for professional level audio recording, not only to reduce noise, as the quantization noise at 24 bits is well below the noisefloor of human hearing, but specifically to offer more headroom during the recording and mixing. For the final mixed recording an average resolution of at least 14 to 15 bits is needed, as the digitization process itself adds its own sort of digital noise, adding to the noisefloor. This has become the standard for the Compact Disc, with its sample rate of 44.1 kHz and 16 bit samples giving an effective resolution of around 15 bits. To go back from the digital numbers to an analog audio signal that can be fed to a loudspeaker, a device named a digital to analog or DA converter is used. To take an analogy with a tape recorder, the AD converter is functionally similar to the recording head and the DA converter to the playback head, the recording tape being some appropriate type of memory device in the computer or some type of mass memory storage like a harddisk, a CD, a DVD, a flash-memory card, an optical disk, etc.
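
The 6 dB per bit rule gives a quick way to estimate the resolution needed for a given dynamic range; the short sketch below simply applies the approximate figure mentioned above.

```python
def dynamic_range_db(bits):
    """Approximate dynamic range of fixed point samples: roughly 6 dB per bit."""
    return 6.02 * bits

print(dynamic_range_db(10))   # ~60 dB, about the range of a professional taperecorder
print(dynamic_range_db(16))   # ~96 dB, the Compact Disc format
print(dynamic_range_db(24))   # ~144 dB, modern converters with ample headroom
```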

The whole idea of digital sound synthesis is to have the computer calculate the list of values or samples that together in one long row represent the sound signal. The calculations are in general rather simple, but they have to be repeated for each single sample, still requiring a very powerful computer. In the sixties computers were definitely not yet up to the task of making digital recordings at a high enough sample rate, simply because memory was rather slow and way too expensive to be wasted on a snippet of ordinary sound. However, the method of generating sound was feasible by having the little programs run maybe five thousand times a second and recording the DA-converted results on a taperecorder running at a relatively low speed. After the recording the tape was played back at a speed some eight times faster to produce the required quality. Rerecording on another tape would create the master tape for a record or a tape to be played during a presentation, radio broadcast or concert.

Digital signal processors

After the first silicon chips became available in the sixties, chip technology developed at an incredible speed. Around the start of the eighties the VLSI or ‘very large scale integration’ technique is available for mass production of digital chips, enabling the manufacture of chips with millions of transistors on an area the size of a postage stamp. In the early eighties a special type of very powerful computerchip is developed, optimized to do repeated calculations like those used in sound synthesis and sound modification. This type of chip is named a Digital Signal Processor or DSP. The initial reason why synthesizer manufacturers are interested in this technology is that analog oscillators are hopelessly temperature sensitive, making their pitches drift constantly. The temperature compensation techniques needed in especially polysynths put quite a burden on their manufacture. A DSP can be programmed to emulate an oscillator without the dreaded temperature drifts, finally enabling the use of promising synthesis techniques which need rock-stable oscillators, like the linear FM technique. The first commercially available synthesizer based on a DSP chip is the Yamaha DX7, its synthesis based on the linear FM technique already researched in the late sixties by John Chowning. The sixteen voice polyphonic and MIDI equipped DX7 became immensely popular overnight, though it was a drag to program useful sounds oneself. But it came with a big factory preset library on board with reasonably convincing electric piano, organ and brass sounds. One of the main reasons why it became such a popular instrument was its relatively light weight; it was so easy to take it to a gig and provide the average keyboard musician with the most common ‘bread and butter’ sounds. Being able to produce relatively lightweight instruments is definitely a big advantage of using DSP chips. At the moment almost every new synthesizer uses a DSP somewhere in its internals, either for sound synthesis or to add effects like chorus, echo and reverberation.
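
In its simplest two-operator form, linear FM means that the phase of a carrier sinewave is modulated by a second sinewave, which is why the rock-stable pitch of a digital oscillator matters so much. The sketch below is only a bare-bones illustration; the carrier frequency, ratio and modulation index are arbitrary example settings.

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE   # one second of time values

carrier = 440.0   # carrier frequency in Hz
ratio = 2.0       # modulator frequency as a ratio of the carrier
index = 3.0       # modulation index, controlling the brightness of the timbre

# The modulator sinewave is added to the phase of the carrier sinewave.
modulator = np.sin(2 * np.pi * carrier * ratio * t)
tone = np.sin(2 * np.pi * carrier * t + index * modulator)
```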

The sampler

Another development in the early eighties extends directly on the taperecorder and the tape manipulation techniques developed in the fifties. This development goes back to the late sixties, when an instrument named the Mellotron is developed and marketed. The Mellotron houses a mechanism of small tapes and playback heads, each one dedicated to a key of the small organ-type keyboard. On each tape is a fixed recording of some sound at a certain pitch, and if the corresponding key is pressed the sound is played back. After a key is released its corresponding tape is quickly rewound. The Mellotron came with factory recorded tapes with a choice of orchestral ensembles, string sections, brass sections, silver flutes and the like. By using a Mellotron a recording studio didn’t have to hire an orchestra for budget recordings, saving immensely in time and money. The Mellotron also became popular with the symphonic and psychedelic rockbands at the end of the sixties. On request the factory could fit the Mellotron with custom recordings. Many of the sound effects of the popular British television series Dr. Who were put in a Mellotron, so they could be easily reproduced on demand.

The big disadvantage of the Mellotron is that it is a mechanical device. Both the tapes and the mechanics wear quickly over time, needing expensive servicing. Taking the instrument on a tour wasn’t very healthy for it either. Around 1980 digital techniques offer a solution and a new type of instrument is developed, named a sampler. The basic idea of the sampler is in fact not much different from that of the Mellotron, the tape being simply replaced by digital memorychips. The playback heads are replaced by a DSP chip that reads digitized sounds from the digital memory and routes them to a DA converter. An interesting feature is that all digitized sounds can share the same memory, and the DSP can play a single digitized sound polyphonically at different pitches. In the beginning period of samplers two instruments star on the stage, the Fairlight CMI and the NED Synclavier. Both are in essence quite traditional computer systems complemented with dedicated hardware for AD and DA conversion for recording and playback of audio. Both have an organ-type keyboard to play notes, but control and programming of sounds is done by means of a video monitor and an ASCII keyboard. Both came in a big 19 inch system rack, with the typical late seventies computerlook. Noisy fans and eight inch diskettes made the scene complete. The big advantage over the Mellotron was that different sounds could be quickly and conveniently loaded from a computer disk, while replacing sounds on the Mellotron was a complicated and time consuming mechanical operation. Sounds could be sampled instantly on the spot and trimmed and saved for later use. But there was more: sounds could be manipulated by the system processor and a copy of the manipulated sound saved as a new, independent sound. It was even possible to generate sounds by sound synthesis programs run on the system processor and again save the results for later use. The big disadvantage of both systems was their cost, a very substantial sum of money had to change hands before a musician could call him- or herself the proud owner of a Fairlight or Synclavier system.
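
Playing a stored sample at different pitches comes down to reading it at a different speed, much like changing the playback speed of a tape. The following sketch uses simple linear interpolation; the function name and the transposition in semitones are just assumptions for the example.

```python
import numpy as np

def play_at_pitch(sample, semitones):
    """Read a stored sample at a different speed to shift its pitch,
    using linear interpolation between neighbouring sample values."""
    step = 2.0 ** (semitones / 12.0)                 # playback speed ratio
    positions = np.arange(0, len(sample) - 1, step)  # fractional read positions
    idx = positions.astype(int)
    frac = positions - idx
    return (1.0 - frac) * sample[idx] + frac * sample[idx + 1]

# play_at_pitch(recorded_note, 7) would transpose a stored note up a fifth
# (and, like speeding up a tape, also shorten it accordingly).
```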

The first serious competition is the Emulator. With the appearance of an average polysynth and a pricetag that, although still pretty stiff, is way below the Fairlight/Synclavier pricetag, it paves the way for the average professional keyboard musician to explore sampling. But the breakthrough for the sampler comes in 1986 when Akai releases the S900 sampler, a rack device with a very reasonable pricetag, but with outstanding specifications for that time. Around 1986 sampling had much appeal, maybe because before the S900 it was so far beyond the budget of most musicians. This quickly made the S900 immensely popular, similar to the success of the Yamaha DX7.

The treatments offered by a modular synthesizer, like filterings and distortions, can be applied to any sound source, no matter whether it is an internal sound source, another synthesizer, a sound recording or a sampler. In practice modular synthesizers and samplers turned out to be ideal companions, as the modular synthesizer can be used as the source to be sampled, and at play time the modular can be used to manipulate samples made earlier. The combination of sampler and modular synthesizer is in fact very similar to the electronic studio setup that was developed in the fifties of the last century, the sampler being akin to the taperecorder and the modular synthesizer to the equipment that surrounded it.

A variation on the sampler is the sample-based drumcomputer. Shortly after the release of the Emulator, Emu uses the Emulator technique to design the Drumulator. At about the same time Roger Linn releases the Linndrum. Both instruments use recordings of real drums, making the sound very convincing. Recording real drums in a recording studio is a tedious process: first of all the drums have to be mic’ed up and then the right mixing balance and sound has to be found. This takes lots of expensive studiotime. The drumcomputer in contrast can be plugged in directly and it is ready to go. Active drumpads were developed that could be plugged in at the back of a drumcomputer or through a ‘drum to MIDI’ converter box. These drumpads allow drummers to play the drumcomputer like a familiar drumkit. Another feature of the drumcomputer is that patterns can be recorded or preprogrammed as MIDI information and arranged in songs. The Linndrum concept is later bought by Akai and is still available in their popular MPC range of products.

Sampling and drum programming have had an enormous influence and initiated new styles of music. Still, the techniques employed with a sampler are in essence the same as the tape techniques developed in the fifties and sixties electronic studios, though the genres of music they are used for are now quite different. However, using a sampler is much more convenient and straightforward than using a taperecorder, accounting for the sampler’s immense and ever growing popularity. Today’s laptop computers gradually take over the territory samplers have claimed for some two decades, but this is only because a laptop with the appropriate software is itself also a sampler; it does in essence the same thing with the same technology. Still, reliable hardware samplers like the Akai MPC range offer extra play controllers like drumpads and knobs to instantly tweak the sounds, making them ideal to use during live performance. And the samplers’ musician friendly and dedicated operating systems will undoubtedly keep them from extinction for some time to come.

Digital effect units

Many treatments are based on manipulations of time delays or time displacements. Well known effects are the creation of echo and reverberation. Techniques that use a cyclic digital memory and a DSP to read and write signals from and to this memory allow the creation of high quality and natural sounding time displacement treatments. Echo, reverberation and related effects are popular with all musicians, so they appear in separate boxes that can be used by synthesizer players, guitar players, vocalists, etc. These days most synthesizers have an effect unit built in, although these are generally not of the same quality as the high end studio devices.
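
As a minimal illustration of the cyclic-memory principle behind such time displacement effects, the sketch below implements a simple echo: each output sample is the input plus a delayed, attenuated copy read from a circular buffer. The delay length, feedback and mix values are arbitrary example settings.

```python
import numpy as np

def echo(dry, delay_samples, feedback=0.4, mix=0.5):
    """Simple echo built on a cyclic memory: write the input into a circular
    buffer and mix in what was written there delay_samples ago."""
    buffer = np.zeros(delay_samples)   # the cyclic delay memory
    out = np.zeros(len(dry))
    pos = 0
    for i, x in enumerate(dry):
        delayed = buffer[pos]                  # sample written delay_samples ago
        out[i] = x + mix * delayed             # dry signal plus the echo
        buffer[pos] = x + feedback * delayed   # write back with feedback for repeats
        pos = (pos + 1) % delay_samples        # step cyclically through the memory
    return out
```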

Next to effects based on time displacements, digital effect units can offer filterings like multiband filters, equalizers and vocoders. Some can also manipulate the dynamics of audio signals like applying compression, and others can even correct pitches of vocals.

The cheaper effects units may have some hundred or more types of effects, but only one effect can be used at the same time and most of its parameters are fixed. The more expensive units developed for professional studio use can have several different effects working together and the parameters can be finetuned. On high end equipment effects can be freely chained in any order, much like the patching on a modular synthesizer.

Synthesizers and digital effect units are commonly used together, the synthesizer creating the timbres and the effect unit creating the spatial effect by placing the sound in an acoustic space of a certain characteristic. The main difference in manipulation is that a synthesizer works on the level of individual notes and single voices, while the effect unit works on the total of the sound.