Musical Techniques

Up until the 20th century, all music was composed and realised by the skills of musicians, employing instruments that had been created with a matching level of skill. The arrival of new technology allowed composers to create alternative forms of music, sometimes of dubious artistic value, often produced by the manipulation of existing sounds or by using material created by electronic circuitry.

Musique Concrète

During World War II, whilst radio reporters in the battlefields of Europe were preparing their recordings on shellac-coated disks, the Nazi propaganda machine was broadcasting material prepared on the recording machine of the future. This was the Magnetophon, the first real tape recorder.

At the end of hostilities, the arrival in Britain of these advanced machines came as a shock, persuading EMI to build the British Tape Recorder 1 or BTR/1, which was mainly based on the original German design. As in continental machines, the tape heads on this recorder faced away from the operator, making tape editing very tricky. This was corrected in the company’s next model, the massive BTR/2, many of which remained in service at the British Broadcasting Corporation (BBC) until the 1970s.

Miniature valves made it possible for EMI’s later machine, the TR/90, to fit into a standard 19-inch rack or into a mobile trolley. All these professional machines incorporated three tape heads (erase, record and replay), allowing the user to check the quality ‘off tape’ whilst creating a recording. But in Britain, the most significant machines were those destined for the semi-professional or amateur market, notably the Ferrograph, beginning with the Series One, which remained almost unchanged until the Series Five and was followed later by the more modern Series Six and Series Seven machines.

Indeed, it was the enthusiastic amateur and experimenter who often saw the real potential of the tape recorder. Although tape and a dextrous razor blade had originally been used for generating propaganda, they could also be employed creatively to change the nature of recorded sound.

In 1948, Pierre Schaeffer used tape manipulation of natural and mechanical sounds to make a pioneering radio programme. His new techniques, known in artistic circles as musique concrète, used tape recorders to create new sounds from old. He used ‘spooling noise’, played tapes backwards or at different speeds, or turned the spools by hand. By using a machine with a variable-speed capstan motor, the pitch of a sound could be modified with musical accuracy. Later on, the BBC Radiophonic Workshop used a Leevers-Rich recorder with a rotary switch calibrated in musical intervals.
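
As a worked illustration of the arithmetic behind such a calibrated control, each semitone corresponds to a speed ratio of the twelfth root of two. The sketch below assumes a 15 ips reference speed purely for illustration.

    # Speed ratios for pitch-shifting by varispeed: each semitone up
    # multiplies the tape speed by 2**(1/12); an octave doubles it.
    REFERENCE_SPEED_IPS = 15.0  # assumed studio tape speed, inches per second

    def varispeed_for_semitones(semitones: int) -> float:
        """Capstan speed needed to shift the replayed pitch by `semitones`."""
        return REFERENCE_SPEED_IPS * 2 ** (semitones / 12)

    for st in range(-12, 13, 3):
        print(f"{st:+3d} semitones -> {varispeed_for_semitones(st):6.2f} ips")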

Tape Sampling

All the principles of what we now call sampling were in place. Any source material could be used and then processed in any manner, and the only limitation was the producer’s imagination. Most sources of sound were familiar to drama studios, including breaking glass; gravel in boxes; percussive noises produced by musical instruments, bottles or metal tanks; machinery and street sounds.

Samples, unlike synthesised material, contain the complex harmonics, and harmonic decay, of natural sounds. The effect can be disconcerting or dramatic, as in the dinosaur sounds used for the film Jurassic Park, which were created from recordings of real animals. But as the pitch is moved further away from that of the original, it develops characteristics different from real sounds: the pitch change alters the subjective ‘size’ of a sound but ignores the physical properties of the materials that created it. And of course, the harmonic content of real sounds varies across the musical scale. For example, every note on an acoustic piano is different, each string vibrating the others differently, changing as the note decays.

Then, as now, you needed a clean recording of every sample. In addition, the beginning and end of each sample would have to be carefully trimmed using a razor-blade. Extra samples could be produced by dubbing (copying) the original recording onto another machine. To change a sample’s pitch, the original recording would be dubbed from a varispeed machine onto another recorder, using a separate ‘pass’ for each required pitch change. Finally, all the samples, modified or otherwise, would be edited together into a continuous sequence using a razor-blade, editing block and splicing tape.
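
One consequence of this method is that a pitch-shifted copy also changes length, since speed and pitch are tied together on tape. A rough sketch of the arithmetic, with an assumed two-second sample:

    # Dubbing a sample from a varispeed machine changes its length as well
    # as its pitch: playing the source faster raises the pitch and shortens
    # the copy in the same proportion. Illustrative figures only.
    original_length_s = 2.0  # assumed length of the source sample in seconds

    for semitones in (-12, -7, 0, 7, 12):
        speed_ratio = 2 ** (semitones / 12)   # replay speed relative to normal
        dubbed_length = original_length_s / speed_ratio
        print(f"{semitones:+3d} st: speed x{speed_ratio:.3f}, "
              f"copy lasts {dubbed_length:.2f} s")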

To create a long cross fade, a special editing block with an exceptionally shallow splicing angle could be used. For normal editing, a cut at 90 or 45 degrees was common, although the latter would often give an unacceptable ‘jump’ in the image of a stereo recording: fortunately, most early examples of musique concrète were produced in monaural (mono) sound. Any sample could be made into a continuous sound by carefully splicing together the ends of a recording to form a tape loop.
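
The relationship between splice angle and cross fade length can be sketched with a little trigonometry; the quarter-inch tape width and 15 ips speed below are assumptions chosen for illustration.

    import math

    # How the splice angle sets the length of the join, and hence the
    # cross fade time, for quarter-inch tape. Figures are illustrative.
    TAPE_WIDTH_IN = 0.25
    TAPE_SPEED_IPS = 15.0  # assumed replay speed

    def crossfade_ms(splice_angle_deg: float) -> float:
        """Cross fade time for a cut at `splice_angle_deg` to the tape edge.
        90 degrees is a straight (instant) cut; shallower angles overlap longer."""
        overlap_in = TAPE_WIDTH_IN / math.tan(math.radians(splice_angle_deg))
        return 1000 * overlap_in / TAPE_SPEED_IPS

    for angle in (90, 45, 10):
        print(f"{angle:2d} deg cut -> {crossfade_ms(angle):6.2f} ms cross fade")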

Time Delays

Having discovered the useful delay introduced between the record and replay heads of a tape machine, the concrète pioneers explored the possibilities of running a tape directly from the left hand spool of one tape machine to the right hand spool of another, passing both sets of record and replay heads.

By drawing the tape out between two machines on a sprung loop stand (or bottles, if stands weren’t available), the delay on the output from the second machine could be extended. Also, the audio output of the second machine could be carefully mixed back into the input of the first machine, so creating rising and falling ‘waves’ of sound. The guitarist Robert Fripp, an ex-member of King Crimson, was so enamoured of this trick that he christened it Frippertronics, many years after it was first used.
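
The behaviour of such a feedback loop can be approximated in a few lines; the tape path length, speed and feedback level below are illustrative assumptions rather than a description of any particular setup.

    import numpy as np

    # A rough simulation of the two-machine tape delay: the second machine
    # replays what the first recorded some seconds earlier, and part of its
    # output is mixed back into the first machine's input. All values are
    # assumptions made for illustration.
    SAMPLE_RATE = 8000
    HEAD_SPACING_IN = 60.0        # assumed tape path between the two machines
    TAPE_SPEED_IPS = 7.5          # assumed replay speed
    FEEDBACK = 0.6                # how much of machine two is fed back

    delay_samples = int(SAMPLE_RATE * HEAD_SPACING_IN / TAPE_SPEED_IPS)

    x = np.zeros(SAMPLE_RATE * 30)                          # 30 seconds of input
    x[:SAMPLE_RATE] = np.random.randn(SAMPLE_RATE) * 0.1    # a one-second burst

    y = np.zeros_like(x)
    for n in range(len(x)):
        echo = y[n - delay_samples] if n >= delay_samples else 0.0
        y[n] = x[n] + FEEDBACK * echo    # each pass returns quieter than the last

    print(f"delay = {delay_samples / SAMPLE_RATE:.1f} s per pass")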

Phasing and Flanging

Two other effects that briefly saw popularity were phasing and flanging, both caused by upsetting the azimuth (the vertical angle of the tape head) during recording or playback, usually by touching the flange of a tape spool. Neither could be described as musical, but they were very dramatic.

Phasing was the result of combining the input and output of a tape machine, or the two outputs of a stereo machine. As the phase between the signals changed, the output at certain frequencies, and their harmonics, was cancelled out, an effect identical to a comb filter. Flanging was similar, but relied on feeding some of the tape machine’s output back into the input, almost causing oscillation at some frequencies. The result was more metallic than phasing and was used on several pop records in the sixties.
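
The comb-filter behaviour is easy to demonstrate numerically: summing a signal with a slightly delayed copy of itself cancels every frequency whose half-period equals the delay. A minimal sketch, with an assumed one-millisecond delay:

    import numpy as np

    # Phasing behaves like a comb filter: adding a signal to a slightly
    # delayed copy of itself cancels any frequency whose half-period matches
    # the delay. The delay value here is an illustrative assumption.
    delay_ms = 1.0
    notches = [(2 * k + 1) / (2 * delay_ms / 1000) for k in range(5)]
    print("first notch frequencies (Hz):", [round(f) for f in notches])

    # The same effect seen on a short noise burst: the delayed-and-summed
    # version has a rippled ('combed') spectrum.
    fs = 44100
    noise = np.random.randn(fs // 10)
    d = int(fs * delay_ms / 1000)
    phased = noise.copy()
    phased[d:] += noise[:-d]          # add the delayed copy
    spectrum = np.abs(np.fft.rfft(phased))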

Stretching the Tape

The success of tape manipulation spawned some novel devices. One example, the Binson Echorec Baby, had a spinning metal drum, surrounded by tape heads that produced multiple delays. Another device, the Tempophon, was strapped to the side of a tape machine, with the tape passing its spinning replay head. Since the tape speed in relation to this head was set by the Tempophon itself, irrespective of the actual speed, you could vary the pitch of a sound without altering its tempo.
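
A rough sketch of the arithmetic: the pitch ratio follows from the tape-to-head speed relative to the nominal tape speed. The figures and sign convention below are assumptions for illustration only.

    # The Tempophon's spinning replay head sets its own speed relative to
    # the tape, so pitch can be raised or lowered while the tape itself
    # (and hence the tempo) runs at its normal rate. Illustrative figures.
    TAPE_SPEED_IPS = 15.0

    def pitch_ratio(head_surface_speed_ips: float) -> float:
        """Relative tape-to-head speed over nominal speed.
        Positive values assume the head moves against the tape (pitch up)."""
        return (TAPE_SPEED_IPS + head_surface_speed_ips) / TAPE_SPEED_IPS

    for v in (-3.75, 0.0, 3.75, 7.5):
        print(f"head at {v:+5.2f} ips -> pitch x{pitch_ratio(v):.2f}")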

The BBC’s Programme Effects Generator (PEG) provided spot-effects for radio drama, including The Archers. This device used a separate tape cartridge for each effect, the tape being pulled out of the cartridge, played and then drawn back in again. A further development of PEG was the Mellotron, a keyboard instrument with cartridges of sampled instruments. The Moody Blues used this successfully in their sixties ‘symphonic’ rock music, despite its sluggish mechanism. Roland also used tape cartridges, this time as a loop, providing very long delays in some of their effects devices.

At the BBC Radiophonic Workshop, established in 1958 to explore the possibilities of musique concrète, sound montages were created using several tape machines and an audio multiplexer. This specially constructed device contained a circle of fixed capacitor vanes, connected to the outputs of tape recorders. The multiplexer’s output came from a rotating vane, driven by a variable-speed motor. As this turned, each signal was heard in sequence, one sound fading into the next.
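
The fading action can be sketched as overlapping gain curves, one per input, indexed by the rotor angle. The raised-cosine shape below is an assumption made for illustration, not a measurement of the real device.

    import math

    # As the rotating vane passes each fixed vane it picks up that machine's
    # signal, which fades out as the next one fades in. The coupling curve
    # used here is an assumed raised cosine.
    NUM_INPUTS = 8

    def input_gains(rotor_angle_deg: float):
        """Gain of each input for a given rotor position (0-360 degrees)."""
        spacing = 360 / NUM_INPUTS
        gains = []
        for i in range(NUM_INPUTS):
            # angular distance from this input's fixed vane, wrapped to +/-180
            offset = (rotor_angle_deg - i * spacing + 180) % 360 - 180
            if abs(offset) < spacing:
                gains.append(0.5 * (1 + math.cos(math.pi * offset / spacing)))
            else:
                gains.append(0.0)
        return gains

    print([round(g, 2) for g in input_gains(22.5)])   # halfway between inputs 0 and 1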

Early Sound Processing

Reverberation was popular, disguising small blemishes and giving a consistent atmosphere to a completed recording. Echo rooms, plates and springs were commonly used. Most echo springs were awful, although hitting them could often generate interesting sounds! One popular trick involved copying a tape backwards and adding reverb, then playing it forwards to give reverse echo.
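
A minimal sketch of the reverse-echo trick, using a crude decaying-noise impulse response as a stand-in for a real plate or spring:

    import numpy as np

    # Reverse the signal, apply a simple decaying-echo 'reverb', then reverse
    # again so the reflections build up *before* each sound instead of
    # trailing after it.
    fs = 8000
    dry = np.zeros(fs)
    dry[0] = 1.0                                   # a single click as the source

    t = np.arange(fs // 2) / fs
    impulse = np.random.randn(fs // 2) * np.exp(-6 * t)   # decaying noise 'reverb'

    reverse_echo = np.convolve(dry[::-1], impulse)[::-1]  # reverse, reverb, reverse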

Equalisers, preferably of the ‘graphic’ type, were much in vogue for musique concrète. Passive versions, consisting simply of coils and capacitors, often provided a remarkably high Q (quality factor), enabling dramatic changes to be made to any sound. And at the Radiophonic Workshop, the mechanical ‘Dalek’ voices for Doctor Who were created using a simple ring modulator, consisting of three transformers and a ring of four diodes. An untreated speech signal was connected to the main input, with a low-frequency signal (usually upwards of 15 Hz) applied to the carrier input.
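
In digital terms a ring modulator is simply a multiplication of the speech by the carrier, leaving only sum and difference frequencies. A minimal sketch, assuming a 30 Hz carrier in the spirit of the low-frequency signal described above:

    import numpy as np

    # A ring modulator in its simplest digital form: multiply the speech by
    # a carrier. The 30 Hz sine is only an assumption for illustration.
    fs = 44100
    t = np.arange(fs) / fs
    speech = np.random.randn(fs) * np.hanning(fs)   # stand-in for a voice signal
    carrier = np.sin(2 * np.pi * 30 * t)            # low-frequency carrier

    dalek = speech * carrier   # output contains sum and difference frequencies only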

Sequencing

The idea of recording a musical performance isn’t new. Fairground organs, orchestrions, player pianos and musical boxes used rolls, cards, drums and discs long before audio recording appeared. Despite modern technology, the principles remain unchanged: sequencing creates a record of a musician’s performance (the notes and their timing), not the actual sound of the music. Before the phonograph and the gramophone, this was the only way to record anything.

Mechanical Sequencers

Digital sequencing began with the clockwork musical box, a development of the striking clock. This usually contained a rotating cylinder whose tooled projections struck a comb-like metal plate. During the nineteenth century punched metal disks, paper rolls and cards were used extensively. Punched cards, for example, were used in weaving and lace-making. But the most popular application was the pianola or player piano, mainly because few people could play an instrument. Today, some of these rolls and cards constitute the only true record of how music was played in Victorian times.

Similar technology was used in the barrel-organ, involving a pin-studded cylinder that was turned by hand and linked to a mechanism that opened organ pipes and struck metal tongues. This idea was developed into the steam-powered fairground organ, which generated a huge range of sounds, whilst the orchestrion was designed to imitate the instruments of an entire orchestra. Unfortunately, nineteenth-century technology lacked electronics. This meant, for example, that Charles Babbage’s incredibly advanced calculating machine, although feasible in theory, couldn’t be created.

Surprisingly, punched cards and paper tape were still used in computers of the sixties and seventies, some early mainframe machines being programmed via a teletype machine and paper tape. Cards were often punched using a crude form of hand-puncher, frequently requiring obscure key combinations for some characters. The most common card was the Hollerith card, invented by Herman Hollerith (1860-1929) and originally used in the US census of 1890. In its modern form it eventually had 12 rows and 80 columns of possible hole locations. This heritage survives today as text formatted with 80 characters per line. Hollerith’s company eventually became part of IBM.

Analogue Electronic Sequencers

The earliest sequencing device used in electronic music was the step sequencer, as used with a voltage-controlled synthesiser. It required the musician to enter both the pitch and duration of each note and then to step on to the next event. It worked, but couldn’t be described as ‘user friendly’.
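
A sketch of the idea: the sequence is just a fixed list of steps, each holding a pitch (as a control voltage) and a gate time, played strictly in order. The 1 volt-per-octave convention and the values below are assumptions for illustration.

    # A step sequencer reduced to its essentials: for each step, output a
    # control voltage and hold the gate open, then move on to the next event.
    import time

    steps = [
        {"cv_volts": 0.000, "gate_s": 0.25},   # C
        {"cv_volts": 0.250, "gate_s": 0.25},   # D# (3 semitones = 3/12 volt)
        {"cv_volts": 0.583, "gate_s": 0.50},   # G  (7 semitones)
        {"cv_volts": 1.000, "gate_s": 0.25},   # C, an octave up
    ]

    def play(sequence):
        for step in sequence:
            print(f"CV {step['cv_volts']:.3f} V, gate open {step['gate_s']} s")
            time.sleep(step["gate_s"])          # stand-in for holding the gate high

    play(steps)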

In comparison, the Sequencer 256, as designed by EMS for the Synthi 100 synthesiser, was highly sophisticated. It recorded a real-time performance on three ‘layers’, each conveying a control voltage (CV), defining the note played, and a gate signal, indicating how long the key had been held. Despite the limitations of its restricted memory (the composer had to compromise between timing accuracy and the length of a sequence), it vastly expanded the scope of the analogue synthesiser.
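
The memory trade-off can be sketched as follows; the slot count (taken loosely from the machine’s name) and the clock rates are assumptions, not the actual specification.

    # With a fixed number of memory slots, a finer timing clock fills the
    # memory sooner, so a long piece forces a coarser clock. Figures below
    # are assumptions made purely for illustration.
    MEMORY_SLOTS = 256

    for clock_hz in (50, 20, 10, 5):
        max_length_s = MEMORY_SLOTS / clock_hz
        print(f"clock {clock_hz:3d} Hz -> resolution {1000 / clock_hz:5.1f} ms, "
              f"maximum sequence {max_length_s:5.1f} s")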

Ken Gale’s Wavemaker range of equipment included one of the first digital devices. His Digital Recording Module (DRM) took the output from a digital musical keyboard and recorded the performance on an audio tape recorder by using frequency shift keying (FSK). The material could be ‘bounced’ from one track of the tape to another, whilst adding further performances.
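
FSK simply maps each bit of the performance data onto one of two tones, which an ordinary audio recorder can store. A minimal sketch; the tone frequencies and bit rate are assumptions, not the DRM’s actual parameters.

    import numpy as np

    # Frequency shift keying: each bit becomes a burst of one of two tones.
    # Frequencies and bit rate are illustrative assumptions.
    fs = 44100
    bit_rate = 300
    f_zero, f_one = 1200, 2400     # tone for a 0 bit, tone for a 1 bit

    def fsk_encode(bits):
        samples_per_bit = fs // bit_rate
        t = np.arange(samples_per_bit) / fs
        tones = {0: np.sin(2 * np.pi * f_zero * t),
                 1: np.sin(2 * np.pi * f_one * t)}
        return np.concatenate([tones[b] for b in bits])

    signal = fsk_encode([1, 0, 1, 1, 0, 0, 1, 0])   # one byte of performance data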

The limitations of analogue synthesisers and associated sequencers were apparent to anyone who used them. Such problems were solved by the appearance of devices that contained a microprocessor.

MIDI

In the early eighties, a group of interested parties issued a specification for the Musical Instrument Digital Interface (MIDI), a system that allowed universal communication between instruments, computers and other devices. The basis for this standard was purely commercial, and within a few months of its introduction the market was flooded with new and affordable products.

One of the first MIDI sequencers was Yamaha’s QX1. This incorporated a single MIDI input for a keyboard and eight individual MIDI outputs for connecting instruments. Unfortunately, all the sequencing operations had to be monitored through a small liquid crystal display (LCD).

But one machine was to do much more for MIDI and sequencing than anything so far. In January of 1984, Apple Computer Inc unveiled its latest creation, the Macintosh desktop computer.

Computer Musicians

The use of computers was tentative at first. And early MIDI interfaces were very simple, conveying a single MIDI circuit over either or both serial ports of the computer. Pioneering software included Performer, a sequencing application by Mark of the Unicorn, and Composer, designed for working on a musical script. Sequencing software allowed a musician to record a real-time keyboard performance as a sequence in the computer. This could then be edited freely, saved onto disk in various versions, and finally employed to ‘play’ the synthesisers. The opportunities for editing were almost endless. For example, the length or pitch of any note could be changed, or sections of music could be reversed, repeated or inserted at another point, or the tempo could be changed.
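
Two of these edits can be sketched on a simple event list; the event format below is an assumption for illustration and not that of any particular sequencer.

    # A recorded performance reduced to a list of note events, with two of
    # the edits described above: transposing and changing the tempo.
    performance = [
        {"time_s": 0.0, "note": 60, "velocity": 90, "length_s": 0.5},
        {"time_s": 0.5, "note": 64, "velocity": 80, "length_s": 0.5},
        {"time_s": 1.0, "note": 67, "velocity": 85, "length_s": 1.0},
    ]

    def transpose(events, semitones):
        return [{**e, "note": e["note"] + semitones} for e in events]

    def change_tempo(events, ratio):
        """ratio > 1 plays faster: times and lengths shrink, pitches are untouched."""
        return [{**e, "time_s": e["time_s"] / ratio, "length_s": e["length_s"] / ratio}
                for e in events]

    edited = change_tempo(transpose(performance, 12), 1.25)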

The diagram below shows a typical MIDI installation, complete with optional MIDI Thru box and MIDI merger. Although the Thru outputs of most MIDI devices could be ‘looped’ to another device, a Thru box prevented the timing problems that could be caused by such a connection. A merger, on the other hand, simply combined MIDI data from the outputs of several devices.

The way music was presented on a Macintosh computer varied with the sequencing application. And because of the complexity of a musical score, the full script was rarely displayed. Instead, notes were shown as bars (of varying lengths to match their duration) or as a list of MIDI events. Almost all sequencers could save a sequence onto disk as a MIDI file. This kind of file didn’t contain any information for a particular sequencer, just pure MIDI data that described the sequence itself.
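
A minimal sketch of such a file, writing a single note as a type 0 Standard MIDI File. It is written from the standard SMF layout, but treat it as an illustration rather than a reference implementation.

    import struct

    # A type 0 Standard MIDI File containing one note, to show that such a
    # file is just timed MIDI data with no sequencer-specific information.
    TICKS_PER_QUARTER = 480

    def var_len(value):
        """MIDI variable-length quantity: 7 bits per byte, high bit = 'more'."""
        out = [value & 0x7F]
        while value > 0x7F:
            value >>= 7
            out.append((value & 0x7F) | 0x80)
        return bytes(reversed(out))

    track = (var_len(0) + bytes([0x90, 60, 64]) +                # note on, middle C
             var_len(TICKS_PER_QUARTER) + bytes([0x80, 60, 0]) + # note off, one beat later
             var_len(0) + bytes([0xFF, 0x2F, 0x00]))             # end of track

    with open("one_note.mid", "wb") as f:
        f.write(b"MThd" + struct.pack(">IHHH", 6, 0, 1, TICKS_PER_QUARTER))
        f.write(b"MTrk" + struct.pack(">I", len(track)) + track)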

But MIDI could do so much more. In fact, it could provide automation for an entire studio, especially since effects devices could also be controlled via MIDI. System Exclusive (Sysex) messages could tap into synthesisers and samplers, allowing sounds and samples to be manipulated and modified with a software sound editor. The Yamaha DMP7, an 8-channel MIDI-controlled mixer, used standard MIDI note and controller messages for access to every control, also allowing the user to define a parameter list, assigning specific MIDI messages to particular controls.
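
At the byte level such automation is just a stream of Control Change messages; the sketch below builds a simple fader move, assuming controller 7 (channel volume) rather than the DMP7’s own parameter mapping.

    # Mixer automation at the byte level: a Control Change message is three
    # bytes, so moving a fader from a sequencer just means sending a stream
    # of these. Controller 7 (channel volume) is assumed for illustration.
    def control_change(channel: int, controller: int, value: int) -> bytes:
        """Build a raw MIDI Control Change message (controller and value 0-127,
        channel 0-15)."""
        return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

    # Fade channel 1's volume down over eight steps.
    fade = [control_change(0, 7, v) for v in range(100, 20, -10)]
    print([msg.hex() for msg in fade])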

Such products and sequencers pushed the computer and MIDI interfaces to the limit. Faster machines, such as the Mac Quadra, arrived in the nineties, as well as multi-channel MIDI interfaces, such as the Opcode Studio 5. These new computers had NuBus slots, suitable for sampling and digital audio recording cards, using either the Mac’s own hard disk or a separate SCSI drive.

These developments led to a greater integration of the electronic music studio, with sounds recorded, edited and then ‘bolted into’ a MIDI sequence. The final result was desktop composing, as provocative to the music industry as desktop publishing was to printing. And, although MIDI is being pushed to the periphery of such advances, its robust design assures it a challenging future.

©Ray White 2004.