Before looking at the history of recording, it’s necessary to consider the underlying principles, which involve storing a replica of the sound vibrations in one form or another. This can be done using mechanical devices, magnetic materials or optical systems, sometimes using media in the form of disk, tape or film, and using analogue or digital technology. Once recorded, such material can be played back via loudspeakers or headphones to create an illusion of the original sounds.
Modern recording equipment uses one or more microphones to convert the original vibrations of the air into an electrical signal that can be blended with other signals in an audio mixer. Other signals are created in purely electronic form using synthesisers, computers and other devices.
The performance of a recording device is measured by its dynamic range (the capacity to accommodate sounds of various intensities) and frequency response (the ability to reproduce high and low frequencies). In some applications, such as public address (PA) systems, these parameters are intentionally restricted, although high-fidelity (hi-fi) systems are expected to reproduce the original sound as closely as possible.
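Dynamic range is normally expressed in decibels, a logarithmic measure of the ratio between the loudest and quietest sounds a system can handle. A minimal sketch of the arithmetic (the function name is purely illustrative):

```python
import math

def dynamic_range_db(loudest, quietest):
    """Ratio between two signal amplitudes, expressed in decibels."""
    return 20 * math.log10(loudest / quietest)

# A system spanning amplitudes from 1 to 10,000 units has an 80 dB range.
print(round(dynamic_range_db(10_000, 1)))  # 80
```

The logarithmic scale mirrors the ear’s own response, which is why both hi-fi specifications and the digital systems described later quote their performance in decibels.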
The effectiveness of a recording can be enhanced by using multiple microphones and loudspeakers for recording and playback. Although a single microphone, producing monophonic (mono) sound, may be sufficient, most modern equipment uses two channels to create stereophonic (stereo) sound, or even more for quadraphonic reproduction or surround sound.
Common formats for recording sound include:

Mechanical: as used in a wax cylinder or phonograph, in which a stylus is made to vibrate during recording, creating a varying groove on the surface of the medium. During playback a similar stylus picks up these variations, so reproducing the original sounds.

Magnetic: as in tape recording, where a paper or plastic tape, coated with a magnetic material, is passed in front of a recording head, which magnetises the material in proportion to the sound intensity. During playback a separate playback head can be used.

Optical (film): as in cinema film, where a narrow slit, whose width changes with the intensity of the sound, varies the amount of light falling on the film. During playback a ‘sound lamp’ illuminates the film and the changing signal is detected by a photocell.

Optical (disk): as used in Compact Disc (CD), Digital Versatile Disc (DVD) and magneto-optical (MO) disks. The sound, usually in the form of digital data, is recorded as a series of optical variations that can be detected using a laser and an optical sensor.
Audio information can be conveyed in several ways, using analogue or digital techniques. In the former the volume of the recorded sound is related directly to the mechanical movement, magnetism or intensity of light produced by the medium. Sadly, such systems suffer from imperfections in the medium, such as dust or scratches on a disk or the granular nature of a magnetic recording tape, as well as from mechanical or electrical interference that can be produced by other devices.
Digital systems, which involve the use of binary codes, are better, especially since they usually incorporate error detection and error correction to ensure the near-perfect transmission of a signal.
In 1877, Thomas Edison made his first phonograph recording, beginning with the memorable words ‘Mary had a little lamb…’. Although Edison’s patent of 1878 covered both cylinders and disks, he concentrated on the cylinder, believing that it could be used as a dictation machine. The original device consisted of a pre-grooved cylinder wrapped in tinfoil and turned by hand, which was indented by means of a blunt needle attached to a thin diaphragm: this in turn was connected to a horn into which Edison spoke. The same mechanism was used to play back the recording from the cylinder.
The sound quality left much to be desired, leading to developments in the following decade, financed by Alexander Graham Bell, in which the foil was replaced by a wax cylinder, allowing a greater depth of cut and improved dynamic range. The hand-cranking mechanism was also replaced by an electric motor, ensuring the recording was at an even pitch.
The wax cylinder had two advantages: firstly, it could be ‘shaved’ to erase the recording for reuse and secondly it could be plated in metal so as to create a mould, allowing copies to be produced. The latter feature transformed the humble phonograph into a popular consumer product.
Although the wax cylinder was an improvement over its predecessor, its mass production wasn’t easy. Fortunately, in 1888, Emile Berliner demonstrated his gramophone, which used a flat disk containing a spiral groove, with the sound represented by side-to-side movements of the groove. In 1901 the Victor Talking Machine Company was established, producing spring-driven disk players and disks that could be cheaply pressed from metal moulds. Although cylinders produced better sound quality and remained in production until the twenties, by 1910 the gramophone had become the market leader.
Until the twenties, all recordings used acoustic technology, involving a horn, diaphragm and stylus. Sounds with a lot of energy, such as a human voice or brass instruments, reproduced easily, but those with less power, such as stringed instruments, didn’t fare so well. Although special horns were tried, they didn’t entirely solve the problem.
In 1906, Lee de Forest invented the Audion vacuum tube, which formed the basis of the valve amplifier. Bell Telephone Laboratories developed this technology to operate with electric microphones, which could be easily positioned to record low-level sounds. In 1929, the Radio Corporation of America (RCA) acquired the Victor Talking Machine Company, creating RCA Victor, later concentrating on radio, a medium that depressed the record industry for much of the thirties.
The use of amplification and electromagnetic loudspeakers, the latter patented by General Electric in 1928, allowed an auditorium to be easily filled with sound. And by 1935 Hollywood was using separate woofer and tweeter units to reproduce low and high frequency bands of sound.
The first stereo recordings were made in 1931 at the Bell Telephone Laboratories. The author remembers hearing pre-war stereo recordings at the BBC’s training centre at Evesham: unlike modern recordings these used up-and-down (hill and dale) movement for one channel and side-to-side (lateral) motion for the other, causing some distortion, although the results were still very impressive.
World War II caused a shortage of shellac, forcing manufacturers to explore other materials for making disks. The most successful was vinyl, a petroleum product that had a very low background noise.
Until then, only five minutes could be fitted on each side of a 12-inch (305 mm) disk. However, in 1948, CBS introduced the long-playing (LP) record, using micro-groove technology to give up to 20 minutes. This disk had closely-spaced 0.003-inch (0.076 mm) grooves, with a rotational speed reduced from 78 revolutions per minute (rev/min or rpm) to 33⅓ rev/min.
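The 20-minute figure can be cross-checked against the groove pitch and rotational speed. A rough sketch, in which the 2-inch width of the recorded band is an assumed figure, while the pitch and speed are those quoted above:

```python
pitch_in = 0.003          # distance between adjacent grooves, inches
speed_rpm = 100 / 3       # 33 1/3 revolutions per minute
recorded_band_in = 2.0    # assumed radial width of the recorded area, inches

revolutions = recorded_band_in / pitch_in   # ~667 turns of the spiral
minutes = revolutions / speed_rpm
print(f"{minutes:.0f} minutes per side")    # ~20 minutes
```

The same arithmetic shows why the old 78 rev/min coarse-groove disk, with a wider pitch and more than double the speed, could manage only around five minutes.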
Although the LP required a special player with a low-mass tone arm and a tracking force of under 0.5 ounces (14 g), it eventually received universal acceptance. And by the eighties most players were using a tracking force of just one gramme.
In 1949, RCA introduced a 7-inch (178 mm) format, which rotated at 45 rev/min and became known as the single. This was later supplemented by the 33⅓ rev/min extended play (EP) disk of the same size. As a result, most players had to operate at speeds of 78, 45 and 33⅓ rev/min, and required a turn-over cartridge that incorporated both standard and micro-groove styli.
Most electrical recordings employed equalisation to improve the frequency response of the recording. However, no universal standards existed until the arrival of the LP, when full frequency range recording (FFRR) became a reality.
The introduction of magnetic tape recording in around 1950 allowed recordings to be created and then edited or spliced together as required. Although dual-groove stereo LPs appeared in the early fifties, stereo wasn’t popular until the arrival of the single-groove version in 1957. Although there were experiments in the seventies with quadraphonic forms of surround sound, as well as binaural sound for headphones, none of the proposed systems were ever adopted.
The LP remained standard until overtaken by later technology, such as Compact Cassette, Compact Disc (CD) and Digital Versatile Disc (DVD). The latter, when used for video, accommodated true surround sound via the Dolby Digital system.
The use of magnetism for recording was originally described by Oberlin Smith in 1888, although it took until 1898 for Valdemar Poulsen to patent a recorder called the Telegraphone that used steel wire as the medium. During the twenties and thirties more advanced machines using 1⁄2-inch (12.7 mm) steel tape were developed, but the cost and weight of the media made such systems impractical.
One example was the Blattnerphone, which can be found at London’s Science Museum. Another such machine once lived in Room 13 at the BBC’s Maida Vale studios, prior to the area becoming part of the Radiophonic Workshop. To avoid the risk of being ensnared in flying metal, the engineers would shelter in an adjacent room as recording progressed. Joining the tape was also tricky, as this involved welding the metal on an anvil. And each reel of tape weighed 22 pounds (10 kg).
The idea of using a thin paper tape coated with iron powder was patented in Germany in 1928. BASF replaced the fragile paper with cellulose acetate film, and the flammable iron powder with finely-ground iron oxide. In 1936, AEG Telefunken created the Magnetophon tape recorder to use the new tape, although the quality was only suitable for speech.
In 1939, Walter Weber, while experimenting with the Magnetophons used for German radio, discovered that a high-frequency signal added to the signal during recording improved the quality. The reason for the success of his AC bias was a bit of a mystery, although the extra ‘excitation’ of the magnetic particles by the bias signal clearly improved the linearity of the recording process.
So, during World War II, whilst radio reporters in the battlefields of Europe were preparing their recordings on old-fashioned shellac-coated disks, the Nazi propaganda machine was broadcasting material prepared on a modern tape recording machine.
At the end of hostilities, the arrival in Britain of the advanced Magnetophon came as a shock, persuading EMI to build the British Tape Recorder 1 or BTR/1, based mainly on the German design. As in some continental machines, the tape heads on this recorder faced away from the operator, making tape editing very tricky. This was corrected in the company’s next model, the massive BTR/2, many of which remained in service at the British Broadcasting Corporation (BBC) until the 1970s.
The US Army Signal Corps, on arrival in Germany, were also amazed at the sound quality of the Magnetophon. This feature, which made it possible to avoid repeated ‘live’ performances for radio in the eastern and western halves of the USA, persuaded the ABC radio network and Bing Crosby to approach Ampex, who eventually produced a suitable machine. As with other recorders of the time, this employed 1⁄4-inch (6.4 mm) tape running at 30 inches per second (in/s or ips).
Miniature valves made it possible for EMI’s later machine, the TR/90, to fit into a standard 19-inch rack or into a mobile trolley. All these professional machines incorporated three tape heads (erase, record and replay), allowing the user to check the quality ‘off tape’ whilst creating a recording.
In Britain, the most significant machines were those destined for the semi-professional or amateur market, notably the Ferrograph, beginning with the Series One, which remained almost unchanged until the Series Five and was followed by the more modern Series Six and Series Seven machines.
Indeed, it was the enthusiastic amateur and experimenter who often saw the real potential of the tape recorder. Although tape and a dextrous razor blade had originally been used for generating propaganda, they could also be employed creatively to change the nature of recorded sound.
As recording tape improved it was possible to reduce tape speeds, always in conjunction with appropriate equalisation. As time progressed, speeds were repeatedly halved, to 15, 7½, 3¾, 1⅞ and finally 15⁄16 in/s, corresponding to 38, 19, 9.5, 4.76 and 2.38 cm/s. In addition, extra parallel tracks were added, firstly in the two-track format used for professional stereo recordings and secondly in the four-track bidirectional stereo format, as devised for domestic use in 1955.
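This progression of speeds, and the metric equivalents quoted above, follow from repeated halving of the professional 30 in/s and the standard 2.54 cm-per-inch conversion:

```python
# Successive halvings of tape speed from the professional 30 in/s,
# with the metric equivalents (1 inch = 2.54 cm).
speeds_ips = [30 / 2**n for n in range(1, 6)]   # 15, 7.5, 3.75, 1.875, 0.9375
for ips in speeds_ips:
    print(f"{ips:g} in/s = {ips * 2.54:.2f} cm/s")
```

Each halving doubled the playing time of a reel, at the cost of high-frequency response, which is why the slowest speeds were confined to speech and dictation use.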
The use of wider tapes allowed multi-track recorders to be developed from the sixties onwards, including the 1-inch (25.4 mm) 8-track format, and the 2-inch (50.8 mm) 16-, 24- and 32-track formats.
In 1964, Philips introduced the Compact Cassette, which avoided the need to thread a tape between the two spools and across the heads of a machine. The tape, 0.15 inch (3.8 mm) wide, travelled at 1⅞ in/s (4.76 cm/s), providing a playing time of one hour, later extended to 90 minutes or more. And by 1970 the Compact Cassette had developed into a stereo hi-fi format, while the smaller Micro Cassette, running at 15⁄16 in/s (2.38 cm/s), was adopted for use in dictation machines.
The Compact Cassette became incredibly popular, reaching its zenith with the introduction of Sony’s Walkman player in 1980, which provided the listener with stereo sound at any location.
Digital technology, in which a signal is represented as a sequence of binary codes, was developed by the Bell Telephone Laboratories in the fifties and sixties, but it only became viable in the 1970s with the arrival of the integrated circuit (IC), which allowed the necessary circuitry to be miniaturised.
The earliest attempts at digital recording, made by Sony and the Victor Company of Japan (JVC), involved the use of video recorders and digital converters. Sony’s PCM-F1 converter of 1981 allowed such recordings to be made on a consumer video cassette recorder (VCR), but the company’s Digital Audio Tape (DAT) recorder of 1987 recorded sound directly in digital form.
Although originally designed as a domestic format, DAT eventually became a professional recording system. It used a 16-bit coding system, similar to Compact Disc (see below), recording the data onto a 4 mm wide tape. It employed helical scan technology, as used in standard video recorders.
In the mid-seventies Philips and MCA Laboratories introduced the LaserVision videodisc, whose optical principles formed the basis of the Compact Disc (CD). Developed by Philips and Sony and launched in 1983, the CD was a 4.7-inch (120 mm) disk containing up to 74 or 80 minutes of music. Other variations of the format followed, including the 3-inch (76 mm) mini-CD, which accommodated 20 minutes of sound, and CD-Video (CD-V), containing 20 minutes of music and 5 minutes of video, as well as the now-familiar CD-ROM and CD-Interactive (CD-I) computer formats.
Digital audio systems usually incorporated error detection and error correction systems, so as to avoid any disruptions to the signal caused by flaws or other damage to the recording or media. The simplest detection method involved the calculation of a check value from the data itself, which would then be cross-checked during playback. Data samples were also interleaved, so that adjacent samples were no longer next to each other, with the original order restored during playback. This caused any burst of errors to be spread over a greater number of samples, making them easier to correct.
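The interleaving idea can be sketched as follows. The function names are hypothetical, and real systems (such as the cross-interleaved coding used on CD) are considerably more elaborate:

```python
def interleave(samples, depth):
    """Write samples into `depth` rows column-by-column, then read row-by-row,
    so that originally adjacent samples end up far apart on the medium."""
    rows = [samples[i::depth] for i in range(depth)]
    return [s for row in rows for s in row]

def deinterleave(data, depth):
    """Invert interleave(): restore the original sample order.
    Assumes len(data) is divisible by depth."""
    cols = len(data) // depth
    out = [0] * len(data)
    for i in range(depth):
        out[i::depth] = data[i * cols:(i + 1) * cols]
    return out

def checksum(block):
    """Simple additive check value stored alongside each block."""
    return sum(block) % 65536

samples = list(range(12))
stored = interleave(samples, 3)
assert deinterleave(stored, 3) == samples

# A burst of three damaged values on the medium...
damaged = stored[:]
damaged[4:7] = [None, None, None]
# ...ends up scattered after deinterleaving, never two in a row.
print(deinterleave(damaged, 3))  # [0, None, 2, 3, None, 5, 6, None, 8, 9, 10, 11]
```

Spreading a burst out in this way means each small error-correcting block contains at most one damaged sample, which is exactly the situation simple correction codes can handle.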
Compact Disc and earlier systems all used pulse code modulation (PCM), in which the analogue signal was sampled at a regular rate, often 44.1 kHz, stored in a sample and hold circuit and then quantised into binary code. The usual 16-bit PCM system accommodated 65,536 possible signal levels, equating to a data rate of around 1.4 megabits per second (Mbit/s).
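The quoted figures follow directly from these parameters. A minimal sketch of the arithmetic, with a simple quantiser that is illustrative rather than a model of any particular converter:

```python
SAMPLE_RATE = 44_100   # samples per second, per channel
BITS = 16              # bits per sample
CHANNELS = 2           # stereo

levels = 2 ** BITS                        # 65,536 quantisation levels
bit_rate = SAMPLE_RATE * BITS * CHANNELS  # 1,411,200 bits per second
print(levels, f"{bit_rate / 1e6:.2f} Mbit/s")

def quantise(x, bits=BITS):
    """Map an analogue value in the range -1.0 to 1.0 to a signed integer code."""
    code = round(x * 2 ** (bits - 1))
    # Clamp to the representable range of a signed sample.
    return max(-(2 ** (bits - 1)), min(2 ** (bits - 1) - 1, code))
```

Rounding to the nearest of the 65,536 levels introduces a small quantisation error, which is heard as a low-level noise floor and sets the theoretical dynamic range of the format.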
By the early nineties, methods were developed that could reduce the data rate without significantly damaging the quality. Such systems used digital filters that divided the signal into several frequency bands, the intensity of each being compared with the ear’s sensitivity at these frequencies. This allowed the system to delete any information normally masked by sounds at other frequencies. This technique, known as perceptual coding, could reduce the data rate to around 400 kbit/s.
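A deliberately crude sketch of the masking idea, assuming a fixed set of per-band intensities and a single audibility threshold (real perceptual coders use detailed psychoacoustic models, and the 30 dB figure here is an arbitrary assumption):

```python
import math

def mask_bands(band_levels, threshold_db=30):
    """Zero any band more than threshold_db quieter than the loudest band,
    on the crude assumption that such bands are masked and inaudible."""
    peak = max(band_levels)
    kept = []
    for level in band_levels:
        if level > 0 and 20 * math.log10(peak / level) < threshold_db:
            kept.append(level)        # audible: bits must be spent on it
        else:
            kept.append(0)            # masked: no data need be stored
    return kept

# The two quiet bands and the silent band are discarded entirely.
print(mask_bands([1000, 500, 20, 3, 0, 800]))
```

Because the discarded bands need no bits at all, and the remaining bands can be coded with just enough precision to keep quantisation noise below the masking threshold, the overall data rate falls dramatically.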
This technology was used in the abortive Digital Compact Cassette (DCC) format devised by Philips and launched in 1992. This was similar to a Compact Cassette tape and ran at the same speed, employing a data rate of 384 kbit/s and eight narrow tracks across the tape. Fortunately, a standard DCC player could also play the older analogue Compact Cassette recordings.
Perceptual coding was subsequently used in Sony’s MiniDisc (MD) of 1993, a 2½-inch (64 mm) disk format. It was also used for digital radio broadcasting, for Dolby Digital surround sound on DVD-Video disks and for MP3 recordings that could be transferred over the Internet.
©Ray White 2004.