Speech synthesis confers a number of benefits to technology end users. It allows individuals with impaired eyesight to operate radios and computers. For those who cannot speak, and who may also have trouble using sign language, speech units such as the device employed by Stephen Hawking allow a person to communicate in ways unthinkable a century ago. For these individuals speech synthesizers play an integral role in adding quality to their day-to-day lives. On our local repeaters synth voices make announcements about nets and club events, and speech synths read the weather on the NWS frequencies. Beyond these specialized uses, one of the ways everyone can share in the joy of chip talk is through the medium of music.
The IBM 704 was the first computer to sing. It was introduced in 1954 and 140 units had been sold by 1960. The programming languages LISP and FORTRAN were first written for this large machine, which used vacuum tube logic circuitry. Bell Telephone Laboratories (BTL) physicist John Larry Kelly coaxed the 704 into singing Daisy Bell, aka A Bicycle Built for Two, using a vocoder program he wrote for the machine.
Lovely as the a cappella computer was, it was deemed in need of instrumental accompaniment. For this part of the song the expertise of fellow BTL employee Max Vernon Mathews was sought out. Max was an electrical engineer whose love of music led him to become a pioneer in electronic and computer music. In 1957 he wrote MUSIC, the first widely used computer program for sound generation, also run on the IBM 704. The accompaniment to the voice portion of Daisy Bell was programmed by Max in 1961 using the IBM 7090.
The IBM 7090 was the transistorized version of the 709 vacuum tube mainframe. The 7090 series was designed for "large-scale scientific and technological applications." The first of the 7090s was installed in late 1959 at a price tag of close to $3 million, which adjusted for inflation would be a whopping $23 million today. Besides its musical capabilities, the 7090's other accomplishments included being used for the control of the Mercury and Gemini space flights. IBM 7090s were also used by the Air Force for the Ballistic Missile Early Warning System up until the 1980s. Daniel Shanks and John Wrench used one to calculate the first 100,000 digits of pi. Yet none of the above uses compare, in my mind, to the beauty of the IBM 704 joining forces with the IBM 7090 on the song Daisy Bell.
Another computer, HAL 9000, still gets most of the credit for this electronic version of Daisy Bell. Arthur C. Clarke, author of 2001: A Space Odyssey, happened to be visiting his friend and colleague John Pierce at BTL when John Larry Kelly was making his demonstrations of speech synthesis with the IBM 704. He was so fascinated by this computational marvel that six years later he wrote that version of Daisy Bell into his screenplay, sung by HAL in the middle of the machine's climactic mental breakdown. The song also appeared on the vinyl platter "Music from Mathematics," put out by the Decca label a handful of decades ago.
Daisy Bell went on to have a notable reprise on the Commodore 64 when Christopher C. Capon wrote his program "Sing Song Serenade". The sounds for his version were played directly on the hardware by rapidly moving the read/write head of the floppy disk drive, which emitted the resulting audio.
Max Mathews continued to make strong contributions to the humanities in the realms of music and technology. In 1968 he developed Graphic 1, a graphical system that used a light pen for drawing figures that could be converted into sound. In 1970 Mathews developed GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment) with F. R. Moore. GROOVE was the first fully developed music synthesis system for interactive composition and real-time performance. It used 3C/Honeywell DDP-24 (or DDP-224) minicomputers.
An algorithm written by Mathews was used by Roger N. Shepard to synthesize Shepard tones. These tones (named after Roger) consist of a superposition of sine waves separated by octaves. When the base pitch of the tone is moved steadily upward or downward, the result is known as the Shepard scale. Playing this scale creates an auditory illusion of a tone that continually ascends or descends in pitch, yet seems to get no higher or lower. It is the musical version of a barber pole, or of the Penrose stair, a type of impossible object in geometry made famous in the drawing Ascending and Descending by M.C. Escher.
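For the curious, the construction is simple enough to sketch in a few lines of Python. This is a rough illustration, not Mathews' original algorithm; the octave count, envelope width, and sample rate are arbitrary choices of mine:

```python
import numpy as np

def shepard_tone(base_freq, sample_rate=44100, duration=1.0, num_octaves=6):
    """Superpose sine waves an octave apart, weighted by a bell-shaped
    envelope over log-frequency so components fade in and out at the
    spectral edges -- the trick behind the endless-glissando illusion."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    center = np.log2(base_freq) + num_octaves / 2  # peak of the loudness envelope
    tone = np.zeros_like(t)
    for k in range(num_octaves):
        f = base_freq * 2**k
        # Gaussian weight in log-frequency: loudest near the center octave
        w = np.exp(-0.5 * ((np.log2(f) - center) / 1.5) ** 2)
        tone += w * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))  # normalize to +/- 1
```

Sweeping base_freq upward one semitone at a time and looping back an octave later produces the ascending Shepard scale; the envelope hides the wrap-around.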
Max also made a controller, called the Radio-Baton (or Radiodrum), used to conduct and play electronic music. Developed at BTL in the 1980s, it was originally a kind of three-dimensional mouse. The device has no inherent sound of its own, but produces control signals that are used to trigger sounds, effects and the like. The Radio-Baton is similar to a theremin. Capacitive sensing is used to locate the position of the conductor's baton, or the mallets in the case of the drum. The two mallets are antennas transmitting on slightly different frequencies. The drum surface, also electronic, acts as another set of antennas. The combination of these antenna signals is used to derive X, Y and Z coordinates, which are interpreted according to the assigned musical parameters.
Many of the capabilities of these mainframe musical programs now live on in the program Max, which runs on an ordinary laptop.
Besides the use of Daisy Bell in the soundtrack for 2001, director Stanley Kubrick used a wide range of work by modern composers. The piece Atmosphères, written by György Ligeti in 1961, was used for the scenes of the monolith and those of deep space. Ligeti's earlier electronic work Artikulation, though not used in the film, shares an interesting connection to some of the ideas behind speech synthesis. Artikulation was composed in 1958 at the Studio for Electronic Music of West German Radio in Cologne with the help of Cornelius Cardew, an assistant of Karlheinz Stockhausen (whose works involving shortwave radios will be explored in time). The piece was composed as an imaginary conversation of multiple ongoing monologues, dialogues, and many voices in argument and chatter. In it Ligeti created a kind of artificial polyglot language full of strange whispers, enunciations and utterances.
Music from Mathematics: Played by IBM 7090 Computer to Digital Sound Transducer, Decca LP 9103.
György Ligeti: Continuum / Zehn Stücke für Bläserquintett / Artikulation / Glissandi / Etude für Orgel / Volumina, Wergo 60161, 1988.
This installment continues the exploration of the development of speech synthesis. So far I've investigated the invention of the Vocoder and how it was used in the SIGSALY program in WWII. In this episode I explore the other side of the speech synthesis coin, speech recognition. Without the ability of machines to recognize speech on the one hand and to synthesize it on the other, the wunderkinder of today's consumer electronics, Siri, Dragon and Alexa, would not be possible. With both in place humans can now speak, and sometimes yell with exasperation, to a wide range of interconnected devices, and our smart phones and Echo Dots will speak back to us. As developments in Artificial Intelligence take off, the little computer in your pocket may soon speak up for itself and yell back.
In a way it could be said that speech recognition systems began in the 19th century when sound waves were first converted into electrical signals. By 1932 Harvey Fletcher was researching the science of speech perception at that temple of telecommunications, Bell Laboratories. His contributions in this area showed that the features of speech are spread over a wide frequency range. He also developed the articulation index to quantify the quality of a speech channel. Articulation indexes are used in measuring the effectiveness of hearing aids and in industrial settings. Harvey is credited with the invention of an early electronic hearing aid, and is notable for overseeing the creation of the first stereophonic recordings and live stereo sound transmissions, for which he was dubbed the "father of stereophonic sound".
Interest in speech recognition didn't end with Fletcher. In 1952, over half a century before Siri or Alexa could respond to a voiced question of where to find the best noodle shop in town (or when the end of the world will be), AUDREY was on the scene. She derived her name from her special power: Automatic Digit Recognition. She was a collection of circuits capable of perceiving numbers spoken into an ordinary telephone. Due to the technological limits of the time she could only recognize the spoken digits "0" through "9". When the digits were uttered into the mic of the handset, AUDREY would respond by illuminating a corresponding bulb on the front panel of the device. It sounds simple, but this marvel was only achieved after overcoming steep technical hurdles.
S. Balashek, R. Biddulph, and K. H. Davis were the creators of AUDREY. One of the obstacles they faced was crafting a system capable of recognizing the same word when it is said with subtle variations. The spoken digit "7", for example, is subject to slight differences even when said multiple times by one person. Duration, intonation, quality, volume and timing all change the sound of the word with each utterance. To recognize speech amidst all these variables, AUDREY focused on the parts of each word that vary the least. In this way the machine did not need an exactly spoken match. Roberto Pieraccini put it this way, saying there is less variety "across different repetitions of the same sounds and words than across different repetitions of different sounds and words."
The exact matches came from the part of speech known as formants. A formant is a harmonic of a note that is augmented by the resonance of the vocal tract when speaking or singing. The information that humans require to distinguish speech sounds can be represented in a spectrogram by peaks in the amplitude/frequency spectrum. AUDREY could locate the formant in the spectrum of each utterance and use that to make a match.
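AUDREY tracked formants with analog circuits, but the core idea translates to a rough digital sketch: find the strongest peaks in the amplitude/frequency spectrum of a short frame of speech. The code below is my own illustration, not AUDREY's actual design:

```python
import numpy as np

def dominant_formants(frame, sample_rate=8000, num_peaks=2):
    """Rough digital stand-in for AUDREY's analog formant tracker:
    return the frequencies of the strongest spectral peaks in one
    windowed frame of speech, lowest frequency first."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # A bin counts as a peak if it exceeds both of its neighbors
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: spectrum[i], reverse=True)
    return sorted(freqs[i] for i in peaks[:num_peaks])
```

A digit recognizer in this spirit would compare the formant trajectory of an utterance against stored reference patterns for each digit, just as AUDREY matched against the tuned patterns in her memory.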
AUDREY also required that there be pauses between words. She couldn't isolate or separate individual words said in a string. In addition, designated talkers had to be assigned, talkers who could produce the specific formants, otherwise she might not recognize a digit. For each speaker the reference patterns of the formants, drawn electronically and stored within her memory, had to be fine-tuned. Yet despite all the limitations around her use, the researchers proved that building a machine capable of recognizing human speech wasn't a pipe dream.
AUDREY was expensive because she was state of the art and all analog. The six-foot high relay rack she kept occupied with all her vacuum-tube circuitry required a lot of upkeep. And she drew a lot of power that really hiked up the electric bill. The invention never really went anywhere in terms of being used as a tool in Ma Bell's vast monopoly. It could have been used by toll operators or wealthy customers of the telephone to voice dial, but manual dialing was simple, fast, and cheap.
Creating a system that had uniform recognition of words as uttered by multiple people was a dream that had to be fulfilled by other researchers down the line. They built on the sweat equity and foundation of those who went before. The fact that a machine can be made to decipher strange human vocalizations at all is sheer wonder. While others may be fond of Siri, Dragon and Alexa it is AUDREY who will always remain in my heart.
The Voice in the Machine: Building Computers That Understand Speech by Roberto Pieraccini, MIT Press, 2012
This edition of the Music of Radio continues to explore developments around electronically generated speech. Homer Dudley, an engineer and acoustics researcher who worked for Bell Telephone Laboratories (BTL), made significant contributions to this field beginning with his invention of the Vocoder and Voder. The development of these two instruments was detailed in last month's column. Now I will turn my attention to how the Vocoder was employed in encrypting the transmissions of high ranking officials during WWII for the SIGSALY program. SIGSALY, by the way, is simply a cover name for the system and is not an acronym.
In 1931 BTL had developed the A-3 scrambler that was used by Roosevelt and Churchill, but the security of this device was eventually compromised by Germans at a radio post in South Holland who had been intercepting the Prime Minister's telephone calls. The A-3 worked with the Trans-Atlantic Telephone by splitting speech up into different bands, but it wasn't difficult to reassemble, as the Germans proved in 1941, making the situation surrounding communications security intolerable to the Allies.
In 1942 the Army contracted BTL to assist with the communication problem and create "indestructible speech", speech that could withstand attempts at code breaking. From this effort the revolutionary 12-channel SIGSALY system was born. To create SIGSALY, workers sifted through over 80 patents in the general area of voice security. None of these fit the needs of the Allies, but Homer Dudley's Vocoder did, and it formed the basis of the system. For SIGSALY a twelve-channel Vocoder system was used. Ten of the channels measured the power of the voice signal in a portion of the voice frequency spectrum (generally 250-3000 Hz). Two channels were devoted to "pitch" information and to whether or not unvoiced (hiss) energy was present. The Vocoder enciphered the speech as it went out over phone or radio. To be deciphered at each end of the conversation an audio crypto-key was needed. This came in the form of vinyl records.
From the standpoint of music history it is interesting to note, as Dave Tompkins did in his book How to Wreck A Nice Beach: The Vocoder from WWII to Hip-hop, that the SIGSALY system employed two turntables alongside the microphone/telephone. The classified name for this vinyl part of the operation was SIGGRUV. The turntables were used to solve the problem of needing a cryptographic key. They played vinyl records produced by the Muzak Corporation, a company famous for the creation of elevator music. The sounds on these records weren't aimed at soothing weekend shoppers or people sitting in waiting rooms. Muzak had been contracted into pressing vinyl that contained random white noise, like channel 3 on an old television set. The noise was created by the output of very large mercury-rectifier tubes, four inches in diameter and over a foot high. These generated wideband thermal noise that was sampled every twenty milliseconds. The samples were then quantized into six levels of equal probability. The level information was converted into channels of a Frequency Shift Keyed audio tone signal recorded onto a vinyl master. From each master only three copies of a key segment were made. If these platters had been commercial entertainment masters, thousands would have been pressed from them. If any SIGGRUV vinyl still exists, and for security reasons it shouldn't, those grooves are critically rare.
It had to be ensured that no pattern could be detected, so the records had to be pure random noise. If the equipment had somehow been duplicated by the Axis powers, the communications would still be uncompromised, as they required the crypto key of the matching vinyl at each terminal. This made the transportation of these records, via armored truck, the most secure shipment of phonograph records since Edison invented the machine. Just as the masters were destroyed after making three keys, each vinyl key was only ever to be played once, as operators were instructed to destroy it after playing. The official instruction read, "The used project record should be cut-up and placed in an oven and reduced to a plastic biscuit of 'Vinylite'". As another precaution against the grooves falling into enemy hands, the turntables themselves had a self-destruct mechanism built into them that could be activated if a terminal was compromised. Thinking of all this sheds new light on the idea of a DJ battle.
Keeping the turntables at two different terminals across the globe synchronized was another technical hurdle that BTL overcame. If a needle jumped or the system went out of sync, only garbled speech was heard. At the agreed upon time, say 1200 GMT, operators listened for the click of the phonograph being cued to the first groove. The turntables were started by releasing a clutch on the synchronous motor that kept the turntable running at a precise speed. Fine adjustments were made using 50-Hertz phase shifters (Helmholtz coils) to account for delays in transmission time. The operators would listen for full quieting of the audio as synchronization was established. Oscilloscopes and HF receivers were also used to keep systems locked to international time.
A complete SIGSALY system contained about forty racks of heavy equipment composed of vacuum tubes, relays, synchronous motors, turntables, and custom made electromechanical equipment. In the pre-transistor era all of this gear required a heavy load of power so cooling systems were also required to keep it all from getting fried. The average weight of a set up was about 55 tons.
The system passed Alan Turing's inspection (if not his test), as he had been briefly involved with the project on the British side. On July 15, 1943 the inaugural connection was established between the Pentagon and a room in the basement below Selfridges Department Store in London. Eventually a total of twelve SIGSALY encipherment terminals were established, including some in Paris, Algiers, Manila, Guam, Australia and one on a barge that ended up in Tokyo Bay. In 1945 alone the system trafficked millions of words between the Allies.
To keep all of this operational a special division of the Army Signal Corps was set up, the 805th Signal Service Company. Training commenced in a school set up by BTL and members were sent to various locations. Their tasks required security clearances and a firm grasp of the cutting-edge technology they were tasked to operate and maintain. For every eight hours of operation the SIGSALY systems required sixteen hours of maintenance.
In putting the system together eight remarkable engineering "firsts" were achieved. A review conducted by the Institute of Electrical and Electronics Engineers in 1983 lists them as follows:
1. The first realization of enciphered telephony
2. The first quantized speech transmission
3. The first transmission of speech by Pulse Code Modulation (PCM)
4. The first use of companded PCM
5. The first examples of multilevel Frequency Shift Keying (FSK)
6. The first useful realization of speech bandwidth compression
7. The first use of FSK-FDM (Frequency Shift Keying-Frequency Division Multiplex) as a viable transmission method over a fading medium
8. The first use of a multilevel "eye pattern" to adjust the sampling intervals (a new, and important, instrumentation technique)
To do all these things required precision and refinement in new technology. SIGSALY has left the world with a rich inheritance that spans developments in cryptology, digital communications, and even left its mark on music.
How to Wreck A Nice Beach: The Vocoder from WWII to Hip-hop: The Machine Speaks by Dave Tompkins, Melville House, 2010
SIGSALY: The Start of the Digital Revolution by J.V. Boone and R.R. Peterson, retrieved at:
Who doesn't remember changing their voice as a kid by talking into a fan? Or sneaking off with balloons at a party or dance to inhale the helium and try to talk like a character from a cartoon? One year for Halloween I got a cheap voice changer toy that had three settings, and I remember playing with it for hours. But voice changers weren't always so cheap, and the original was room-sized instead of handheld. The initial reason behind its development had nothing to do with keeping kids amused and was not driven by aesthetic concerns. It was only after Ma Bell and the military had wrapped up their use for the Vocoder that it came to be appreciated for its musical qualities, first by experimental electronic musicians, and later by pop, rock and rap artists. The next few editions of the Music of Radio series delve into the story of electronic speech synthesis, from the Vocoder, to the Voder, and on to the first text-to-speech computer programs written for gargantuan mainframes. It takes us deep into the stacks of the Bell Laboratory Archives and into the belly of WWII crypto communications before emerging in the 1960s and '70s, when the stage was set for mind-melting explorations in sonic psychedelia. Just as the Vocoder is still being used for artistic effects, the original ideas behind it, compression and bandwidth reduction, continue to be used in new hardware and software applications for radio and telecommunications.
Homer Dudley, the inventor of the Vocoder, was an electronic and acoustic engineer whose primary area of focus revolved around the idea that human speech is fundamentally a form of radio communication. In his paper The Carrier Nature of Speech he wrote that "speech is like a radio wave in that information is transmitted over a suitably chosen carrier." This realization came to Dudley in October of 1928 while he was laid up in a Manhattan hospital bed. Discoveries are often made from playfully messing around with things, either in horseplay or boredom, and Dudley was keeping himself entertained just as a kid might, making weird sounds with his voice by changing the shape of his mouth. He had the insight that his vocal cords were acting as a transmitter of a periodic waveform. The nose and throat were the resonating filters while the mouth and tongue shaped the harmonic content, or formants to use the linguistic lingo. He also observed that the frequencies of his voice vibrated at a faster rate than the mouth itself moved.
These insights went on to have implications for the work he pursued at Bell Laboratories, a true idea factory, where money and resources were thrown at any old project that might bear the AT&T monopoly some form of fruit or further advantage in its already sprawling playground of wires and exchanges. Once recovered and back at work, Homer thought his discovery might have an application in the area of compression, and he made it his ambition to free up some of the phone company's precious bandwidth, hoping to pack more conversations onto the copper lines. He was given a corner and allowed to go work in it, devoting himself to his obsession.
He exploited his research in the invention of the Vocoder, or VOice CODER, first demonstrated at Harvard in 1936. It works by measuring how the spectral characteristics of speech change over time. The signal going into the mic is divided by filters into a number of frequency bands. The energy present in each band gives a representation of the spectral envelope. This allows the Vocoder to reduce the information needed to store speech to a series of numbers. On the output end, to a speaker or headphone, the Vocoder reverses the process to synthesize speech electronically. Information about the instantaneous frequency of the original voice signal is discarded by the filters, giving the end result its unique robotic and dehumanized character. The amplitude of the modulator in each of the individual analysis bands generates a voltage that controls the amplifier in the corresponding carrier band. The frequency components of the modulated signal are thus mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands. Because the Vocoder does not employ a point-by-point recreation of the wave, the bandwidth used for transmission can be significantly reduced.
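The analysis/synthesis idea can be sketched in a few dozen lines of modern code. This is an illustration under my own assumptions, not Dudley's analog circuitry: the band edges, smoothing window, and FFT-based filtering are arbitrary simplifications.

```python
import numpy as np

def band_filter(signal, low, high, sample_rate):
    """Crude band-pass via FFT masking (stands in for the analog filter bank)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    spectrum[(freqs < low) | (freqs >= high)] = 0
    return np.fft.irfft(spectrum, len(signal))

def envelope(signal, smooth=200):
    """Envelope follower: rectify, then smooth with a moving average."""
    kernel = np.ones(smooth) / smooth
    return np.convolve(np.abs(signal), kernel, mode="same")

def channel_vocoder(voice, carrier, sample_rate=8000, num_bands=10):
    """Impose the voice's band-by-band energy onto a carrier (a buzz
    or noise), as Dudley's analyzer/synthesizer pair did with circuits."""
    edges = np.linspace(250, 3000, num_bands + 1)  # the 250-3000 Hz speech range
    out = np.zeros(len(voice))
    for low, high in zip(edges[:-1], edges[1:]):
        v_band = band_filter(voice, low, high, sample_rate)
        c_band = band_filter(carrier, low, high, sample_rate)
        out += envelope(v_band) * c_band  # slow envelope modulates fast carrier
    return out / (np.max(np.abs(out)) + 1e-12)
```

Feeding in speech as the voice and a sawtooth or noise as the carrier yields the familiar robot-voice effect; only the ten slowly varying envelopes need to be transmitted, which is where the bandwidth saving comes from.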
There is usually an unvoiced band or sibilance channel on a Vocoder for frequencies that fall outside the analysis bands of typical talking but are still important in speech: the sounds that begin words with s, f, ch and other sibilants. These are mixed with the carrier output for increased clarity, resulting in speech that is recognizable but still roboticized. Some Vocoders have a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency.
To better demonstrate the speech synthesis ability of the decoder part of his invention, Dudley created another instrument, the Voder (Voice Operating Demonstrator). This was unveiled at the World's Fair in New York in 1939, where Ray Bradbury was among the attendees who witnessed it firsthand. The Voder synthesized speech by creating the electronic equivalent of a vocal tract. Oscillators and noise generators provided a source of pitched tone and hiss. A 10-band resonator filter controlled by a keyboard converted the tone and hiss into vowels, consonants and inflections. A set of extra keys allowed the operator to make plosive sounds such as "p" and "d" as well as the affricate sounds of "j" in "jaw" and "ch" in "cheese". Only after months of practice with this difficult machine could a trained operator produce something recognizable as speech.
At the World's Fair Mrs. Helen Harper, noted for her skill, led a group of twenty operators in demonstrations of the Voder, where people from the crowd could come up and ask the operator to make the Voder say something.
Homer Dudley had great success in his aim of reducing bandwidth with the Vocoder. It could chop voice frequencies into ten bands of 300 hertz each and transmit only their slowly varying envelopes, a significant reduction of the bandwidth a phone call required back in the day. Yet it never got used for that purpose. The equipment was too large to be practical to install in homes and offices across the country, even if it created more channels on the phone lines. For a time Dudley worked at marketing the Vocoder to Hollywood for use in audio special effects. It never made much of an impact there, as other voice changing devices such as the Sonovox started being used in radio jingles and in cartoons. Before it could be discovered by musicians, Homer Dudley's tool for voice compression had to be put into service during America's efforts in WWII, where it was used as part of the SIGSALY encryption program. The details surrounding the coding of the voices of MacArthur and Churchill will be explored in next month's column.
How to Wreck A Nice Beach: The Vocoder from WWII to Hip-hop: The Machine Speaks by Dave Tompkins, Melville House, 2010
The Carrier Nature of Speech by Homer Dudley, The Bell System Technical Journal, Vol. 19, No. 4, October 1940
Fundamentals of Speech Synthesis by Homer Dudley, Journal of the Audio Engineering Society, Vol. 3, No. 4, October 1955
Lev Theremin's skill at invention was not lost on the Soviet machine. Not long after his musical instrument was patented, the radio watchman security device it was based on started being employed to guard the treasures of gold and silver Lenin had plundered from church and clergy. The watchman was also being used to protect the state bank. Setting up and installing these early electronic traps took him away from his primary interest in scientific research. Just as he was approaching the limits of his frustration his mentor at the Institute gave him a new problem to solve, that of "distance vision" or the transmission and reception of moving images over the airwaves. The embryonic idea for television was in the air at the time but no one had figured out how to make it a reality. The race was on and the Soviets wanted to be first to crack the puzzle.
Having researched the issue extensively in the published literature, Lev was ready to apply the powers of his mind toward a solution. In the Soviet Union parts weren't always readily available. Some were smuggled in, and others had to be scavenged from flea markets, the latter a process very familiar to radio junkies. By 1925 he had created a prototype from his junk box using a rotating disk with mirrors that directed light onto a photocell. The received image had a resolution of sixteen lines, enough to make out the shape of an object or person but not identifiable details. Other inventors in Russia and abroad were also tackling the problem. Fine-tuning the instrument over the next year, he doubled the resolution to 32 lines and then, using interlaced scanning, to 64. Having created a rudimentary "Mechanism of Electric Distance Vision," he demonstrated the device and defended his thesis before students and faculty from the physics department at the Polytechnic Institute. Theremin had built the first functional television in Russia.
After this period Lev embarked for Europe and then America, where he lived for just over a decade engaging the public, generating interest in his musical instrument, and doing work with RCA. As Hitler gathered power he grew anxious about the encroaching war and returned home to the Soviet Union in 1938. He barely had time to settle back in when he was sent to the Kolyma gold mines for forced labor for the better part of a year. This was done as a way of breaking him, a fear tactic that could be held over his head if he didn't cooperate: do what we say or go back to the mines. The state had better uses for him. He was picked up by the police overlord Lavrenti Beria, who sent him to work in a secret laboratory that was part of the Gulag camp system. One of his first jobs was to build a radio beacon whose signals would help track down missing submarines, aircraft and smuggled cargo.
With WWII winding to a close the Cold War was dawning and Russia was on the offensive, trying to extend its reach and gather intelligence on such lighthearted subjects as the building of atomic bombs. In their efforts at organized espionage the Soviets sifted for all the data they could get from foreign consulates. Having succeeded with his beacon Lev was given another assignment. This time the goal wasn't to track down cargo or vehicles but to intercept U.S. secrets from inside Spaso House, the residence of the U.S. Ambassador. Failure to do the bidding of his boss would mean a return to the mines. His boss had high demands for the specifications of the bug Lev was to plant. The proposed system could have no microphones and no wires and was to be encased in something that didn't draw attention to itself.
The bug ended up being placed inside a wooden carving of the Great Seal of the United States and was delivered by a delegation of Soviet Pioneers (their version of the Boy Scouts) on July 4, 1945. Deep inside this "gesture of friendship" was a miniature metal cylinder with a nine-inch antenna tail. The device was passive and was not detected by the X-rays used at Spaso House in their routine scans. It only activated when a microwave beam of 330 MHz was directed at the seal from a nearby building. A metal plate inside the cylinder resonated as a tuned circuit when hit with the beam. Below the beak of the eagle the wood was thin enough to act as a diaphragm, and vibrations from it caused fluctuations in the capacitance between the plate and the diaphragm, creating a microphone. The modulations this produced were picked up by the antenna and then transmitted out to the receiver at a Soviet listening post. Using this judiciously, the Soviets were able to gain intelligence that aided them in a number of strategic decisions. The Great Seal bug is considered a grandfather of RFID technology.
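The tuned-circuit principle at work can be illustrated with the standard resonance formula, f = 1/(2*pi*sqrt(L*C)). The component values below are purely illustrative, chosen by me only so the numbers land near the 330 MHz illuminating beam; the real cavity's geometry was far more subtle:

```python
import math

def resonant_freq(inductance_h, capacitance_f):
    """Resonant frequency of a tuned (LC) circuit in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Assumed values for illustration only:
L = 100e-9   # 100 nH
C = 2.33e-12  # ~2.3 pF
f0 = resonant_freq(L, C)  # lands near the 330 MHz beam frequency

# Sound flexing the diaphragm changes C slightly, shifting the resonance;
# that shift modulates the re-radiated signal heard at the listening post.
f_shifted = resonant_freq(L, C * 1.01)  # a 1% capacitance change
```

A 1% change in capacitance shifts the resonance by roughly half a percent (since f scales as 1/sqrt(C)), which is the kind of modulation the Soviet receiver demodulated back into audio.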
This wasn't the last time Lev was asked to develop wireless eavesdropping technology. For the next job his overseers upped the ante: no device could be planted in the site targeted for surveillance. The operation was code-named Snowstorm. Lev used his interest in optics to figure out a method. Knowing that the window panes of a room vibrate slightly when people talk, he needed a way to detect and read those vibrations from a distance. Resonating glass contains many simultaneous harmonics, and it would be difficult to find the place of least distortion from which to extract a voice signal. Then there was the obstacle of reinterpreting the signal back into a speech pattern. Using an infrared beam focused on the optimum spot and catching its reflection in an interferometer with a photo element, he was able to pick up communications. Back at his monitoring post he used his equipment and skills to reduce the large amounts of noise in the signal.
A few years later Lev was released from his duties at the lab, but was kept on a tight leash and not allowed to leave Moscow.
HOW TO BUILD A THEREMIN FROM THREE AM RADIOS
For those amateurs wishing to build and play a theremin there are many commercial kits available on the market. However, a simple theremin can be built using just three AM radios. If you don't already have these lying around the house, they can easily be obtained from your local thrift store.
One of the radios will be a fixed transmitter, another a variable transmitter, and the third the receiver. The volume knobs on the fixed and variable transmitters can be turned all the way down; they are used only to produce the local oscillator signals that will be picked up by the receiver. The receiver should be set to an unused frequency in the upper range of the AM band, such as 1500 kHz. If that frequency is in use, tune to a nearby spot where only static is heard. The fixed and variable transmitters should then be tuned 455 kHz below the receiver, in this example to 1045 kHz. In a superheterodyne receiver the local oscillator typically runs 455 kHz (the common intermediate frequency, though there are variations) above the tuned station, so each transmitter radio's oscillator now radiates right on the receiver's frequency. As these frequencies are set the receiver should start to make a whistling sound: the production of a beat frequency.
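The tuning arithmetic above can be sketched in a few lines. This is a minimal model, assuming the common 455 kHz intermediate frequency; as noted, the exact value varies by model.

```python
# Superheterodyne tuning arithmetic for the three-radio theremin.
# A receiver's local oscillator runs one IF (typically 455 kHz) above
# the station it is tuned to, so a radio tuned 455 kHz below the
# receiver's dial setting radiates its oscillator right on the
# receiver's frequency.

IF_KHZ = 455  # common AM intermediate frequency; varies by model

def transmitter_dial(receiver_khz, if_khz=IF_KHZ):
    """Where to tune the 'transmitter' radios so their local
    oscillators land on the receiver's dial frequency."""
    return receiver_khz - if_khz

def beat_frequency(f1_khz, f2_khz):
    """Audible beat produced when two oscillator signals mix."""
    return abs(f1_khz - f2_khz)

receiver = 1500                   # kHz, quiet spot near the top of the band
tx = transmitter_dial(receiver)   # 1045 kHz, as in the example above
# Detuning the two transmitters 1 kHz from each other yields a 1 kHz tone:
tone = beat_frequency(1500.0, 1499.0)
```

The playable pitch is simply the difference between the two oscillator signals arriving at the receiver, which is why small hand-induced detunings become audible notes.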
The next step is to open up the variable radio and look for the variable tuning capacitor, often housed in white plastic with four screws. Find the oscillator terminal, the one that takes the station out of tune, and connect the antenna to it with an alligator clip or a soldered wire. Now the controls will have to be adjusted slightly again. Tune the fixed transmitter until the receiver starts whistling, and have fun playing with the sounds it creates.
Theremin: Ether Music and Espionage by Albert Glinsky, University of Illinois Press, 2000
How to Make a Basic Theremin by eltunene: https://app.box.com/s/kgdstzwaoc/1/17284427/181802859/1
The sound of the theremin has become synonymous with the spectral and spooky sci-fi horror flicks of the 1940s and '50s. Its trilling oscillations conjure up images of flying saucers made from hub caps and fishing line. When most folks hear and see the theremin they tend to think of it as little more than a novelty or scientific amusement. While it may have fallen out of favor in horror movie soundtracks, it has remained a mainstay of electronic music. It is distinguished among musical instruments as the only one played without touching the instrument itself. To the radio and electronics buff the theremin is worth exploring as a way of learning about electromagnetic fields and the creative use of the heterodyning effect for artistic purposes. Whether the quivering sounds the instrument pulls out of the ether appeal to a listener is a matter of individual preference.
The inventor of the theremin, or etherphone as it was first dubbed, was Lev Termen. He was born in Russia in 1896, just as Marconi was achieving wireless telegraphy. As a young boy he spent his time reading the family encyclopedia and was fascinated by physics and electricity. At five he started playing piano, and by nine he had taken up the cello, an instrument that had an important influence on the way theremins are played. After showing promise in class he was asked to do independent research with electricity in the school physics lab. There he began an earnest study of high-frequency currents and magnetic fields, alongside optics and astronomy. Around this time Lev met Abram Ioffe, a rising physicist under whom he would work in a variety of capacities. Yet his studies in atomic theory and music were overshadowed by the outbreak of WWI. In 1916 he was summoned by the draft and moved to Petrograd, where his electrical experience saved him from the front lines. He was placed in a military engineering school and landed in the Radio Technical Department, doing work on transmitters and overseeing the construction of a powerful and strategic radio station. In the course of the war the station had to be disassembled, and Lev oversaw the blowing up of its 120 meter antenna mast. Another wartime duty was teaching other students to become radio specialists.
As Lev's reputation grew among engineers and academic scientists he was eventually asked to work with Abram Ioffe at the Physico-Technical Institute, where he became supervisor of a high-frequency oscillations laboratory. Lev's first assignment was to study the crystal structure of various objects using X-rays. At this time he was also experimenting with hypnosis, and Ioffe suggested he take his findings on trance-induced subjects to the psychologist Ivan Pavlov. Though Lev preferred exploring atomic structures to radio work, Ioffe pushed him to work more systematically with radio technology. By the early 1920s Lev was busy thinking of novel uses for the audion tube.
His first project explored the human body's natural electrical capacitance to set up a simple burglar alarm circuit that he called the "radio watchman". The device used an audion as a transmitter at a specific high frequency feeding an antenna, which radiated a small field of about sixteen feet. The circuits were calibrated so that when a person walked into the radiation pattern the change in capacitance would close a contact switch and set off an audible signal. He was next asked to create a tool for measuring the dielectric constant of gases under a variety of conditions. For this he built a circuit with the gas placed between the two plates of a capacitor; changes in the gas, such as those caused by temperature, registered as the movement of a needle on a meter. The device was so sensitive it could be set off by the slightest movement of the hand. It was later refined by adding an audion oscillator and tuned circuit, with the oscillator's harmonics filtered out to leave a single frequency that could be monitored on headphones.
As Lev played with this tool he noticed again how his movements near the circuitry registered, not as variations in the density of the gas, but as a change in pitch. Closer to the capacitor the pitch became higher; further away it became lower. Shaking his hand created vibrato. His musical self, long dormant, came alive, and he began using the instrument to tease out fragments he loved from his classical repertoire. Word quickly traveled around the institute that Termen was playing music on a voltmeter. Ioffe encouraged Lev to refine what he had discovered (the capacitance of the body interacting with a circuit to change its frequency) into an instrument. To increase the range and gain greater control of the pitch he employed the heterodyning principle. He used two high-frequency oscillators generating the same note around 300 kHz, beyond human hearing. One frequency was fixed; the other was variable and could move out of sync with the first. He attached the variable circuit to a vertical antenna on the right-hand side of the instrument. The antenna served as one plate of a capacitor while the player's hand formed the other. The capacitance rose or fell depending on where the hand was in relation to the antenna. The two frequencies were then mixed into a beat frequency within the audible range. To play a song the hand is moved at various distances from the antenna, creating a series of beat-frequency notes.
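The heterodyne principle described above can be sketched numerically. The hand-to-frequency mapping below is a toy model invented for illustration, not Termen's circuit; only the mixing of two near-identical RF oscillators into an audible difference tone is the real technique.

```python
# Heterodyne pitch production as in Termen's etherphone: two RF
# oscillators near 300 kHz are mixed, and only their difference
# frequency falls in the audible range. The hand-distance model
# below is purely illustrative.

F_FIXED = 300_000.0   # Hz, fixed oscillator

def variable_osc(hand_distance_cm):
    """Toy model: the hand's capacitance pulls the variable
    oscillator lower as it approaches the pitch antenna."""
    return F_FIXED - 10_000.0 / hand_distance_cm

def pitch(hand_distance_cm):
    """Audible beat note: the difference of the two oscillators."""
    return abs(F_FIXED - variable_osc(hand_distance_cm))

far = pitch(50.0)    # hand far from the antenna: a low note
near = pitch(10.0)   # hand close in: a higher note
```

Neither oscillator alone is audible; only their difference is, which is why a fraction of a percent of detuning at 300 kHz spans the whole musical range.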
To refine his etherphone further he designed a horizontal loop antenna extending from the box at a right angle. Connected to carefully adjusted amplifier tubes and circuits, this antenna was used by the other hand to control volume. The newborn instrument had a range of four octaves and, as far as the motions of the two hands were concerned, was played in a manner similar to the cello. After playing the instrument for his mentor, he performed a concert in November of 1920 to an audience of spellbound physics students. In 1921 he filed for a Russian patent on the device.
Theremin: Ether Music and Espionage by Albert Glinsky, University of Illinois Press, 2000
No man works in a vacuum. Before the industry of radio got off the ground it had been customary for researchers to use each other's discoveries with complete abandon. As technical progress in wireless communication moved from scientific exploration to commercial development, financial assets came to be at stake, and rival inventors soon got involved in one of the great American pastimes: lawsuits. The self-styled "Father of Radio," Lee De Forest, was involved in a number of infringement controversies. The most famous of these involved his invention of the audion (from audio and ionize), an electronic amplifying vacuum tube.
It was Edison who first produced the ancestor of the audion. While working on the electric light bulb he noticed that one side of the carbon filament behaved in a way that blackened the glass. Investigating the problem, he inserted a small electrode and demonstrated that it operated only when connected to the positive side of a battery. Edison had formed a one-way valve. This electrical phenomenon made quite an impression on another experimenter, Dr. J. Ambrose Fleming, who brought the device back to life twenty years later when he realized it could be used as a radio wave detector.
At the time Fleming was working for Marconi as one of his advisers. It occurred to him that "if the plate of the Edison effect bulb were connected with the antenna, and the filament to the ground, and a telephone placed in the circuit, the frequencies would be so reduced that the receiver might register audibly the effect of the waves." Fleming made these adjustments. He also substituted a metal cylinder for Edison's flat plate. The sensitivity of the device was improved by increasing electronic emissions. This great idea in wireless communication was called the Fleming valve.
Fleming had patented this two-electrode tube in England in 1904 before giving the rights to the Marconi Company who took out American patents in 1905. Meanwhile Lee De Forest had read a report from a meeting of the Royal Society where Fleming had lectured on the operation of his detector. De Forest immediately began experimentation with the apparatus on his own and found himself dissatisfied. Between the cathode and anode he added a third element made up of a platinum grid that received current coming in from the antenna. This addition proved to transform the field of radio, setting powerful forces of electricity, as well as litigation, into motion.
The audion increased amplification on the receiving side, but radio enthusiasts were doubtful that the triode tube could be used with success as a transmitter. De Forest had been beset by financial troubles involving various scandals in the wireless world and was persuaded to sell his audion patent in 1913.
Edwin Howard Armstrong had been fascinated by radio since boyhood and began his career as an amateur by age fifteen. Some of his experimentation was with early audions that were not perfect vacuums (De Forest had mistakenly thought a little gas left inside was beneficial to reception). Armstrong took a close interest in how the audion worked and developed a keen scientific understanding of its principles and operation. By 1914, as a young man at Columbia University working alongside Professor Morecroft, he was using an oscillograph to make comprehensive studies based on his fresh and original ideas. In doing so he discovered the regenerative feedback principle, yet another revolution for the wireless industry. Armstrong showed that when feedback was increased beyond a certain point a vacuum tube would go into oscillation and could be used as a continuous-wave transmitter. He received a patent for the regenerative circuit.
De Forest in turn claimed he had already come up with the regenerative principle in his own lab, and so the lawsuits began, continuing for twenty years with victories that alternated as fast as electric current. Finally, in 1934, the Supreme Court decided the matter in De Forest's favor. Armstrong, however, would achieve lasting fame for his superheterodyne receiver, invented in 1918.
Around 1915 De Forest used heterodyning to create an instrument from his triode valve: the Audion Piano. It was the first musical instrument built with vacuum tubes, and nearly all electronic instruments after it were based on its general schematic up until the invention of the transistor.
The instrument consisted of a single keyboard manual and used one triode valve per octave. The set of keys allowed one monophonic note to be played per octave. Out of this limited palette it created variety by processing the audio signal through a series of resistors and capacitors to vary the timbre. The Audion Piano is also notable for its spatial effects, prefiguring the role electronics would play in the spatial movement of sound. The output could be sent to a number of speakers placed around the room to create an enveloping ambiance. De Forest later planned to build an improved version with separate tubes for each key giving it full polyphony, but it is not known if it was ever created.
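The resistor-capacitor timbre shaping mentioned above can be sketched with a first-order RC low-pass filter. The component values here are illustrative assumptions, not De Forest's actual network; the point is only how an RC section attenuates upper harmonics more than the fundamental.

```python
import math

# Timbre shaping with a resistor-capacitor network, as in the Audion
# Piano's tone circuits. A first-order RC low-pass rolls off content
# above its cutoff f_c = 1 / (2*pi*R*C). Values are illustrative.

def rc_cutoff(R, C):
    """Cutoff frequency (Hz) of a first-order RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * R * C)

def attenuation(f, fc):
    """Magnitude response of the filter at frequency f (0 to 1)."""
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

fc = rc_cutoff(10_000, 15.9e-9)    # 10 kohm, 15.9 nF: roughly 1 kHz
# A 3 kHz harmonic is attenuated far more than a 300 Hz fundamental,
# darkening the timbre of the note:
a_fund = attenuation(300, fc)
a_harm = attenuation(3000, fc)
```

Switching different resistor and capacitor combinations in and out changes the cutoff, and with it the brightness of the note, which is one plausible reading of how the instrument created variety from a single oscillator per octave.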
In his grandiose autobiography De Forest described his instrument as making "sounds resembling a violin, cello, woodwind, muted brass and other sounds resembling nothing ever heard from an orchestra or by the human ear up to that time – of the sort now often heard in nerve racking maniacal cacophonies of a lunatic swing band. Such tones led me to dub my new instrument the ‘Squawk-a-phone’….The Pitch of the notes is very easily regulated by changing the capacity or the inductance in the circuits, which can be easily effected by a sliding contact or simply by turning the knob of a condenser. In fact, the pitch of the notes can be changed by merely putting the finger on certain parts of the circuit. In this way very weird and beautiful effects can easily be obtained.”
In 1915 an Audion Piano concert was held for the National Electric Light Association. A reporter wrote the following: “Not only does De Forest detect with the Audion musical sounds silently sent by wireless from great distances, but he creates the music of a flute, a violin or the singing of a bird by pressing a button. The tone quality and the intensity are regulated by the resistors and by induction coils…You have doubtless heard the peculiar, plaintive notes of the Hawaiian ukulele, produced by the players sliding their fingers along the strings after they have been put in vibration. Now, this same effect, which can be weirdly pleasing when skilfully made, can be obtained with the musical Audion.”
Fast forward to 1960. The Russian immigrant and composer Vladimir Ussachevsky is doing deep work in the trenches of the Columbia-Princeton Electronic Music Center, one of the first electronic music studios anywhere. Its flagship piece of equipment was the RCA Mark II Sound Synthesizer, alongside banks of reel-to-reel machines and customized equipment. Ussachevsky received a commission from a group of amateur radio enthusiasts, the De Forest Pioneers, to create a piece in tribute to their namesake. In the studio Vladimir composed something evocative of the early days of radio and titled it "Wireless Fantasy". He recorded Morse code signals tapped out by early radio guru Ed G. Raser on an old spark generator in the W2ZL Historical Wireless Museum in Trenton, New Jersey. Among the signals used were: QST; DF, the station ID of Manhattan Beach Radio, a well-known early broadcaster with a range from Nova Scotia to the Caribbean; WA NY, for the Waldorf-Astoria station that started transmitting in 1910; and DOC DF, De Forest's own code nickname. The piece ends suitably with AR, for end of message, and GN, for good night. Woven into the various wireless sounds are strains of Wagner's Parsifal, treated with the studio equipment to sound like a shortwave transmission. Lee De Forest had played a recording of Parsifal, then heard for the first time outside of Germany, in his first musical broadcast.
The piece is available on the CD Vladimir Ussachevsky: Electronic and Acoustic Works 1957-1972, New World Records.
History of Radio to 1926 by Gleason L. Archer, The American Historical Society, 1938
The Father of Radio by Lee De Forest
The Music of Radio is a history series showcasing the relationships between radio and electronic music. This episode tunes in to sounds created by the sparks of a "wireless organ" designed by the Canadian amateur, early broadcaster and reverend Georges Désilets.
Georges was born to farming parents in 1866 in Nicolet, Quebec. As a young adult he joined the seminary, and in July of 1893, at the age of 27, he was ordained into the ministry. As part of his spiritual vocation he began to teach astronomy, chemistry and physics at the seminary, later focusing his instructional efforts on music and natural history. At the time it was very common for clergy to be involved in scientific and technological pursuits as hobbyists. Supported by a church or parish, these men were often set up in well-appointed homes, had access to books, and enjoyed the prime resource of any hobbyist: free time to tinker.
Somewhere around the year 1908 he became the Bishop of Nicolet. At this time Georges became active in running a library, as well as overseeing installations of electrical apparatus and photography works. During this period his keen and active mind turned to the field of radio-telegraphy. His amateur radio laboratory was assembled in the turret of the Bishopric. What ham wouldn't like to have a shack in a turret with an antenna on top?
From the turret he created the 9-AB broadcast radio station, which transmitted an hour-long orchestral and religious music program performed by musicians from the seminary once a week. Désilets needed an organ to accompany the choir and began experimenting with the use of electric sparks to create musical tones. This experimenting led to his invention of the Wireless Organ, and later to a number of other patents in the field of radio communications. In doing so he joined the ranks of other reverends who had made contributions to science and the humanities, including Rev. Edmund Cartwright, inventor of the power loom; Rev. George Garrett, builder of one of the first powered submarines; and Rev. John Michell, the first to conceive of black holes, among many others.
After the outbreak of WWI all non-government stations were closed down in Canada, and his organ and station fell into the dread state of radio silence. Yet he continued to be active in the radio community, penning articles, and no doubt working in his radio lab. In the September 1916 issue of Wireless Age he wrote of his instrument:
“Those who have heard it agree that it is real music. Chords are produced by pressing two or three keys, and if the feeding transformer can supply the necessary power we have surprising results and pleasant effects. ... Unhappily my station was closed last year on account of the war, and my organ is now silent. I hope to resume my experiments later on; meanwhile, I wish I could, for a time, live on the free soil of the United States, paradise of the wireless amateur."
His setup used the standard pre-tube method of a spark-gap alternator, with a number of studded spark-gap disks attached to a rotating cone drum. The ratio of intervals between the studs produced waveforms at a series of fixed pitches, and the instrument could only be heard over a wireless transmission, as no means of amplification was yet available. The first version had a range of only 1 1/2 octaves. After the war he lost no time getting back on the air and continued his work, attaching a keyboard from an organ and a larger spark drum that gave him a four octave range. He hit on the idea of a rheostat attached to a footswitch for controlling volume and expression. In his improved device he also fitted a home-brewed oscillation transformer capable of delivering "10,000 volts at an imprest potential of 110 volts, 30 cycles A.C."
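The pitch-from-studs principle behind the spark drum (and tonewheels generally) is simple enough to sketch. The rotation speed and stud counts below are illustrative assumptions, not figures from Désilets's machine.

```python
# Pitch from a rotating studded disk: each stud passing the contact
# fires one spark, so the tone frequency is revolutions per second
# times the stud count. Numbers here are illustrative.

def tone_hz(rpm, studs):
    """Frequency produced by a disk with `studs` studs at `rpm`."""
    return (rpm / 60.0) * studs

# At one drum speed, disks with different stud counts give a fixed
# set of pitch ratios -- hence the "prefixed" scale of the instrument:
rpm = 1200
a3 = tone_hz(rpm, 11)   # 220 Hz
a4 = tone_hz(rpm, 22)   # doubling the studs doubles the pitch: an octave
```

Since all disks share one shaft, the intervals between notes are locked in by the stud-count ratios, which is why the instrument played a fixed set of pitches rather than a continuous range.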
Georges's story shows how curiosity, coupled with need, determination, the will to tinker and a bit of free time, can unleash creative potential. While the spurious emissions of spark gaps may be frowned upon for the 21st century amateur, that need not stop us from sitting at the workbench, the mixing board of a music studio, or the controls of a transceiver, where imaginative sparks are allowed to fly and signals of inspiration can be received.
Wireless Age, September 1916
Antifragile: Things that Gain from Disorder, by Nassim Nicholas Taleb, 2012, Random House
The Music of Radio is a history series showcasing the relationships between radio and electronic music. This installment focuses on sounds created by arcs in the days before incandescent lighting cast its long and overshadowing glow.
The first source of electrical lighting was the arc lamp, which also became a means of producing an electrical form of singing. Invented by Humphry Davy in the first decade of the 19th century, the arc lamp created light from electricity passing between two carbon electrodes in free air. To ignite the lamp the rods were touched together, allowing a low voltage to strike the arc; they were then drawn apart to let the current flow across the gap. This first means of electrical lighting also became the first commercial use of electricity, beginning around 1850, but it didn't really take off until the 1870s when regular supplies of power became available.
Three major advances during the 1880s helped spread the adoption of the arc lamp. The first was a mechanism to automatically adjust the electrodes. The second was the placement of the arc in an enclosure, causing the carbon to burn at a slower rate. Last, salts and tiny amounts of metals were added to the carbon to create flames of greater intensity and different colors. A number of companies became involved in manufacturing these lamps, and they began to be used to light streets and other public places. Yet there was one feature many folks found disagreeable: audible power-frequency harmonics caused by the arc's negative resistance. Nikola Tesla was one of those who set to work on this problem of noise. In 1891 he received a patent for an alternator running at 10,000 cycles per second, intended to suppress the undesirable humming, hissing and howling emitted by the lamp.
Tesla's invention must have been impractical, or it simply never caught on, because over in London in 1899 the Victorian electrical engineer William Duddell was appointed to tackle the lamp's dissonant electrical noise. Duddell was an illuminated man and took a different angle than Nikola: instead of suppressing the sounds, he transformed them into music. In the course of his experimentation Duddell found that he could control the audible frequencies by connecting a tuned circuit, consisting of an inductor and capacitor, across the arc. The arc's negative resistance excited the tuned circuit into audio-frequency oscillation at its resonant frequency, which could be heard as a musical tone. Duddell used another of his inventions, the oscillograph, to analyze the conditions necessary for producing the oscillations. He demonstrated his invention before the London Institution of Electrical Engineers by wiring a keyboard to play different tones from the arc; being a patriotic fellow, he performed a rendition of God Save the Queen. His device came to be known as the "Singing Arc" and was one of the first electronic oscillators. It was noted that arc lamps on the same circuit in other buildings could also be made to sing, and the engineers speculated that music could be delivered over the lighting network, though this never became a reality. Duddell toured his instrument around Britain for a time, but the invention was never capitalized on and remained a novelty.
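The pitch of the singing arc is set by the resonant frequency of the tuned circuit across it, which is easy to compute. The component values below are illustrative, not Duddell's.

```python
import math

# Duddell's Singing Arc: an inductor and capacitor across the arc form
# a tuned circuit, and the arc's negative resistance drives it into
# oscillation at the resonant frequency f = 1/(2*pi*sqrt(L*C)).
# Component values here are illustrative.

def resonant_hz(L, C):
    """Resonant frequency of an LC tuned circuit, in hertz."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# A 0.1 H choke with a 0.25 uF capacitor sings near 1 kHz:
tone = resonant_hz(0.1, 0.25e-6)
# Halving the capacitance raises the pitch by a factor of sqrt(2),
# which is one way a keyboard could switch between notes:
higher = resonant_hz(0.1, 0.125e-6)
```

A keyboard that switches different capacitors (or inductors) across the arc thus selects different resonant frequencies, one note per key.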
Duddell's Singing Arc came very close to becoming a radio. Marconi's spark-gap transmitter had already been demonstrated in 1895, yet Duddell concluded his Singing Arc could not be pushed to produce radio frequencies instead of audio frequencies. In his arrangement the AC current in the condenser was smaller than the supplied DC current, so the arc never extinguished during an output cycle, and the high frequencies required for radiotelegraphy were out of reach. Had he managed to increase the frequency range and attach an antenna, his invention could have become a CW transmitter.
His oscillator was left for other experimenters to improve upon. This was done by the Danish physicists Valdemar Poulsen and P. O. Pedersen. In 1903 they patented the Poulsen arc wireless transmitter, the first to generate continuous waves and one of the first pieces of technology to transmit through amplitude modulation. Poulsen's version was used for radio work around the world into the 1920s, when it was replaced by vacuum tube transmitters.
Poulsen had previously demonstrated his inventive flair with the world's first magnetic recording device, the Telegraphone, at the Paris World's Fair in 1900. Applying his skills, he raised the efficiency and frequency of Duddell's Singing Arc up to 200 kilohertz. His method of oscillation made use of an AC current from the condenser large enough to extinguish the arc, but not so great that it caused the arc to restrike in the opposite direction. A third method of oscillation was used in spark-gap transmitters, where the arc is extinguished and may reignite when the condenser reverses, producing damped oscillations.
Operating a Poulsen arc transmitter required frequency-shift keying; on-off keying could not be used because of the time it took the arc to strike and re-stabilize. With the arc staying lit throughout operation, the key shifted the frequency by anywhere from one to five percent. The signal at the unwanted frequency was termed the compensation wave. Two frequencies were used: a "mark" with the key closed, and a "space" with the key open. This mode took up quite a chunk of bandwidth, as it also transmitted on the harmonics of both frequencies. The compensation-wave method of CW has been prohibited since around 1921. One way of working around it used a dummy antenna, or back shunt, tuned to the same frequency as the transmitter to absorb the load from the arc while keeping it running.
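The mark/space arithmetic above can be sketched in a couple of lines. The carrier frequency and the two percent shift are illustrative numbers within the one-to-five percent range the text describes.

```python
# Frequency-shift keying on a Poulsen arc: the arc stays lit, and the
# key shifts the radiated frequency by roughly 1-5%. The signal on the
# unwanted (open-key) frequency was the "compensation wave".
# Numbers below are illustrative.

def space_frequency(mark_khz, shift_percent):
    """Open-key ("space") frequency for a given mark frequency."""
    return mark_khz * (1.0 + shift_percent / 100.0)

mark = 100.0                        # kHz, closed-key signal
space = space_frequency(mark, 2.0)  # about 102 kHz with a 2% shift
shift = space - mark                # ~2 kHz of spectrum between the two
```

Both frequencies radiate at full power (one carries the message, one is wasted), which is why the mode was such a poor citizen of the spectrum and was eventually prohibited.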
For those interested in building a lethal high-voltage Plasma Arc Speaker based on Duddell's Singing Arc, John Iovine has written an article on how to do just that for Make Magazine. The core of his project is a 555 timer and an insulated-gate bipolar transistor. Schematics, instructions and a video of it in operation are available at:
Today those of us with access to cell phones and data plans tend to take things like streaming music, news, on-demand videos and face time for granted. Yet the impulse to do more than just talk over the wires has been part of the spirit of telephony since its earliest days. In the 1890s the telephonic playground was still in its infancy, and commercial applications for the technology could have gone in many different directions. During this time entrepreneurial types were coming up with creative experiments for using telephones as a news delivery system or for musical entertainment.
Two years after Elisha Gray played his musical telegraph in 1874, other folks decided it would be a swell idea to transmit music concerts along the commercial telegraph lines, initially for the entertainment of the operators. In 1881 the first "stereo" concert was given via telephone: Clément Ader used dual lines to pass music from a local theater to two separate phone receivers, an arrangement dubbed "binauricular audition," a name that for some reason didn't stick. Later, in 1890, AT&T was at work on a service to provide music for mealtimes. Though there were some issues with sound quality, they stated that "When we have overcome this difficulty we shall be prepared to furnish music on tap." AT&T also had other development plans for the phone lines: used for business during the day, they hoped to "stream" music, lectures, and various oral entertainments to all the cities of the East Coast at night.
Stateside most of these efforts didn't take hold, but a few in Europe did. The first permanent service was an outgrowth of Clément Ader's work, known as the Paris Theatrophone, a subscription service launched in the 1890s. The "Theatrophonic network" provided Parisians with "programs dramatic and lyrical" and held its own until 1932. In Hungary the concept of a telephone newspaper caught on with the Budapest Telefon Hirmondo, which began service in February of 1893. It included news reports, original fiction, and other entertainments. Still going strong in 1925, it added a radio station while continuing to offer a telephone relay to customers all the way up to 1944.
It was within this milieu that Thaddeus Cahill obsessed over and created what must be considered the ultimate behemoth of a musical synthesizer: the Telharmonium, a type of electrical organ intended specifically to be played over the phone lines. Amplifiers hadn't been invented yet, and the phone receiver was the only available technology that could make an electronic sound audible. The Telharmonium implemented sinusoidal additive synthesis by mechanical means, using tonewheels and alternators rather than an oscillating circuit. The discs of a tonewheel have specific numbers of bumps on the edge, which generate a specific frequency through induction as they move past an electromagnetic coil. Frequency and waveform are determined by the shape of the wheel, the number of bumps on it and how often they pass the tip of the magnet. Using multiple tonewheels, a single fundamental frequency can be combined with one or more harmonics to produce complex sounds. The tonewheel was later used in radio work during the pre-vacuum-tube era as a BFO for CW.
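Additive synthesis of the kind the tonewheels performed electromechanically can be sketched digitally: each "wheel" contributes one sine partial and the partials are summed. This is a minimal modern sketch of the technique, not Cahill's design; the fundamental, amplitudes and sample rate are illustrative.

```python
import math

# Additive synthesis in the Telharmonium's manner: each tonewheel
# contributes one sine wave at a harmonic of the fundamental, and the
# partials are summed to shape the timbre.

def additive(tone, t):
    """One sample of a tone given as (fundamental_hz, [(harmonic,
    amplitude), ...]) at time t seconds."""
    fundamental, components = tone
    return sum(a * math.sin(2 * math.pi * fundamental * n * t)
               for n, a in components)

# A 220 Hz tone with softer 2nd and 3rd harmonics:
tone = (220.0, [(1, 1.0), (2, 0.5), (3, 0.25)])
samples = [additive(tone, i / 8000.0) for i in range(8000)]  # 1 s at 8 kHz
```

Changing the per-harmonic amplitudes changes the timbre without changing the pitch, which is exactly the control the Telharmonium's stop-like mixing of tonewheel outputs provided.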
Cahill is credited with coining the term "synthesizer" to describe his instrument, which he patented in 1897. Five years later he founded the New England Electric Music Company with two partners. The Telharmonium, or Dynamophone as it was also called, was first demonstrated in 1906. The instrument was a true boat anchor: the Mark I version weighed in at a hefty 7 tons, light compared to the Mark II and III, which weighed around 200 tons and took up thirty train box cars when shipped to New York for assembly in what Cahill called his "Music Plant". The instrument looked like a power generator and occupied an entire floor at 39th Street and Broadway in New York City. Indeed, the machine put out 670 kilowatts of power. Each generator rotor produced a pitch, and a 60-foot chassis held 145 rotors.
One floor up was Telharmonic Hall, a concert space from which the instrument was controlled and played. Two to four musicians could sit at the controls, a unique arrangement of four keyboard banks, each with 84 keys. Long before the minimalist composers La Monte Young and Terry Riley brought just intonation back into the fold of Western music, it was possible to play the Telharmonium in just intonation. Just intonation differs from equal temperament in that it arises naturally from the overtone series: all the notes in a scale are related by ratios of whole numbers, so the tuning depends on the scale being used. Equal temperament was developed for keyboard instruments so that they could play in any key. With its additive synthesis and its control of timbre, harmonics, and volume, the Telharmonium was an extremely flexible instrument.
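The difference between the two tunings comes down to arithmetic: just intervals are small whole-number ratios of the tonic, while 12-tone equal temperament spaces every semitone by the same factor of 2^(1/12). The tonic frequency below is illustrative.

```python
# Just intonation vs. equal temperament, the tuning choice the
# Telharmonium could offer its players.

JUST_MAJOR = {  # classic 5-limit just ratios for a few intervals
    "unison": 1 / 1, "major third": 5 / 4, "fifth": 3 / 2, "octave": 2 / 1,
}

def just_freq(tonic_hz, ratio):
    """A justly tuned interval: the tonic times a whole-number ratio."""
    return tonic_hz * ratio

def equal_freq(tonic_hz, semitones):
    """12-tone equal temperament: every semitone is a factor 2**(1/12)."""
    return tonic_hz * 2 ** (semitones / 12)

tonic = 264.0                                             # C, illustrative
just_third = just_freq(tonic, JUST_MAJOR["major third"])  # exactly 330 Hz
et_third = equal_freq(tonic, 4)                           # slightly sharp
```

The equal-tempered third comes out a few hertz above the pure 5:4 ratio; that small compromise, repeated on every interval, is the price keyboards pay for playing equally well in all keys.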
Though there was no channel separation, Telharmonic Hall was fitted with eight telephone receivers augmented with paper horns. These were arrayed behind ferns, columns, and furniture. An electrician at the company suggested splicing the current from the Telharmonium into the arc lamps hanging from the ceiling, which then resonated at the same frequency as that being played to create a "singing arc." The Telharmonium could also be piped to any number connected to the AT&T phone system.
Thomas Commerford Martin wrote of the new sounds of the Telharmonium as an alliance of electricity with music. Cahill "has devised a mechanism which throws on the circuits, manipulated by the performer at the central keyboard, the electrical current waves that, received by the telephone diaphragm at any one of ten thousand subscribers' stations, produce musical sounds of unprecedented clearness, sweetness, and purity."
Cahill had ambitious plans for his "Telharmony". He advocated that a form of "electric sleep-music" could be tapped at any time for the cure of modern nervous disorders. The electric drones could also be used to relieve boredom in the workplace. But his plans were not to bear fruit in the manner he thought. His instrument sometimes caused interference or crosstalk on the phone lines, electronic music interrupting business and domestic conversation. It also required vast amounts of power. When vacuum tubes appeared in the 1920s, other, less expensive electronic instruments that did not require the infrastructure provided by Ma Bell began to be built. Finally, with the advent of broadcast radio, many of these types of ventures ceased to be profitable. No known recordings of the Telharmonium exist.
In the 1930s Hammond patented the electrically amplified organ, which was essentially a smaller and more economical version of the Telharmonium. This was much to the chagrin of Cahill's family, as the patent on his instrument had not yet run out. Synth pioneer Robert Moog later recognized the genius of Cahill's work and his seminal place in the history of electronic music.
In his 1946 book, Commercial Broadcasting Pioneer: The WEAF Experiment 1922-1926, William Peck Banning wrote that "historians of the future may conclude that if there was any 'father' of broadcasting, perhaps it was the telephone itself".
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.