Tuning the Terrestrial Monochord, or listening to the Harmony of Earth

Antennas and monochords have a lot in common. A monochord is an ancient musical and scientific instrument made of one long string, similar in that respect to a long single-wire antenna, except that the string is stretched over a sounding box of equal length. One or more movable bridges are slid along the string to demonstrate the mathematical relationships among the frequencies produced and to measure musical intervals. Though it was first mentioned in Sumerian clay tablets, many attribute its invention to Pythagoras in the 6th century BCE. These ancients saw within the monochord a mystic holism in which notes, numbers, ratios and intervals combined with the sense of hearing and mathematical reason. Monochords are related to other instruments such as the Japanese koto, the hurdy-gurdy, and the Scandinavian psalmodikon, the last of which is used as an accompaniment to voice in sacred music. In medicine the sonometer, a variation of the monochord, continues to be used to diagnose hearing loss and to measure bone density in those who may be at risk for osteoporosis.

The discovery of the precise relationship between the pitch of a musical note and the length of the string that produces it is also attributed to Pythagoras. If he had been able to put electricity into wire strings, it might have been Pythagoras who discovered the principle of resonance that makes an antenna match a frequency. What Pythagoras did propose was the idea of the Music of the Spheres, the philosophical conjecture that the movement of celestial bodies creates a form of heavenly music. This theory has haunted the imagination of the West ever since it was first proposed. Later, Plato described astronomy and music as “twinned” studies of sense recognition that both required knowledge of numerical proportions: astronomy was for the eyes and music was for the ears. Now, millennia later, astronomy can be studied with the ears of a radio receiver and number-crunching supercomputers.
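The law Pythagoras is credited with is simple to state: a string's frequency is inversely proportional to its sounding length, so halving the string raises the pitch an octave. A quick sketch of the classic bridge positions (in Python, with an arbitrary 200 Hz open string for the demo):

```python
import math

def string_frequency(open_hz, length_fraction):
    """Pitch of a stopped string: inversely proportional to sounding length."""
    return open_hz / length_fraction

def cents(ratio):
    """Interval size in cents (1200 cents = one octave)."""
    return 1200 * math.log2(ratio)

open_hz = 200.0  # arbitrary open-string pitch for the demo
for fraction, name in [(1/2, "octave"), (2/3, "perfect fifth"), (3/4, "perfect fourth")]:
    f = string_frequency(open_hz, fraction)
    print(f"bridge at {fraction:.3f} of the string -> {f:.1f} Hz ({name}, {cents(f / open_hz):.0f} cents)")
```

Bridging the string at 1/2, 2/3 and 3/4 of its length yields the frequency ratios 2:1, 3:2 and 4:3, the very intervals the Pythagoreans prized.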

In 1618 the physician, scientist and mystic Robert Fludd conceived a divine or celestial monochord linking the Ptolemaic conception of the universe to musical intervals, suggesting that the instrument could also be used to demonstrate the harmony of the spheres. In Fludd’s picture a divine hand reaches down from a cloud to tune the monochord to the celestial frequencies of the planets and the stars. Around two and a half centuries later, scientists started tuning in to the terrestrial frequencies that were unknowingly being picked up by telegraph and telephone lines.

In his masterful book Earth Sound Earth Signal Douglas Kahn writes that “radio was heard before it was invented”. He goes on to describe how the first person to listen to radio was Alexander Graham Bell’s assistant, Thomas Watson, who tuned in with a telephone receiver “during the early hours of the night on a long metal line serving as an antenna before antennas were invented.” Other telephone users also listened to radio for two decades before Marconi made his first transmission. Watson enjoyed listening to the natural VLF signals given off by the earth, though he did not know their origin or even that they were radio at all. The natural signals were picked up on the telephone line acting as an extremely long wire resonant in the VLF range, from around 3 kHz to 30 kHz, corresponding to wavelengths of 100 down to 10 kilometers. Watson’s own line from the lab stretched a half mile down the street. Since he wasn’t transmitting, it didn’t have to be fully resonant to pick up the VLF signals. I like to think of these long antenna wires as a type of terrestrial monochord that tunes in to the harmony of the Earth.
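The arithmetic behind those figures is just wavelength = speed of light / frequency. A small sketch (the quarter-wave calculation for Watson's half-mile line is my own illustration, not a figure from Kahn):

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_km(freq_hz):
    """Free-space wavelength in kilometers."""
    return C / freq_hz / 1000.0

# The VLF band edges:
print(wavelength_km(3_000))   # roughly 100 km
print(wavelength_km(30_000))  # roughly 10 km

# Watson's half-mile line treated as a quarter-wave antenna would be
# naturally resonant near 93 kHz, well above the VLF band, which is
# why it could pick up VLF without being fully resonant.
line_m = 0.5 * 1609.34
print(C / (4 * line_m))
```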

Watson did not try to do anything about the noises he heard on the line, as they did not interfere with voice communication. In fact he actually enjoyed listening to spherics, whistlers, dawn chorus and the other VLF phenomena he likely picked up, even though he didn’t know or understand their cause. I like to listen to this form of natural radio myself. There are a number of live internet streams from people who have set up VLF listening posts, such as those found at http://abelian.org/vlf/. I think those sounds are as relaxing as listening to the surf of the ocean or a gentle breeze in the trees. Kahn goes on to write that nature “has always been the biggest broadcaster, bigger than all governments, corporations, militaries, and other purveyors of anthropic signals combined.” May it remain so.

Fludd’s image of the celestial monochord was made famous in 1952 when it came to adorn the cover of The Anthology of American Folk Music, compiled by Harry Everett Smith and released by Smithsonian Folkways. I think some divine inspiration was passed on to Harry Smith, from the same hand that tunes the instrument, and from him it passed on to all the lives his massive compilation touched. The six-album set brought new levels of cultural awareness to musicians such as Blind Lemon Jefferson, the Carter Family and Mississippi John Hurt and went on to kick-start the folk music revival of the ’50s and ’60s. It had a strong influence on Joan Baez and Bob Dylan, who are acknowledged as disciples of the anthology. It continues to touch new generations of musicians today.

Avant-garde composer and father of minimalism La Monte Young found early inspiration in another type of electrical monochord. He recalled as a child listening to the droning sound of the power plant next to his uncle’s gas station, and became fascinated by the 60-cycle hum of electricity as it moved along the lines. This inspired such pieces as “The Second Dream of the High Tension Line Stepdown Transformer”. John Cale and the late Tony Conrad are among the many influenced by Young’s work; both were involved in Young’s Theatre of Eternal Music. Cale went on to a long and varied career and is notable for being a founding member of the Velvet Underground. During rehearsals with Young, Cale and Conrad would tune their instruments to the 60-cycle electrical hum, what Young called the “underlying drone of the city”.

In the late ’70s composer Alvin Lucier started working with physicist John Trefny on a musical acoustics course they were teaching at Wesleyan University. They had set up a monochord, placing an electromagnet over one end while an audio oscillator drove the wire. The interaction between the magnet’s flux field and the frequency and loudness of the oscillator made the stretched wire vibrate visibly to the naked eye. This demonstration captivated Alvin’s imagination, and he started thinking about building a monochord to be used on the concert stage or in galleries. With some metal piano wire, clamps and a horseshoe magnet he built a portable version whose length could be varied depending on the size of the space. This became his classic piece Music on a Long Thin Wire.

What he did was extend the wire across a room, clamping it to tables at either end. The ends of the wire were connected to the speaker terminals of a power amplifier placed under one table. The amplifier in turn had a sine wave oscillator connected to it, and a magnet straddled the wire at one end. Wooden bridges with embedded contact mics were put under the wire at both ends and routed to a stereo system. This electrified monochord is played by varying the frequency and loudness of the oscillator to create slides, frequency shifts, audible beat frequencies and other sonic effects. Lucier eventually discovered that the instrument could be left to play itself by carefully tuning the oscillator. Air currents, human proximity to the wire, heat or coolness and other shifts in the environment all caused new and amazing sounds to be heard, sometimes spontaneously erupting into triadic harmonies. This electric monochord is an instrument that can play itself, just as the long thin wires of the early telephone and telegraph systems tuned in to the terrestrial harmonies continuously being broadcast by Mother Earth.



Earth Sound Earth Signal: Energies and Magnitude in the Arts by Douglas Kahn
The Hum of the City: La Monte Young and the Birth of NYC Drone by Alan Licht
The Anthology of American Folk Music compiled by Harry Everett Smith
Alvin Lucier, Music on a Long Thin Wire, Lovely Music LCD 1011


The Synthesis of Speech: Part 5: From a Clockwork Orange to DMR

In last month’s episode I explored the genesis of the first song uttered by a computer, Daisy Bell, and how that song ended up in 2001: A Space Odyssey. In this installment on the history of speech synthesis I’ll track the use of the vocoder in popular music on up to its implementation into the DMR radios that are currently a big buzz in the ham community.

In 1968 synth wizard Robert Moog built the first solid-state vocoder. Two years later Moog built another musical vocoder, working with Wendy Carlos. This was a ten-band device inspired by Homer Dudley’s original designs: the carrier signal came from a Moog modular synthesizer, and the modulator was the input from the microphone. The instrument made its debut in Stanley Kubrick’s film A Clockwork Orange, where the vocoder sang the vocal part from the fourth movement of Beethoven’s Ninth Symphony, the section titled “March from a Clockwork Orange” on the soundtrack. It’s something I could sit down and listen to on repeat over and over while enjoying a fine glass of moloko velocet. This was the first recording made with a vocoder, and I find it interesting that the two earliest uses of speech synthesis for music ended up in films made by Kubrick. The song “Timesteps”, an original piece written by Wendy, is also featured on the soundtrack. She had originally intended it as a mere introduction to the vocoder for those who might consider themselves “timid listeners”, but Kubrick surprised Wendy by including it in his dystopian masterpiece.
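A channel vocoder of this kind analyzes the voice by measuring how much energy falls in each of a handful of frequency bands, then uses those envelopes to shape the synthesizer carrier. Here is a toy sketch of just the analysis step, with made-up parameters (an 8 kHz rate and ten 400 Hz bands); it illustrates the idea, not the Moog circuit:

```python
import math

def dft_mags(frame):
    """Naive DFT magnitude spectrum (fine for a short demo frame)."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def band_energies(frame, n_bands=10):
    """Sum the magnitude spectrum over equal-width bands."""
    mags = dft_mags(frame)
    width = len(mags) // n_bands
    return [sum(mags[b * width:(b + 1) * width]) for b in range(n_bands)]

RATE = 8000  # Hz
N = 400      # one 50 ms analysis frame

# Stand-in for the voice: a 440 Hz tone. With ten bands spanning
# 0-4 kHz, its energy should land in band 1 (400-800 Hz).
modulator = [math.sin(2 * math.pi * 440 * t / RATE) for t in range(N)]
env = band_energies(modulator)
print("loudest band:", env.index(max(env)))
```

A full vocoder repeats this measurement many times a second and multiplies the carrier's corresponding bands by the envelopes, which is how the synthesizer appears to speak.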


Coming down the road in 1974 was the classic album Autobahn by the German krautrockers Kraftwerk. This was the first commercial success for the power station of a group. Their previous three albums had been highly experimental, though well worth an evening of listening. Kraftwerk’s contribution to the popularization of electronic music remains huge. Besides using commercial gear such as the Minimoog, the ARP Odyssey, and the EMS Synthi AKS, Kraftwerk were dedicated homebrewers of their own instruments. Listening to the album now I can imagine the band soldering something together in the back of a Volkswagen Westfalia as they cruise down the highway at 120 km/h on to their next gig.

Three years later, in 1977, Electric Light Orchestra released the album Out of the Blue, much to the delight of discerning listeners everywhere. There is nothing quite like the music of ELO to lift me out of the melancholy I often find myself in during the middle of winter, when spring seems far away. “Mr. Blue Sky” and “Sweet Talkin’ Woman” are songs that toggle the happy switches in my brain. When I hear them, things brighten up, due in no small part to the judicious use of the vocoder. ELO was in love with the vocoder, and it can be found littered across their recordings. (As a bit of a phone phreak, another favorite cut is “Telephone Line”.)

During the 1980s the vocoder started being used by early hip-hop and rap groups. Dave Tompkins, author of How to Wreck a Nice Beach: The Vocoder from WWII to Hip-Hop, notes an echo of history in the vocoder’s use alongside two turntables in the SIGSALY program: DJs likewise use two turntables to mix and scratch phat beats while a rap MC drops lyrics over the sounds coming off the vinyl, sometimes processing those vocals through a vocoder. The use of the vocoder continues to the present on hip-hop and jazz fusion albums such as Black Radio (1 & 2) from the Robert Glasper Experiment.


While the vocoder was enjoying great success in the entertainment industry, its use in telecommunications was still ticking away, a bit more quietly, in the background. Since the 1970s most of the technology in this area has focused on linear predictive coding (LPC), a powerful speech analysis technique that represents the spectral envelope of a digital speech signal in compressed form using the information from a linear predictive model. When it came out, the NSA were among the first to get their paws on it, because LPC can be used for secure wireless, with a digitized and encrypted voice sent over a narrow channel. An early example of this is the Navajo I, a telephone built into a briefcase to be used by government agents. About 110 of these were produced in the early ’80s. Several other vocoder systems are used by the NSA for encryption (that we are allowed to know about).

Phone companies like to use LPC for speech compression because it encodes accurate speech at a low bit rate, saving them bandwidth. This had been Homer Dudley’s original intention with his first vocoding experiments back in the 1930s. Now LPC underpins the standard voice codecs of GSM cellular networks, which jam 3.1 kHz of audio into between 6.5 and 13 kbit/s of transmission. Which is why, to my ear, smart phones, for all the cool things they can do with data, apps and GPS, will never sound as good for voice as an old-school toll call on copper wires. LPC is also used in VoIP.
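The "linear predictive" idea is that each speech sample can be modeled as a weighted sum of the few samples just before it, so only the weights (plus a small residual) need be transmitted. A minimal sketch of the textbook autocorrelation method with the Levinson-Durbin recursion, run on a toy signal rather than real speech:

```python
def autocorr(x, max_lag):
    """Autocorrelation of a signal up to max_lag."""
    return [sum(x[n] * x[n - lag] for n in range(lag, len(x)))
            for lag in range(max_lag + 1)]

def levinson_durbin(r, order):
    """LPC coefficients a[j] such that x[n] is predicted by sum(a[j] * x[n-1-j])."""
    a = [0.0] * order
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err                     # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= 1 - k * k                  # remaining prediction error
    return a

# Toy signal obeying x[n] = 0.9 * x[n-1]; LPC should recover that 0.9.
x = [1.0]
for _ in range(199):
    x.append(0.9 * x[-1])

coeffs = levinson_durbin(autocorr(x, 2), 2)
print(coeffs)  # first coefficient close to 0.9, second close to 0
```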

LPC has also been used in musical vocoding. Paul Lansky created the computer music piece notjustmoreidlechatter using LPC. A 10th-order derivative of LPC was used in the popular 1980s Speak & Spell educational toy. These became popular for experimental musicians to hack in a process known as circuit bending, where the toy is taken apart and the connections re-soldered to make sounds not originally intended by the manufacturers. Cincinnati maker and musician Q. Reed Ghazala pioneered this technique and raised it to a high art form. Reed’s experimental instruments have been built for Tom Waits, Peter Gabriel, King Crimson’s Pat Mastelotto, Faust, Chris Cutler, Towa Tei, Yann Tomita, Blur and many other interesting musicians. And not so interesting ones (to me) such as Madonna. A future edition of The Music of Radio will cover his work in detail, but a lot can be found on his website anti-theory.net.

Finally, vocoders are utilized in the DMR radios that are currently gaining popularity among hams around the world. In Ohio the regional ARES groups are being encouraged to utilize this mode as another tool in the box. DMR is an open digital mobile radio standard. DMR, along with P25 Phase II and NXDN, are the main competing technologies for achieving 6.25 kHz equivalent bandwidth using the proprietary AMBE+2 vocoder. This vocoder uses multi-band excitation to do its speech coding. Besides its use in DMR, the AMBE+2 is also used in D-STAR, Iridium satellite telephone systems, and OpenSky trunked radio systems.

From what I’ve heard, I don’t really care for the audio quality of DMR, any more than I do on cell phones. My ears would rather dig through the mud of the HF bands than listen to the way speech is compressed in these modes. I think the vocoder is better suited to music studios, where it can be used for aesthetic effect. However, with the push to use these radios in ARES, and needing something to play with at OH-KY-IN’s digital night on the fourth Tuesday of the month, I do plan on taking the plunge into DMR. And when I do, I will know that every time I have a QSO on the DMR platform I am taking part in a legacy starting with Homer Dudley’s insights into the human vocal system as a carrier wave for speech, a legacy that stretches across the fields of telecommunication, cryptology and popular music.


Chip Talk: Projects in Speech Synthesis by David Prochnow, Tab Books, 1987.
…and some other research on the interwebs.


This piece was originally published in the April 2017 issue of the Q-Fiver.

Posted in Phono Graphology

The Synthesis of Speech: Part 4: A Bicycle Built for Two

Speech synthesis confers a number of benefits to technology end users. It allows individuals with impaired eyesight to be able to operate radios and computers. For those who cannot speak, and who may also have trouble using sign language, speech units such as the device employed by Stephen Hawking allow a person to communicate in ways unthinkable a century ago. For these individuals speech synthesizers play an integral role in adding quality to their day to day lives. On our local repeaters synth voices make announcements about nets and club events, and speech synths read the weather on the NWS frequencies. Beyond these specialized uses, one of the ways everyone can share in the joy of chip talk is through the medium of music.

The IBM 704 was the first computer to sing. It was introduced in 1954, and 140 units had been sold by 1960. The programming languages LISP and FORTRAN were first written for this large machine, which used vacuum-tube logic circuitry. Bell Telephone Laboratories (BTL) physicist John Larry Kelly coaxed the 704 into singing Daisy Bell, aka A Bicycle Built for Two, using a vocoder program he wrote for it.

Lovely as the a cappella computer was, it was deemed in need of instrumental accompaniment. For this part of the song the expertise of fellow BTL employee Max Vernon Mathews was sought out. Max was an electrical engineer whose first love, music, led him to become a pioneer in electronic and computer music. In 1957 he wrote the first computer program for sound generation, MUSIC, also run on the IBM 704. The accompaniment to the voice portion of Daisy Bell was programmed by Max in 1961 using the IBM 7090.

The IBM 7090 was the transistorized version of the 709 vacuum-tube mainframe. The 7090 series was designed for “large-scale scientific and technological applications.” The first of the 7090s was installed in late 1959 at a price tag of close to $3 million; adjusted for inflation, that would be a whopping $23 million today. Besides its musical capabilities, the 7090’s other accomplishments included being used for the control of the Mercury and Gemini space flights. IBM 7090s were also used by the Air Force for the Ballistic Missile Early Warning System up until the 1980s, and Daniel Shanks and John Wrench used one to calculate the first 100,000 digits of pi. Yet none of the above uses compare, in my mind, to the beauty of the IBM 704 joining forces with the IBM 7090 on the song Daisy Bell.

Another computer, HAL 9000, still gets most of the credit for this electronic version of Daisy Bell. Arthur C. Clarke, author of 2001: A Space Odyssey, happened to be visiting his friend and colleague John Pierce at BTL when John Larry Kelly was demonstrating speech synthesis with the IBM 704. Clarke was so fascinated by this computational marvel that six years later he wrote that version of Daisy Bell into his screenplay, sung by HAL in the middle of the machine’s climactic mental breakdown. The song was also included on the vinyl platter Music from Mathematics, put out by the Decca label a handful of decades ago.

Daisy Bell went on to have a notable reprise on the Commodore 64 when Christopher C. Capon wrote his program “Sing Song Serenade”. The sounds for his version were played directly on the hardware by rapidly moving the read/write head of the computer’s floppy disk drive, which emitted the resulting audio.

Max Mathews continued to make strong contributions to the humanities in the realms of music and technology. In 1968 he developed Graphic 1, a graphical system that used a light pen for drawing figures that could be converted into sound. In 1970 Mathews developed GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment) with F. R. Moore. GROOVE  was the first fully developed music synthesis system for interactive composition and realtime performance. It used 3C/Honeywell DDP-24 (or DDP-224) minicomputers.

An algorithm written by Mathews was used by Roger N. Shepard to synthesize Shepard tones. These tones (named after Roger) consist of a superposition of sine waves separated by octaves. When the base pitch of the tone is moved steadily upward or downward, the result is known as the Shepard scale. Playing this scale creates an auditory illusion of a tone that continually ascends or descends in pitch yet seems to get no higher or lower. It is the musical version of a barber pole, or of the Penrose stair, a type of impossible object in geometry made famous in the drawing Ascending and Descending by M.C. Escher.
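The construction is easy to sketch: sine components an octave apart, weighted by a fixed loudness envelope over log-frequency so that components fade in at the bottom of the range and out at the top. A small Python sketch (the ten-octave window and raised-cosine envelope are my own arbitrary choices):

```python
import math

F_MIN, F_MAX = 20.0, 20480.0  # a ten-octave frequency window

def shepard_components(step):
    """(frequency, amplitude) pairs for one semitone step of a Shepard tone."""
    comps = []
    f = F_MIN * 2 ** ((step % 12) / 12)  # pitch class fixes the component set
    while f < F_MAX:
        # position of this component in the log-frequency window, 0..1
        pos = math.log2(f / F_MIN) / math.log2(F_MAX / F_MIN)
        amp = 0.5 - 0.5 * math.cos(2 * math.pi * pos)  # fade in, fade out
        comps.append((round(f, 6), round(amp, 6)))
        f *= 2
    return comps

# Twelve semitone steps up and the component set is exactly where it began:
print(shepard_components(0) == shepard_components(12))  # True
```

Because the amplitude depends only on where a component sits in the fixed window, climbing twelve semitones hands each sine wave's role to the one an octave below it, which is the whole trick of the illusion.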


Max also made a controller, called the Radio-Baton or Radiodrum, used to conduct and play electronic music. Developed at BTL in the 1980s, it was originally a kind of three-dimensional mouse. The device has no inherent sound of its own but produces control signals that are used to trigger sounds, sound production, effects and the like. The Radio-Baton is similar in spirit to a theremin. Capacitive sensing is used to locate the position of the conductor’s baton, or the mallets in the case of the drum. The two mallets are antennas transmitting on slightly different frequencies, and the drum surface, also electronic, acts as another set of antennas. The combination of these antenna signals is used to derive X, Y and Z coordinates, which are interpreted according to the assigned musical parameters.



Besides the use of Daisy Bell in the soundtrack for 2001, director Stanley Kubrick used a wide range of work by modern composers. The piece Atmospheres, written by Gyorgy Ligeti in 1961, was used for the scenes of the monolith and those of deep space. Ligeti’s earlier electronic work Artikulation, though not used in the film, shares an interesting connection with some of the ideas behind speech synthesis. Artikulation was composed in 1958 at the Studio for Electronic Music of West German Radio in Cologne with the help of Cornelius Cardew, an assistant of Karlheinz Stockhausen (whose works involving shortwave radios will be explored in time). The piece was composed as an imaginary conversation of multiple ongoing monologues, dialogues, and many voices in argument and chatter. In it Ligeti created a kind of artificial polyglot language full of strange whispers, enunciations and utterances.


Music from Mathematics: Played by IBM 7090 Computer to Digital Sound Transducer,  Decca LP 9103.
Gyorgy Ligeti: Continuum / Zehn Stucke fur Blaserquintett / Artikulation / Glissandi / Etude fur Orgel / Volumina, Wergo 60161, 1988

Posted in Phono Graphology

The Synthesis of Speech: Part 3: AUDREY

This installment continues the exploration of the development of speech synthesis. So far I’ve investigated the invention of the vocoder and how it was used in the SIGSALY program in WWII. In this episode I explore the other side of the speech synthesis coin: speech recognition. Without the ability of machines to recognize speech on the one hand and to synthesize it on the other, the wunderkinds of today’s consumer electronics, Siri, Dragon and Alexa, would not be possible. With both in place, humans can now speak, and sometimes yell with exasperation, at a wide range of interconnected devices, and our smart phones and Echo Dots will speak back to us. As developments in artificial intelligence take off, the little computer in your pocket may soon speak up for itself and yell back.

In a way it could be said that speech recognition systems began in the 19th century when sound waves were first converted into electrical signals. By 1932 Harvey Fletcher was researching the science of speech perception at that temple of telecommunications, Bell Laboratories. His contributions in this area showed that the features of speech are spread over a wide frequency range. He also developed the articulation index to quantify the quality of a speech channel. Articulation indexes are used in measuring the effectiveness of hearing aids and in industrial settings. Harvey is credited with the invention of an early electronic hearing aid, and is notable for overseeing the creation of the first stereophonic recordings and live stereo sound transmissions, for which he was dubbed the “father of stereophonic sound”.

Interest in speech recognition didn’t end with Fletcher. In 1952, over half a century before Siri or Alexa could respond to a voiced question about where to find the best noodle shop in town (or when the end of the world will be), AUDREY was on the scene. She derived her name from her special power: Automatic Digit Recognition. She was a collection of circuits capable of perceiving numbers spoken into an ordinary telephone. Due to the technological limits of the time she could only recognize the spoken digits “0” through “9”. When the digits were uttered into the mic of the handset, AUDREY would respond by illuminating the corresponding bulb on the front panel of the device. It sounds simple, but this marvel was only achieved after overcoming steep technical hurdles.

S. Balashek, R. Biddulph, and K. H. Davis were the creators of AUDREY. One of the obstacles they faced was to craft a system capable of recognizing the same word when it is said with subtle variations. The spoken digit “7”, for example, is subject to slight differences even when said multiple times by one person. Duration, intonation, quality, volume and timing all change the sound of the word with each individual utterance. To recognize speech amidst all these variables, AUDREY focused on the parts of the sound within each word that vary the least. In this way the machine did not need an exactly spoken match. Roberto Pieraccini put it this way, saying there is less variety “across different repetitions of the same sounds and words than across different repetitions of different sounds and words.”

The exact matches came from the part of speech known as formants. A formant is a harmonic of a note that is augmented by the resonance of the vocal tract when speaking or singing. The information that humans require to distinguish speech sounds can be represented in a spectrogram by peaks in the amplitude/frequency spectrum. AUDREY could locate the formants in the spectrum of each utterance and use them to make a match.

AUDREY also required that there be pauses between words; she couldn’t isolate individual words said in a string. In addition, designated talkers had to be assigned, talkers who could produce the specific formants, otherwise she might not recognize a digit. For each speaker, the reference patterns of the formants, drawn electronically and stored within her memory, had to be fine-tuned. Yet despite all the limitations around her use, the researchers proved that building a machine capable of recognizing human speech wasn’t a pipe dream.
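The template-matching idea can be illustrated with a toy nearest-neighbour sketch. The (F1, F2) values below are invented for illustration; they are not AUDREY's actual stored patterns:

```python
# Hypothetical first- and second-formant references in Hz, one per digit word.
REFERENCES = {
    "one":   (440, 1020),
    "two":   (300, 870),
    "three": (270, 2290),
}

def recognize(f1, f2):
    """Match a measured formant pair to the nearest stored reference."""
    def distance(word):
        rf1, rf2 = REFERENCES[word]
        return (f1 - rf1) ** 2 + (f2 - rf2) ** 2
    return min(REFERENCES, key=distance)

# A slightly off repetition still lands on the right template:
print(recognize(290, 2150))  # three
```

The point, as with AUDREY, is that the measured pattern only needs to be closer to the right reference than to the wrong ones, not identical to it.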

AUDREY was expensive because she was state of the art and all analog. The six-foot high relay rack she kept occupied with all her vacuum-tube circuitry required a lot of upkeep. And she drew a lot of power that really hiked up the electric bill. The invention never really went anywhere in terms of being used as a tool in Ma Bell’s vast monopoly. It could have been used by toll operators or wealthy customers of the telephone to voice dial, but manual dialing was simple, fast, and cheap.

Creating a system that had uniform recognition of words as uttered by multiple people was a dream that had to be fulfilled by other researchers down the line. They built on the sweat equity and foundation of those who went before. The fact that a machine can be made to decipher strange human vocalizations at all is sheer wonder. While others may be fond of Siri, Dragon and Alexa it is AUDREY who will always remain in my heart.

The Voice in the Machine: Building Computers That Understand Speech by Roberto Pieraccini, MIT Press, 2012
Audrey: The First Speech Recognition System

Posted in Phono Graphology

The Synthesis of Speech: Part 2: SIGSALY

This edition of the Music of Radio continues to explore developments around electronically generated speech. Homer Dudley, an engineer and acoustics researcher who worked for Bell Telephone Laboratories (BTL), made significant contributions to this field beginning with his invention of the Vocoder and Voder. The development of these two instruments was detailed in last month’s column. Now I will turn my attention to how the Vocoder was employed in encrypting the transmissions of high-ranking officials during WWII for the SIGSALY program. SIGSALY, by the way, is simply a cover name for the system and is not an acronym.

In 1931 BTL had developed the A-3 scrambler used by Roosevelt and Churchill, but the security of this device was eventually compromised by Germans at a radio post in South Holland who had been intercepting the Prime Minister’s telephone calls. The A-3 worked over the transatlantic telephone by splitting speech up into different bands, but these weren’t difficult to reassemble, as the Germans proved in 1941, making the situation surrounding communications security intolerable to the Allies.

In 1942 the Army contracted BTL to assist with the communication problem and create “indestructible speech”, speech that could withstand attempts at code breaking. From this effort the revolutionary 12-channel SIGSALY system was born. To create SIGSALY, workers sifted through over 80 patents in the general area of voice security. None of these fit the needs of the Allies, but Homer Dudley’s Vocoder did, and it formed the basis of the system. For SIGSALY a twelve-channel Vocoder was used. Ten of the channels measured the power of the voice signal in portions of the voice frequency spectrum (generally 250-3000 Hz). Two channels were devoted to “pitch” information and to whether or not unvoiced (hiss) energy was present. The Vocoder enciphered the speech as it went out over phone or radio. In order for the conversation to be deciphered at each end, an audio crypto key was needed. This came in the form of vinyl records.

From the standpoint of music history it is interesting to note, as Dave Tompkins did in his book How to Wreck A Nice Beach: The Vocoder from WWII to Hip-Hop, that the SIGSALY system employed two turntables alongside the microphone/telephone. The classified name for this vinyl part of the operation was SIGGRUV. The turntables were used to solve the problem of needing a cryptographic key. They played vinyl records produced by the Muzak Corporation, a company famous for the creation of elevator music. The sounds on these records weren’t aimed at soothing weekend shoppers or people sitting in waiting rooms. Muzak had been contracted into pressing vinyl that contained random white noise, like channel 3 on an old television set. The noise was created by the output of very large mercury-rectifier tubes, four inches in diameter and over a foot high. These generated wide-band thermal noise that was sampled every twenty milliseconds. The samples were then quantized into six levels of equal probability. The level information was converted into channels of a frequency-shift-keyed audio tone signal recorded onto a vinyl master. From the master only three copies of a key segment were made. If these platters had been commercial entertainment masters, thousands would have been pressed from the same blueprint. If any SIGGRUV vinyl still exists, and for security reasons it shouldn’t, those grooves are critically rare.
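The "six levels of equal probability" step amounts to slicing the noise distribution at its sextiles, so every level is equally likely and the key carries maximum entropy. A sketch with a pseudo-random generator standing in for the mercury-rectifier tubes:

```python
import random
from bisect import bisect_right

random.seed(42)

# Stand-in for the sampled wide-band thermal noise.
samples = [random.gauss(0.0, 1.0) for _ in range(6000)]

# Thresholds at the empirical sextiles give six equally probable levels.
ordered = sorted(samples)
thresholds = [ordered[i * 1000] for i in range(1, 6)]

levels = [bisect_right(thresholds, s) for s in samples]
counts = [levels.count(k) for k in range(6)]
print(counts)  # each of the six levels holds one sixth of the samples
```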

It had to be ensured that no pattern could be detected, so the records had to be random noise. If the equipment had somehow been duplicated by the Axis powers, the communications would still be uncompromised, as they required the crypto key of the matching vinyl at each terminal. This made the transportation of these records, via armored truck, the most secure since Edison invented the phonograph. Just as the masters were destroyed after making three keys, each vinyl key was only ever to be played once, as operators were instructed to burn after playing. The official instruction read, “The used project record should be cut-up and placed in an oven and reduced to a plastic biscuit of ‘Vinylite’”. As another precaution against the grooves falling into enemy hands, the turntables themselves had a self-destruct mechanism built in that could be activated in case a terminal was compromised. Thinking of all this sheds new light on the idea of a DJ battle.

Keeping the turntables at two different terminals across the globe synchronized was another technical hurdle that BTL overcame. If a needle jumped or the system went out of sync, only garbled speech was heard. At the agreed-upon time, say 1200 GMT, operators listened for the click of the phonograph being cued to the first groove. The turntables were started by releasing a clutch on the synchronous motor that kept the turntable running at a precise speed. Fine adjustments were made using 50-Hertz phase shifters (Helmholtz coils) to account for delays in transmission time. The operators listened for full quieting of the audio as synchronization was established. Oscilloscopes and HF receivers were also used to keep systems locked to international time.

A complete SIGSALY system contained about forty racks of heavy equipment composed of vacuum tubes, relays, synchronous motors, turntables, and custom-made electromechanical equipment. In the pre-transistor era all of this gear drew a heavy load of power, so cooling systems were also required to keep it from getting fried. The average weight of a setup was about 55 tons.

The system passed Alan Turing’s inspection (if not his test) as he had been briefly involved with the project on the British side. On July 15, 1943 the inaugural connection was established between the Pentagon and a room in the basement below Selfridges Department Store in London. Eventually a total of twelve SIGSALY encipherment terminals were established, including some in Paris, Algiers, Manila, Guam and Australia, and one on a barge that ended up in Tokyo Bay. In the year 1945 alone the system trafficked millions of words between the Allies.

To keep all of this operational a special division of the Army Signal Corps was set up, the 805th Signal Service Company. Training commenced in a school set up by BTL and members were sent to various locations. Their tasks required security clearances and a firm grasp of the cutting-edge technology they were to operate and maintain. For every eight hours of operation the SIGSALY systems required sixteen hours of maintenance.

In putting the system together eight remarkable engineering “firsts” were achieved. A review conducted by the Institute of Electrical and Electronics Engineers in 1983 lists them as follows:

1. The first realization of enciphered telephony
2. The first quantized speech transmission
3. The first transmission of speech by Pulse Code Modulation (PCM)
4. The first use of companded PCM
5. The first examples of multilevel Frequency Shift Keying (FSK)
6. The first useful realization of speech bandwidth compression
7. The first use of FSK-FDM (Frequency Shift Keying-Frequency Division Multiplex) as a viable transmission method over a fading medium
8. The first use of a multilevel “eye pattern” to adjust the sampling intervals (a new, and important, instrumentation technique)

To do all these things required precision and refinement in new technology. SIGSALY has left the world with a rich inheritance that spans developments in cryptology, digital communications, and even left its mark on music.


How to Wreck A Nice Beach: The Vocoder from WWII to Hip-hop: The Machine Speaks by Dave Tompkins, Melville House, 2010

SIGSALY: The Start of the Digital Revolution by J.V. Boone and R.R. Peterson, retrieved at:

[This article originally appeared in the January 2017 issue of the Q-Fiver.]


The Synthesis of Speech, Part 1: The Vocoder and Voder

Who doesn’t remember changing their voice as a kid by talking into a fan? Or sneaking off with balloons at a party or dance to inhale the helium and try to talk like a character from a cartoon? One year for Halloween I got a cheap voice-changer toy with three settings and I remember playing with it for hours. But voice changers weren’t always so cheap, and the original was room-sized instead of handheld. The initial reason behind its development had nothing to do with keeping kids amused and was not driven by aesthetic concerns. It was only after Ma Bell and the military had wrapped up their use for the Vocoder that it came to be appreciated for its musical qualities, first by experimental electronic musicians, and later by pop, rock and rap artists. The next few editions of the Music of Radio series delve into the story of electronic speech synthesis, from the Vocoder to the Voder and on to the first text-to-speech computer programs written for gargantuan mainframes. It takes us deep into the stacks of the Bell Laboratory Archives and into the belly of WWII crypto communications before emerging in the 1960s and ’70s, when the stage was set for mind-melting explorations in sonic psychedelia. Just as the Vocoder is still used for artistic effects, the original ideas behind it, compression and bandwidth reduction, continue to be used in new hardware and software applications for radio and telecommunications.

Homer Dudley, the inventor of the Vocoder, was an electronic and acoustic engineer whose primary area of focus revolved around the idea that human speech is fundamentally a form of radio communication. In his paper The Carrier Nature of Speech he wrote that “speech is like a radio wave in that information is transmitted over a suitably chosen carrier.” This realization came to Dudley in October of 1928 while he was laid up in a Manhattan hospital bed. Discoveries are often made from playfully messing around with things, whether in horseplay or boredom, and Dudley was keeping himself entertained just as a kid might, making weird sounds with his voice by changing the shape of his mouth. He had the insight that his vocal cords were acting as a transmitter of a periodic waveform. The nose and throat were the resonating filters while the mouth and tongue produced harmonic content, or formants, to use linguistic lingo. He also observed that the frequencies of his voice vibrated at a faster rate than the mouth itself moved.

These insights went on to have implications for the work he pursued at Bell Laboratories, a true idea factory, where money and resources were thrown at any project that might bear the AT&T monopoly some form of fruit or further advantage in its already sprawling playground of wires and exchanges. Once recovered and back at work Homer thought his discovery might have an application in the area of compression, and he made it his ambition to free up some of the phone company’s precious bandwidth, hoping to pack more conversations onto the copper lines. He was given a corner and allowed to go work in it, devoting himself to his obsession.

He exploited his research in the invention of the Vocoder, or VOice CODER, first demonstrated at Harvard in 1936. It works by measuring how the spectral characteristics of speech change over time. The signal going into the mic is divided by filters into a number of frequency bands. The signal present in each band gives a representation of the spectral energy. This allows the Vocoder to reduce the information needed to store speech to a series of numbers. On the output end, to a speaker or headphone, the Vocoder reverses the process to synthesize speech electronically. Information about the instantaneous frequency of the original voice signal is discarded by the filters, giving the end result its unique robotic and dehumanized character. The amplitude of the modulator in each of the individual analysis bands generates a voltage that controls the amplifier in the corresponding carrier band. The frequency components of the modulating signal are thus mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands. Because the Vocoder does not employ a point-by-point recreation of the wave, the bandwidth used for transmission can be significantly reduced.

There is usually an unvoiced band or sibilance channel on a Vocoder for frequencies outside the analysis bands of typical talking but still important to speech: sounds beginning with the letters s, f, ch and other sibilants. These are mixed with the carrier output for increased clarity, resulting in speech that is recognizable but still roboticized. Some Vocoders have a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency.
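The analysis-and-resynthesis loop described above can be sketched as a toy channel vocoder in a few lines of numpy. This version uses FFT bins in place of analog band-pass filters and imposes the modulator’s per-band energy on a carrier, frame by frame; the band count, frame size and test signals are illustrative choices, not Dudley’s:

```python
import numpy as np

def vocode(modulator, carrier, sr, n_bands=10, frame=256):
    """Toy channel vocoder: impose the modulator's per-band energy
    envelope onto the carrier, frame by frame, using FFT bands."""
    out = np.zeros(len(modulator))
    edges = np.linspace(0.0, sr / 2, n_bands + 1)
    freqs = np.fft.rfftfreq(frame, 1 / sr)
    for start in range(0, len(modulator) - frame + 1, frame):
        m = np.fft.rfft(modulator[start:start + frame])
        c = np.fft.rfft(carrier[start:start + frame])
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = (freqs >= lo) & (freqs < hi)
            m_rms = np.sqrt(np.mean(np.abs(m[band]) ** 2))
            c_rms = np.sqrt(np.mean(np.abs(c[band]) ** 2)) + 1e-12
            c[band] *= m_rms / c_rms        # transfer the spectral envelope
        out[start:start + frame] = np.fft.irfft(c, frame)
    return out

# A buzzy square-wave "carrier" shaped by a sine "modulator":
sr = 8000
t = np.arange(sr) / sr
voiced = vocode(np.sin(2 * np.pi * 220 * t),
                np.sign(np.sin(2 * np.pi * 110 * t)), sr)
```

Note that only the slowly varying band envelopes carry information from the modulator, which is exactly why the scheme needs so much less bandwidth than the raw waveform.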

To better demonstrate the speech synthesis ability of the decoder part of his invention Dudley created another instrument, the Voder (Voice Operating Demonstrator). This was unveiled at the World’s Fair in New York in 1939, where Ray Bradbury was among the attendees who witnessed it firsthand. The Voder synthesized speech by creating the electronic equivalent of a vocal tract. Oscillators and noise generators provided a source of pitched tone and hiss. A 10-band resonator filter controlled by a keyboard converted the tone and hiss into vowels, consonants and inflections. A separate set of keys allowed the operator to make plosive sounds such as “p” and “d” as well as the affricative sounds of “j” in “jaw” and “ch” in “cheese”. Only after months of practice with this difficult machine could a trained operator produce something recognizable as speech.

At the World’s Fair Mrs. Helen Harper, who was noted for her skill, led a group of twenty operators in demonstrations of the Voder, where people from the crowd could come up and ask the operator to make the Voder say something.

Homer Dudley had great success in his aim of reducing bandwidth with the Vocoder. It could chop voice frequencies into ten bands and squeeze them into roughly 300 hertz, a significant reduction from what a phone call required back in the day. Yet it never got used for that purpose. The equipment was too large to be practically installed in homes and offices across the country, even if it created more channels on the phone lines. For a time Dudley worked at marketing the Vocoder to Hollywood for use in audio special effects. It never made much of an impact there, as other voice-changing devices such as the Sonovox started being used in radio jingles and in cartoons. Before it could be discovered by musicians, Homer Dudley’s tool for voice compression had to be put into service in America’s efforts in WWII, where it was used as part of the SIGSALY encryption program. The details surrounding the coding of the voices of MacArthur and Churchill will be explored in next month’s column.


How to Wreck A Nice Beach: The Vocoder from WWII to Hip-hop: The Machine Speaks by Dave Tompkins, Melville House, 2010

“The Carrier Nature of Speech” by Homer Dudley, The Bell System Technical Journal, Vol. 19, No. 4, October 1940

“Fundamentals of Speech Synthesis” by Homer Dudley, Journal of the Audio Engineering Society, Vol. 3, No. 4, October 1955

[This article originally appeared in the December 2016 issue of the Q-Fiver]


Lev Theremin and the Vibrations of the Ether Part 2

Lev Theremin’s skill at invention was not lost on the Soviet machine. Not long after his musical instrument was patented, the radio watchman security device it was based on started being employed to guard the treasures of gold and silver Lenin had plundered from church and clergy. The watchman was also being used to protect the state bank. Setting up and installing these early electronic traps took him away from his primary interest in scientific research. Just as he was approaching the limits of his frustration his mentor at the Institute gave him a new problem to solve, that of “distance vision” or the transmission and reception of moving images over the airwaves. The embryonic idea for television was in the air at the time but no one had figured out how to make it a reality. The race was on and the Soviets wanted to be first to crack the puzzle.

Having researched the issue extensively in the published literature, Lev was ready to apply the powers of his mind toward a solution. In the Soviet Union parts weren’t always readily available. Some were smuggled in, and others had to be scavenged from flea markets, the latter a process very familiar to radio junkies. By 1925 he had created a prototype from his junk box using a rotating disk with mirrors that directed light onto a photocell. The received image had a resolution of sixteen lines, and it was possible to make out the shape of an object or person but not identifiable details. Other inventors in Russia and abroad were also tackling the issue. Fine-tuning the instrument over the next year he doubled the resolution to 32 lines and then, using interlaced scanning, to 64. Having created a rudimentary “Mechanism of Electric Distance Vision” he demonstrated the device and defended his thesis before students and faculty from the physics department at the Polytechnic Institute. Theremin had built the first functional television in Russia.

After this period Lev embarked for Europe and then America, where he lived for just over a decade engaging the public, generating interest in his musical instrument, and doing work with RCA. As Hitler gathered power he grew anxious about the encroaching war and returned home to the Soviet Union in 1938. He barely had time to settle back in when he was sent to the Kolyma gold mines for forced labor for the better part of a year. This was done as a way of breaking him, a fear tactic that could be held over his head if he didn’t cooperate: do what we say or go back to the mines. The state had better uses for him. He was picked up by the police overlord Lavrenti Beria, who sent him to work in a secret laboratory that was part of the Gulag camp system. One of his first jobs was to build a radio beacon whose signals would help track down missing submarines, aircraft and smuggled cargo.

With WWII winding to a close the Cold War was dawning and Russia was on the offensive, trying to extend its reach and gather intelligence on such lighthearted subjects as the building of atomic bombs. In their efforts at organized espionage the Soviets sifted for all the data they could get from foreign consulates. Having succeeded with his beacon Lev was given another assignment. This time the goal wasn’t to track down cargo or vehicles but to intercept U.S. secrets from inside Spaso House, the residence of the U.S. Ambassador. Failure to do the bidding of his boss would mean a return to the mines. His boss had high demands for the specifications of the bug Lev was to plant. The proposed system could have no microphones and no wires and was to be encased in something that didn’t draw attention to itself.

The bug ended up being put inside a wooden carving of the Great Seal of the United States and was delivered by a delegation of Soviet Pioneers (their version of Boy Scouts) on July 4, 1945. Deep inside this “gesture of friendship” was a miniature metal cylinder with a nine-inch antenna tail. The device was passive and was not detected by the X-rays used at Spaso House in their routine scans. It only activated when a beam of 330 MHz was directed at the seal from a nearby building. A metal plate inside the cylinder resonated as a tuned circuit when hit by the beam. Below the beak of the eagle the wood was thin enough to act as a diaphragm, and vibrations from it caused fluctuations in the capacitance between the plate and the diaphragm, creating a microphone. The modulations this produced were picked up by the antenna and transmitted out to a receiver at a Soviet listening post. Using this judiciously, the Soviets were able to gain intelligence that aided a number of strategic decisions. The Great Seal bug is considered a grandfather of RFID technology.

This wasn’t the last time Lev was asked to develop wireless eavesdropping technology. For the next job his overseers upped the ante: no device could be planted in the site targeted for surveillance. The operation was code-named Snowstorm. Lev used his interest in optics to figure out a method. Knowing that the window panes of a room vibrate slightly when people talk, he needed a way to detect and read those vibrations from a distance. Resonating glass contains many simultaneous harmonics, and it would be difficult to find the place of least distortion from which to extract a voice signal. Then there was the obstacle of reinterpreting the signal back into a speech pattern. Using an infrared beam focused on the optimum spot and catching its reflection in an interferometer with a photo element he was able to pick up communications. Back at his monitoring post he used his equipment and skills to reduce the large amounts of noise in the signal.

A few years later Lev was released from his duties at the lab, but was kept on a tight leash and not allowed to leave Moscow.


For those amateurs wishing to build and play a theremin there are many commercial kits available on the market. However a simple theremin can be built using just three AM radios. If you don’t already have these lying around the house they can easily be obtained from your local thrift store.

One of the radios will be a fixed transmitter, another a variable transmitter and the third the receiver. The volume knobs on the fixed and variable transmitters can be turned all the way down, as they are just used to produce the intermediate-frequency oscillations that will be picked up by the receiver. The receiver radio should be set to an unused frequency in the upper range of the AM band, such as 1500 kHz. If it is in use, tune to a nearby spot where only static is heard. The fixed and variable transmitters should then be tuned 455 kHz below where your receiver is set, in this example 1045 kHz. 455 kHz is the common local oscillator offset, although there can be variations. As these frequencies are set the receiver should start to make a whistling sound, the production of a beat frequency.

The next step is to open up the variable radio and look for the variable capacitor, often housed in white plastic with four screws. Find the terminal that takes the station out of tune and use an alligator clip to attach it to the antenna, or solder a wire from the antenna to the oscillator terminal. Now the controls will have to be adjusted slightly again. Tune the fixed transmitter until the receiver starts whistling and have fun playing with the sounds it creates.
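The tuning arithmetic behind this build is the standard superheterodyne relationship: an AM set’s local oscillator runs one intermediate frequency (commonly 455 kHz) above the station it is tuned to, so a radio dialed to 1045 kHz leaks a weak signal at 1500 kHz, right where the receiver is listening. A quick sanity check:

```python
IF_KHZ = 455  # common AM intermediate frequency; some sets differ

def transmitter_dial(receiver_khz, if_khz=IF_KHZ):
    """Where to tune the two 'transmitter' radios so that their
    local oscillators land on the receiver's frequency."""
    return receiver_khz - if_khz

def local_oscillator(dial_khz, if_khz=IF_KHZ):
    """A superhet's local oscillator runs one IF above the dial."""
    return dial_khz + if_khz

dial = transmitter_dial(1500)   # -> 1045, as in the example above
leak = local_oscillator(dial)   # -> 1500, heard by the receiver
```

If your radios use a different IF, substitute it for 455 and the same subtraction gives the dial setting.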


Theremin: Ether Music and Espionage by Albert Glinsky, University of Illinois Press, 2000
How to Make a Basic Theremin by eltunene: https://app.box.com/s/kgdstzwaoc/1/17284427/181802859/1


Lev Theremin and the Vibrations of the Ether Part 1

The sound of the theremin has become synonymous with the spectral and spooky sci-fi horror flicks of the 1940s and ’50s. Its trilling oscillations conjure up images of flying saucers made from hub caps and fishing line. When most folks hear and see the theremin they tend to think of it as little more than a novelty or scientific amusement. While it may have fallen out of favor in horror movie soundtracks it has remained a mainstay within the field of electronic music. It is distinguished among all musical instruments by being the only one that is played without touching the instrument itself. To the radio and electronics buff the theremin is worth exploring as a way of learning about electromagnetic fields and the creative use of the heterodyning effect for artistic purposes. Whether the quivering sounds the instrument pulls out of the ether appeal to a listener is a matter of individual preference.

The inventor of the theremin, or etherphone as it was first dubbed, was Lev Termen. He was born in Russia in 1896, a few years before Marconi achieved wireless telegraphy. As a young boy he spent his time reading the family encyclopedia and was fascinated by physics and electricity. At five he had started playing piano, and by nine had taken up the cello, an instrument that had an important influence on the way theremins are played. After showing promise in class he was asked to do independent research with electricity at the school physics lab. There he began an earnest study of high-frequency currents and magnetic fields, alongside optics and astronomy. It was around this time Lev met Abram Ioffe, a rising physicist whom he would work under in a variety of capacities. Yet his studies in atomic theory and music were overshadowed by the outbreak of WWI. In 1916 he was summoned by the draft and moved to Petrograd where his electrical experience saved him from the front lines. He was placed in a military engineering school and landed in the Radio Technical Department to do work on transmitters and oversee the construction of a powerful and strategic radio station. In the course of the war the station had to be disassembled and Lev oversaw the blowing up of a 120 meter antenna mast. Another wartime duty was as a teacher instructing other students to become radio specialists.

As Lev’s reputation grew among engineers and academic scientists he was eventually asked to go and work with Abram Ioffe at the Physico-Technical Institute, where he became the supervisor of a high-frequency oscillations laboratory. Lev’s first assignment was to study the crystal structure of various objects using X-rays. At this time he was also experimenting with hypnosis, and Ioffe suggested he take his findings on trance-induced subjects to the psychologist Ivan Pavlov. Though Lev resented radio work, preferring the exploration of atomic structures, Ioffe pushed him to work more systematically with radio technology. Now in the early 1920s Lev busied himself thinking of novel uses for the audion tube.

His first project involved exploiting the human body’s natural electrical capacitance to set up a simple burglar alarm circuit that he called the “radio watchman”. The device was made by using an audion as a transmitter at a specific high frequency directed to an antenna. This antenna only radiated a small field of about sixteen feet. The circuits were calibrated so that when a person walked into the radiation pattern it would change the capacitance, cause a contact switch to close, and set off an audible signal. He was next asked to create a tool for measuring the dielectric constant of gases under a variety of conditions. For this he made a circuit and placed a gas between two plates of a capacitor. Changes in temperature were measured by a needle on a meter. The device was so sensitive it could be set off by the slightest movement of the hand. It was refined by adding an audion oscillator and tuned circuit. The harmonics generated by the oscillator were filtered out to leave a single frequency that could be listened to on headphones.

As Lev played with this tool he noticed again how the presence of his movements near the circuitry registered as variations in the density of the gas, now measured by a change in pitch. Closer to the capacitor the pitch became higher; further away it became lower. Shaking his hand created vibrato. His musical self, long dormant under the influence of communism, came alive and he started to use this instrument to tease out the fragments he loved from his classical repertoire. Word quickly traveled around the institute that Theremin was playing music on a voltmeter. Ioffe encouraged Lev to refine what he had discovered, the capacitance of the body interacting with a circuit to change its frequency, into an instrument. To increase the range and have greater control of the pitch he employed the heterodyning principle. He used two high-frequency oscillators generating the same note in the range of 300 kHz, beyond human hearing. One frequency was fixed; the other was variable and could move out of sync with the first. He attached the variable circuit to a vertical antenna on the right-hand side of the instrument. This served as one plate of a capacitor while the human hand formed the other. The capacitance rose or fell depending on where the hand was in relation to the antenna. The two frequencies were then mixed into a beat frequency within audible range. To play a song the hand is moved to various distances from the antenna, creating a series of beat-frequency notes.
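The heterodyning principle is easy to verify numerically: multiplying two sinusoids at f1 and f2 produces components at f1 − f2 and f1 + f2, so two oscillators near 300 kHz detuned by a few hundred hertz yield an audible difference tone. A sketch (the 2 MHz sample rate and 440 Hz detuning are arbitrary illustrative choices):

```python
import numpy as np

sr = 2_000_000                        # 2 MHz sample rate to represent the RF
t = np.arange(int(0.05 * sr)) / sr    # 50 ms of signal
fixed = np.sin(2 * np.pi * 300_000 * t)      # fixed oscillator
variable = np.sin(2 * np.pi * 300_440 * t)   # variable oscillator, detuned by the hand
mixed = fixed * variable              # mixing = multiplication
# sin(a)sin(b) = [cos(a-b) - cos(a+b)]/2: energy at 440 Hz and 600,440 Hz
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / sr)
audible = freqs < 20_000              # keep only the audio-range components
beat = freqs[audible][np.argmax(spectrum[audible])]
```

The strongest audible component sits at the 440 Hz difference frequency; moving the hand shifts the variable oscillator and thus the pitch of the beat.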

To refine his etherphone further he designed a horizontal loop antenna that came out of the box at a right angle. Connected to carefully adjusted amplifier tubes and circuits, this antenna was used by the other hand to control volume. The newborn instrument had a range of four octaves and was played in a manner similar to the cello, as far as the motions of the two hands were concerned. After playing the instrument for his mentor, he performed a concert in November of 1920 to an audience of spellbound physics students. In 1921 he filed for a Russian patent on the device.

Sources: Theremin: Ether Music and Espionage by Albert Glinsky, University of Illinois Press, 2000


The Audion Piano

No man works in a vacuum. Before the industry of radio got off the ground it had been customary for researchers to use each other’s discoveries with complete abandon. As technical progress in the field of wireless communication moved from the domain of scientific exploration to commercial development, financial assets came to be at stake and rival inventors soon got involved in one of the great American pastimes: lawsuits. The self-styled “Father of Radio” Lee De Forest was involved in a number of infringement controversies. The most famous of these involved his invention of the audion (from audio and ionize), an electronic amplifying vacuum tube.

It was Edison who first produced the ancestor of what became the audion. While working on the electric light bulb he noticed that one side of the carbon filament behaved in a way that caused the blackening of the glass. Working on this problem he inserted a small electrode and was able to demonstrate that it would only operate when connected to the positive side of a battery. Edison had formed a one way valve. This electrical phenomenon made quite the impression on another experimenter, Dr. J. Ambrose Fleming, who brought the device back to life twenty years later when he realized it could be used as a radio wave detector.

At the time Fleming was working for Marconi as one of his advisers. It occurred to him that “if the plate of the Edison effect bulb were connected with the antenna, and the filament to the ground, and a telephone placed in the circuit, the frequencies would be so reduced that the receiver might register audibly the effect of the waves.” Fleming made these adjustments. He also substituted a metal cylinder for Edison’s flat plate. The sensitivity of the device was improved by increasing electronic emissions. This great idea in wireless communication was called the Fleming valve.

Fleming had patented this two-electrode tube in England in 1904 before giving the rights to the Marconi Company, who took out American patents in 1905. Meanwhile Lee De Forest had read a report from a meeting of the Royal Society where Fleming had lectured on the operation of his detector. De Forest immediately began experimentation with the apparatus on his own and found himself dissatisfied. Between the cathode and anode he added a third element made up of a platinum grid that received current coming in from the antenna. This addition proved to transform the field of radio, setting powerful forces of electricity, as well as litigation, into motion.

The audion increased amplification on the receiving side but radio enthusiasts were doubtful about the ability of the triode tube to be used with success as a transmitter. De Forest had been set upon by financial troubles involving various scandals in the wireless world and was persuaded to sell his audion patent in 1913.

Edwin Howard Armstrong had been fascinated by radio since his boyhood and was an amateur by age fifteen when he began his career. Some of his experimentation was with the early audions that were not perfect vacuums (De Forest had mistakenly thought a little bit of gas left inside was beneficial to receiving). Armstrong took a close interest in how the audion worked and developed a keen scientific understanding of its principles and operation. As a young man at Columbia University in 1914, working alongside Professor Morecroft, he used an oscillograph to make comprehensive studies based on his fresh and original ideas. In doing so he discovered the regenerative feedback principle, yet another revolution for the wireless industry. Armstrong revealed that when feedback was increased beyond a certain point a vacuum tube would go into oscillation and could be used as a continuous-wave transmitter. Armstrong received a patent for the regenerative circuit.

De Forest in turn claimed he had already come up with the regenerative principle in his own lab, and so the lawsuits began, continuing for twenty years with victories that alternated as fast as electric current. Finally in 1934 the Supreme Court decided in De Forest’s favor. Armstrong however would achieve lasting fame for his superheterodyne receiver, invented in 1918.

Around 1915 De Forest used heterodyning to create an instrument out of his triode valve, the Audion Piano. This was the first musical instrument created with vacuum tubes. Nearly all electronic instruments after it were based on its general schematic up until the invention of the transistor.

The instrument consisted of a single keyboard manual and used one triode valve per octave. The set of keys allowed one monophonic note to be played per octave. Out of this limited palette it created variety by processing the audio signal through a series of resistors and capacitors to vary the timbre. The Audion Piano is also notable for its spatial effects, prefiguring the role electronics would play in the spatial movement of sound. The output could be sent to a number of speakers placed around the room to create an enveloping ambiance. De Forest later planned to build an improved version with separate tubes for each key, giving it full polyphony, but it is not known if it was ever created.

In his grandiose autobiography De Forest described his instrument as making “sounds resembling a violin, cello, woodwind, muted brass and other sounds resembling nothing ever heard from an orchestra or by the human ear up to that time – of the sort now often heard in nerve racking maniacal cacophonies of a lunatic swing band. Such tones led me to dub my new instrument the ‘Squawk-a-phone’….The Pitch of the notes is very easily regulated by changing the capacity or the inductance in the circuits, which can be easily effected by a sliding contact or simply by turning the knob of a condenser. In fact, the pitch of the notes can be changed by merely putting the finger on certain parts of the circuit. In this way very weird and beautiful effects can easily be obtained.”

In 1915 an Audion Piano concert was held for the National Electric Light Association. A reporter wrote the following: “Not only does De Forest detect with the Audion musical sounds silently sent by wireless from great distances, but he creates the music of a flute, a violin or the singing of a bird by pressing a button. The tone quality and the intensity are regulated by the resistors and by induction coils…You have doubtless heard the peculiar, plaintive notes of the Hawaiian ukulele, produced by the players sliding their fingers along the strings after they have been put in vibration. Now, this same effect, which can be weirdly pleasing when skilfully made, can be obtained with the musical Audion.”

Fast forward to 1960. The Russian immigrant and composer Vladimir Ussachevsky is doing deep work in the trenches of the cutting-edge facilities at the Columbia-Princeton Electronic Music Center, one of the first electronic music studios anywhere. Its flagship piece of equipment was the RCA Mark II Sound Synthesizer, alongside banks of reel-to-reels and customized equipment. Ussachevsky received a commission from a group of amateur radio enthusiasts, the De Forest Pioneers, to create a piece in tribute to their namesake. In the studio Vladimir composed something evocative of the early days of radio and titled it “Wireless Fantasy”. He recorded Morse code signals tapped out by early radio guru Ed G. Raser on an old spark generator in the W2ZL Historical Wireless Museum in Trenton, New Jersey. Among the signals used were: QST; DF, the station ID of Manhattan Beach Radio, a well known early broadcaster with a range from Nova Scotia to the Caribbean; WA NY, for the Waldorf-Astoria station that started transmitting in 1910; and DOC DF, De Forest’s own code nickname. The piece ends suitably with AR, for end of message, and GN for good night. Woven into the various wireless sounds used in this piece are strains of Wagner’s Parsifal, treated with the studio equipment to sound as if it were a shortwave transmission. Lee De Forest had played a recording of Parsifal, then heard for the first time outside of Germany, in his first musical broadcast.

Here is a YouTube video of the song "Wireless Fantasy":

It is also available on the CD:

Vladimir Ussachevsky, Electronic and Acoustic Works 1957-1972, New World Records


History of Radio to 1926 by Gleason L. Archer, The American Historical Society, 1938
The Father of Radio by Lee De Forest

Posted in Phono Graphology

Into the Ruins & Archdruid Report ‘Zine

This just in from Joel Caris, editor of the fabulous new science fiction quarterly, Into the Ruins:

"I'm quite happy to announce that Into the Ruins was not just a one-off accomplishment; in fact, the second issue is finished and ready for your reading pleasure. This Summer 2016 issue features a ton of great content, with five new and fantastic short stories from Jay Cummings, Chloe Woods, Bart Hillyer, Lawrence Buentello, and the returning G. Kay Bishop. From a distant civilization that cycles through the same ebb and flow of peace and warfare we find littered throughout human history, to a melancholic meditation on our fast-changing world set in 2020 that feels eerily familiar to today; from a love story set in a less energy-intensive time, to a haunting encampment at the edge of dry and dusty ruins; and on again to an adventurous and amusing attempt to deliver a key new manuscript on the herbal treatment of spinal meningitis to a distant library, these stories inspire a wide range of emotions, from meditative reflection on the predicament of our times to delight at unexpected adventure.

In addition, this issue features the debut of “Deindustrial Futures Past,” a new column from John Michael Greer which will be a recurring feature in future issues. In “Deindustrial Futures Past,” Greer will be exploring a variety of deindustrial SF works from the past, and he focuses on Edgar Pangborn for the first go. Justin Patrick Moore returns with a new review, as well, taking a look at Joëlle Anthony’s Restoring Harmony. A new Editor’s Introduction, a variety of letters to the editor, and a very short story excerpt from me round out the issue. All of this comes as a 112 page, 7″ x 10″ paperback with another beautiful cover by W. Jack Savage.

Subscribers should be receiving their issues shortly, and those who aren't ready to subscribe but who would like to check out the first issue anyway are encouraged to order a copy to peruse at their pleasure. Direct purchases from Figuration Press for shipment next week are available at that link (though order soon for immediate shipment, as I won't be able to mail issues from August 5th – August 15th). In addition, you can order directly from Amazon. For international readers, you can go to the issue page for links to the international Amazon sites it's available at, or for a link to order directly from CreateSpace, which ships throughout the world. Finally, a digital version is also available through Payhip for $7.50 (or more, if you care to increase your support).

As always, I encourage readers to send their thoughts and feedback to me at editor@intotheruins.com, both as casual emails (rambling acceptable!) and as official letters to the editor that I can consider for publication in the third issue of Into the Ruins, coming before too long. Comments for contributing authors will be happily forwarded on.

Lastly, I want to again provide a huge thanks to John Michael Greer for his myriad forms of support; Shane Wilson, who proved a steady and invaluable Assistant Editor, catching mistakes I otherwise missed; Justin Patrick Moore, for another great book review; my amazing partner Kate O'Neill, who continues to put up with me devoting so much attention to this project; those who wrote letters to the editor and who have helped diversify the views available in the magazine; W. Jack Savage, for again providing such a beautiful cover, and for being patient with me; and of course all the fantastic authors published herein, whose imaginative works form the backbone of this publication and, ultimately, are the reason it exists. And finally, to everyone who has subscribed (or who has yet to subscribe), thank you for supporting this project and helping to make it happen.

Now go read the issue and enjoy some fantastic deindustrial and post-peak science fiction!” – Joel Caris, Editor & Publisher


Also, The Archdruid Report is now available as a monthly printed subscription ‘zine. Here are the details from John Michael Greer: “On a less dismal note, I’m pleased to report that the print edition of The Archdruid Report is up and running, and copies of the first monthly issue will be heading out soon. There’s still time to subscribe, if you like getting these posts in a less high-tech and more durable form; please visit the Stone Circle Press website.”

Posted in Bibliomancy