Sothis Medias

Two Pioneers of Spread Spectrum Radio

6/1/2019


 
In wireless communications, spread spectrum radio is a transmission technique in which the frequency of the signal is intentionally varied. This gives the signal a much greater bandwidth than if its frequency had remained constant. In the conventional transmission and reception of signals, the frequency does not change over time, except for small fluctuations due to modulation. The signal is kept on a single frequency so two people communicating can exchange information, or so a listener in the broadcast bands knows exactly where to go to find a favorite station.
​
That is all fine and dandy for typical uses of radio. But as radio developed, the inventors and researchers who expanded the state of the art found a couple of hitches that made it problematic for certain types of signals to remain parked on one frequency. The first was interference caused by deliberate jamming on the desired frequency. This category also included non-malicious interference coming from transmissions on nearby frequencies. The second issue arises when the information being transmitted is of a sensitive nature. Constant-frequency signals are easy to intercept. The military and others can make use of codes and encryption to veil transmissions on single frequencies, but codes can be broken. Radio researchers found that another layer of communication security could be added by the use of frequency hopping, the first technique established in spread spectrum radio.

Though attributed to multiple inventors, the first patent for frequency hopping was granted to actress Hedy Lamarr and composer George Antheil in 1942 for their "Secret Communications System" that was designed to protect Allied radio-guided torpedoes from being jammed by the Axis powers. Both Hedy and George are most remembered for their main fields of activity, movies and music, but they each had a touch of the polymath inside of them, and their other passions allowed them to make a significant advance in the radio arts.
Hedy was born in 1914 in Vienna and started training in the theater as a teenager in the 1920s. By the age of eighteen she had married the first of her six husbands, Friedrich "Fritz" Mandl, a wealthy munitions manufacturer whose weapon systems later gave her inspiration for the patent. During this time she had started a career in film in Czechoslovakia with the 1933 film Ecstasy, which became controversial for its frank depictions of nudity and sexuality. Hubby Mandl got a bit ticked off by these movie scenes and attempted to stop Hedy from continuing her career as an actress. In her autobiography Ecstasy and Me she claimed that she was kept virtually a prisoner in their Austrian castle home. She wrote, "I knew very soon that I could never be an actress while I was his wife.... He was the absolute monarch in his marriage.... I was like a doll. I was like a thing, some object of art which had to be guarded—and imprisoned—having no mind, no life of its own". And Hedy had a keen mind with a natural talent for science and invention.

Both Mandl and Lamarr had Jewish parents, but Mandl also had business ties with the Nazi government, to whom he sold his weapons. Mussolini and Hitler were among those who attended the lavish parties Mandl hosted at their Schloss Schwarzenau castle. Hedy would accompany him to his meetings, where she got to associate with scientists and professionals involved in military technology. It was at these conferences that her interest in inventing and applied science was first sparked.

As her marriage grew unbearable she decided to flee to Paris, where she met movie mogul Louis B. Mayer, who was scouting for talent. With all the trouble brewing in Europe he found it easy to persuade her to move to Hollywood, where she arrived in 1938 and began work on the film Algiers. She appeared in a number of other popular feature films, including I Take This Woman (1940), Comrade X (1940), Come Live With Me (1941), H.M. Pulham, Esq. (1941), and her most famous role in Cecil B. DeMille's Samson and Delilah (1949). After starring in the comedy My Favorite Spy (1951) with Bob Hope her acting career started to peter out.

It was during the height of WWII, and of her career, that she grew bored with acting. Hedy complained that the roles given to her required little challenge in terms of technique or the delivery of lines and monologues. Mostly the films she starred in cast her for her beauty rather than her talent and ability. Stifled by the lack of more demanding roles, she found an outlet for her intellectual capacities through the hobby of tinkering and inventing, which was nurtured by her friendship with aviation tycoon Howard Hughes.

Lamarr had some ideas about using radio-controlled torpedoes in the war effort. To help her implement them she eventually tapped composer George Antheil, who had also found success in Hollywood scoring films. Antheil had been a part of the Lost Generation, and like many of his contemporaries such as Ernest Hemingway, he had moved to Europe after the horrors of the First World War to live a bohemian and artistic life amidst the cafes and salons of Paris in the 1920s. It was during this period that he composed his best-known work, Ballet Mecanique. It began its life as an accompaniment to the Dadaist film of the same name made by Fernand Léger and Dudley Murphy, with cinematography by Man Ray. The techniques Antheil developed in this composition were to be key to the success of his shared frequency hopping patent.

Ballet Mecanique was scored to use a number of player pianos. He described their effect as "All percussive. Like machines. All efficiency. No LOVE. Written without sympathy. Written cold as an army operates. Revolutionary as nothing has been revolutionary." There are no human dancers. The mechanical instruments are what make it a ballet. Antheil's original conception was to use 16 specially synchronized player pianos, two grand pianos, electronic bells, xylophones, bass drums, a siren and three airplane propellers. There were a number of difficulties involved in this set-up that broke away from traditional orchestral arrangements. The synchronization of the player pianos proved to be the largest obstacle. The piece consists of periods of music and interludes of relative silence filled by the droning roar of airplane propellers. Antheil described it as "the rhythm of machinery, presented as beautifully as an artist knows how."
Besides composing, Antheil was a writer and a fierce patriot. He was a member of the Hollywood Anti-Nazi League and wrote a book of predictions about WWII titled The Shape of the War to Come. He also penned a nationally syndicated newspaper column on relationship advice, and he fancied himself an expert on the subject of female endocrinology. His interest in this area was what first brought him into contact with Hedy. She had sought him out for advice on how she might enhance her upper torso. After he proposed that she could make use of glandular extracts, their conversation turned to the kind of torpedoes being used in the war.
​
Lamarr was herself a staunch supporter of her adopted country, though she didn't become a naturalized citizen until 1953. Using knowledge she had gained from her first marriage to the munitions manufacturer, she had the insight that radio-controlled torpedoes would excel in the fight against the Axis powers. However, the radio signals could easily be jammed and the torpedo sent off course. Working with Antheil she devised their "Secret Communications System".

The action of composing for the player pianos helped Antheil with one of the aspects of creating their system, which had a striking resemblance to the still top secret SIGSALY system. It is best described in the overview of their patent number 2,292,387: "Briefly, our system as adapted for radio control of a  remote craft, employs a pair of synchronous records, one at the transmitting station and one at the receiving station, which change the tuning of the transmitting and receiving apparatus from time to time, so that without knowledge of the records an enemy would be unable to determine at what frequency a controlling impulse would be sent. Furthermore, we contemplate employing records of the type used for many years in player pianos, and which consist, of long rolls of paper having perforations variously positioned in a plurality of longitudinal rows along the records. In a conventional player piano record there may be 88 rows of perforations, and in our system such a record would permit the use of 88 different carrier frequencies, from one to another of which both the transmitting and receiving station would be changed at intervals. Furthermore, records of the type described can be made of substantial length and may be driven slow or fast. This makes it possible for a pair of records, one at the transmitting station and one at the receiving station, to run for a length of time ample for the remote control of a device such as a torpedo. The two records may be synchronized by driving them with accurately calibrated constant-speed spring motors, such as are employed for driving clocks and chronometers. However, it is also within the scope of our invention to periodically correct the position of the record at the receiving station by transmitting synchronous impulses from the transmitting station. The use of synchronizing impulses for correcting the phase relation of rotary apparatus at a receiving station is well-known and highly developed in the fields of automatic telegraphy and television."
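
To make the idea in the patent concrete, here is a minimal sketch in Python of the hopping logic: two stations independently derive the same sequence of carrier frequencies from a shared secret, the software stand-in for the pair of synchronized piano rolls. The channel list, hop count, and seed below are illustrative assumptions, not values from the patent.

import random

# 88 hypothetical carrier frequencies, one per piano-roll row (MHz).
CHANNELS = [30.0 + 0.1 * n for n in range(88)]

def hop_sequence(shared_seed, hops):
    """Both stations derive the same hop order from a pre-shared secret,
    playing the role of the paired perforated rolls in the patent."""
    rng = random.Random(shared_seed)
    return [rng.choice(CHANNELS) for _ in range(hops)]

# Transmitter and receiver each generate the sequence independently.
tx_hops = hop_sequence(shared_seed="piano-roll-17", hops=10)
rx_hops = hop_sequence(shared_seed="piano-roll-17", hops=10)

for interval, (tx, rx) in enumerate(zip(tx_hops, rx_hops)):
    assert tx == rx, "stations out of sync"
    print(f"interval {interval}: both stations tuned to {tx:.1f} MHz")

# An eavesdropper without the shared sequence sees only brief bursts
# scattered across the band and cannot predict the next frequency.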

Although the US Navy did not adopt their technology until the 1960s, the principles of their work live on and are now used in everyday devices such as Wi-Fi, CDMA, and Bluetooth. Spread spectrum systems are also used in the unlicensed 2.4 GHz band and on some walkie-talkies that operate in the 900 MHz portion of the spectrum. Other spread spectrum techniques include direct-sequence spread spectrum (DSSS), time-hopping spread spectrum (THSS), and chirp spread spectrum (CSS).
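
For a feel of how direct-sequence spreading differs from frequency hopping, here is a minimal DSSS sketch in Python: each data bit is combined with a faster pseudo-random chipping code known to both ends, and the receiver recovers the bit by a majority vote against the same code. The 8-chip code and the bit pattern are made up for illustration.

import numpy as np

# Made-up 8-chip pseudo-random spreading code shared by both ends.
CHIPS = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)

def spread(bits):
    """Spread each data bit across len(CHIPS) faster chips (XOR)."""
    return np.array([b ^ c for b in bits for c in CHIPS], dtype=np.uint8)

def despread(chips):
    """Recover each bit by correlating a frame against the chipping code;
    the majority vote also tolerates a few corrupted chips."""
    frames = chips.reshape(-1, len(CHIPS))
    return np.array([int((frame ^ CHIPS).sum() > len(CHIPS) // 2)
                     for frame in frames], dtype=np.uint8)

data = np.array([1, 0, 1, 1], dtype=np.uint8)
print(despread(spread(data)))   # -> [1 0 1 1], the original bits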

In 2008 Elyse Singer wrote the script for an off-Broadway play, Frequency Hopping, that features the lives of Lamarr and Antheil. It won a prize for best new play about science and technology. Hedy and George's pioneering work eventually led to their posthumous induction into the National Inventors Hall of Fame in 2014. 

Sources:
Ecstasy and Me by Hedy Lamarr
https://en.wikipedia.org/wiki/Hedy_Lamarr
The Bad Boy of Music by George Antheil
https://en.wikipedia.org/wiki/George_Antheil
https://en.wikipedia.org/wiki/Ballet_Mécanique
https://www.google.com/patents/US2292387
https://en.wikipedia.org/wiki/Frequency-hopping_spread_spectrum
Suggested Listening:
George Antheil, Ballet Mecanique: Digital Re-creation of the Carnegie Hall Concert of 1927, Conducted by Maurice Peress, Music Masters Inc. 1992. 

The Synthesis of Speech: Part V: from A Clockwork Orange to DMR

6/1/2019


 
In last month's episode I explored the genesis of the first song uttered by a computer, Daisy Bell, and how that song ended up in 2001: A Space Odyssey. In this last installment on the history of speech synthesis I'll track the use of the vocoder in popular music on up to its implementation into the DMR radios that are currently a big buzz in the ham community.

In 1968 synth wizard Robert Moog built the first solid state vocoder. Two years later Moog built another musical vocoder, working with Wendy Carlos. This was a ten-band device inspired by Homer Dudley's original designs. The carrier signal came from a Moog modular synthesizer; the modulator was the input from the microphone. The brilliant application of this instrument made its debut in Stanley Kubrick's film A Clockwork Orange, where the vocoder sang the vocal part from the fourth movement of Beethoven's Ninth Symphony, the section titled "March from A Clockwork Orange" on the soundtrack. It's something I could sit down and listen to on repeat over and over while enjoying a fine glass of moloko vellocet. This was the first recording made with a vocoder, and I find it interesting that the two earliest uses of speech synthesis for music ended up in films made by Kubrick. "Timesteps", an original piece written by Wendy, is also featured on the soundtrack. She had originally intended it as a mere introduction to the vocoder for those who might consider themselves "timid listeners", but Kubrick surprised Wendy by its inclusion in his dystopian masterpiece.

Coming down the road in 1974 was the classic album Autobahn by the German krautrockers Kraftwerk. This was the first commercial success for the power-station of a group. Their previous three albums had been highly experimental, though well worth an evening of listening. Kraftwerk's contribution in the popularization of electronic music remains huge. Besides using commercial gear such as a Minimoog, the ARP Odyssey, and EMS Synthi AKS, Kraftwerk were dedicated homebrewers of their own instruments. Listening to the album now I can imagine the band soldering something together in the back of a Volkswagen Westfalia as they cruise down the highway at 120 km/h on to their next gig. 

Three years later in 1977 Electric Light Orchestra released the album Out of the Blue, much to the delight of discerning listeners everywhere. There is nothing quite like the music of ELO to lift me up out of the melancholy I often find myself in during the middle of winter when spring seems far away. "Mr. Blue Sky" and "Sweet Talkin' Woman" are songs that toggle the happy switches in my brain. When I hear them things brighten up. This is due in no small part to the judicious use of the vocoder. ELO was in love with the vocoder and it can be found littered across their recordings. (As a bit of a phone phreak, another favorite cut is "Telephone Line".)

During the 1980s the vocoder started being used by early hip-hop and rap groups. Dave Tompkins, author of How to Wreck a Nice Beach: The Vocoder from WWII to Hip-Hop, notes an echo of history here: SIGSALY used a vocoder alongside two turntables, and DJs likewise use two turntables to mix and scratch beats while a rap MC drops lyrics over the sounds coming off the vinyl, sometimes processing those vocals through a vocoder. The use of the vocoder continues to the present on hip-hop and jazz fusion albums such as Black Radio (1 & 2) from Robert Glasper Experiment.
 
While the vocoder was enjoying great success in the entertainment industry, its use in telecommunications was still ticking away, a bit more quietly, in the background. Since the 1970s most of the tech in this area has focused on linear predictive coding (LPC), a powerful speech analysis technique that represents the spectral envelope of a digital speech signal in compressed form, using the information from a linear predictive model. When it came out the NSA were among the first to get their paws on it, because LPC can be used for secure wireless, with a digitized and encrypted voice sent over a narrow channel. An early example of this is the Navajo I, a telephone built into a briefcase to be used by government agents. About 110 of these were produced in the early '80s. Several other vocoder systems are used by the NSA for encryption (that we are allowed to know about).

Phone companies like to use LPC for speech compression because it encodes accurate speech at a low bit rate, saving them bandwidth. This had been Homer Dudley's original intention with his first vocoding experiments back in the 1930s. LPC-based codecs are now part of the GSM standard for cellular networks. GSM uses a variety of voice codecs that implement the technology to squeeze 3.1 kHz of audio into 6.5 or 13 kbit/s of transmission. That is why, to my ear, smartphones, for all the cool things they can do with data, apps and GPS, will never sound as good with voice as an old school toll call on copper wires. LPC is also used in VoIP.
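
To show what the analysis half of LPC boils down to, here is a minimal sketch in Python of the classic autocorrelation method with the Levinson-Durbin recursion. The frame length, model order, and test signal are arbitrary choices for illustration; real codecs such as those used in GSM add quantization, excitation coding, and much more on top of this step.

import numpy as np

def lpc_coefficients(frame, order=10):
    """Estimate LPC coefficients for one speech frame using the
    autocorrelation method and the Levinson-Durbin recursion."""
    frame = frame * np.hamming(len(frame))
    # Autocorrelation lags 0..order
    full = np.correlate(frame, frame, mode="full")
    r = full[len(frame) - 1:len(frame) + order]

    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                  # reflection coefficient
        prev = a.copy()
        for j in range(1, i):
            a[j] = prev[j] + k * prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a, err                        # filter coefficients, residual energy

# 20 ms frame of a fake 100 Hz "voiced" signal sampled at 8 kHz
t = np.arange(160) / 8000.0
frame = np.sin(2 * np.pi * 100 * t) + 0.01 * np.random.randn(160)
coeffs, residual = lpc_coefficients(frame)
print(np.round(coeffs, 3))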

LPC has also been used in musical vocoding. Paul Lansky created the computer music piece notjustmoreidlechatter using LPC. A 10th order derivative of LPC was used in the popular 1980s Speak & Spell educational toy. These became popular for experimental musicians to hack in a process known as circuit bending, where the toy is taken apart and the connections re-soldered to make sounds not originally intended by the manufacturer. This technique was pioneered and developed into a high art form by Cincinnati maker and musician Q. Reed Ghazala. Reed's experimental instruments have been built for Tom Waits, Peter Gabriel, King Crimson's Pat Mastelotto, Faust, Chris Cutler, Towa Tei, Yann Tomita, Blur and many other interesting musicians. And not so interesting ones (to me) such as Madonna. A future edition of The Music of Radio will cover his work in detail, but a lot can be found on his website anti-theory.net.

Finally, vocoders are utilized in the DMR radios that are currently gaining popularity among hams around the world. In Ohio the regional ARES groups are being encouraged to utilize this mode as another tool in the box. DMR is an open digital mobile radio standard. DMR, along with P25 Phase II and NXDN, is one of the main competing technologies for achieving 6.25 kHz equivalent bandwidth, and it uses the proprietary AMBE+2 vocoder. This vocoder type uses multi-band excitation to do its speech coding. Besides its use in DMR, the AMBE+2 is also used in D-STAR, Iridium satellite telephone systems, and OpenSky trunked radio systems.

From what I've heard, I don't really care for the audio quality of DMR, any more than I do on cell phones. My ears would rather dig through the mud of the HF bands than listen to the way speech is compressed in these modes. I think the vocoder is better suited to music studios, where it can be used for aesthetic effects. However, with the push to use these radios in ARES, and needing something to play with at OH-KY-IN's digital night on the fourth Tuesday of the month, I do plan on taking the plunge into DMR. And when I do, I will know that every time I have a QSO on the DMR platform I am taking part in a legacy that began with Homer Dudley's insights into the human vocal system as a carrier wave for speech. A legacy that stretches across the fields of telecommunication, cryptology and popular music.

Sources:
Chip Talk: Projects in Speech Synthesis by David Prochnow, Tab Books, 1987.
https://en.wikipedia.org/wiki/Digital_mobile_radio
https://en.wikipedia.org/wiki/Multi-Band_Excitation
https://en.wikipedia.org/wiki/A_Clockwork_Orange_(soundtrack)
...and some other research on the interwebs. 

The Synthesis of Speech: Part IV: A Bicycle Built for Two

6/1/2019


 
Speech synthesis confers a number of benefits to technology end users. It allows individuals with impaired eyesight to be able to operate radios and computers. For those who cannot speak, and who may also have trouble using sign language, speech units such as the device employed by Stephen Hawking allow a person to communicate in ways unthinkable a century ago. For these individuals speech synthesizers play an integral role in adding quality to their day to day lives. On our local repeaters synth voices make announcements about nets and club events, and speech synths read the weather on the NWS frequencies. Beyond these specialized uses, one of the ways everyone can share in the joy of chip talk is through the medium of music.      

The IBM 704 was the first computer to sing. It was introduced in 1954 and 140 units had been sold by 1960. The programming languages LISP and FORTRAN were first written for this large machine, which used vacuum tube logic circuitry. Bell Telephone Laboratories (BTL) physicist John Larry Kelly coaxed the 704 into singing Daisy Bell, aka A Bicycle Built for Two, using a vocoder program he wrote for the machine.

Lovely as the a cappella computer was, it was deemed in need of instrumental accompaniment. For this part of the song the expertise of fellow BTL employee Max Vernon Mathews was sought out. Max was an electrical engineer whose first love, music, led him to become a pioneer in electronic and computer music. In 1957 he wrote the first computer program for sound generation, MUSIC, also used on the IBM 704. The accompaniment to the voice portion of Daisy Bell was programmed by Max in 1961 using the IBM 7090.

The IBM 7090 was the transistorized version of the 709 vacuum tube mainframe. The 7090 series was designed for "large-scale scientific and technological applications." The first of the 7090s was installed in late 1959 at a price tag of close to $3 million. Adjusted for inflation, that's a whopping $23 million buckaroos today. Besides its musical capabilities, the 7090's other accomplishments included being used for the control of the Gemini and Mercury space flights. IBM 7090s were also used by the Air Force for the Ballistic Missile Early Warning System up until the 1980s. Daniel Shanks and John Wrench used one to calculate the first 100,000 digits of pi. Yet none of the above uses compare, in my mind, to the beauty of the IBM 704 joining forces with the IBM 7090 on the song Daisy Bell.

Another computer, HAL 9000, still gets most of the credit for this electronic version of Daisy Bell. Arthur C. Clarke, author of 2001: A Space Odyssey, happened to be visiting his friend and colleague John Pierce at BTL when John Larry Kelly was giving his demonstrations of speech synthesis with the IBM 704. He was so fascinated by witnessing this computational marvel that six years later he wrote that version of Daisy Bell into his screenplay, sung by HAL in the middle of the machine's climactic mental breakdown. The song was released on the vinyl platter "Music from Mathematics" put out by the Decca label a handful of decades ago.

Daisy Bell went on to have a notable reprise on the Commodore 64 when Christopher C. Capon wrote his program "Sing Song Serenade". The sounds for his version were played directly on the hardware by rapidly moving the read/write head of the floppy drive, and the resulting audio was emitted from the disk drive itself.
 
Max Mathews continued to make strong contributions to the humanities in the realms of music and technology. In 1968 he developed Graphic 1, a graphical system that used a light pen for drawing figures that could be converted into sound. In 1970 Mathews developed GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment) with F. R. Moore. GROOVE  was the first fully developed music synthesis system for interactive composition and realtime performance. It used 3C/Honeywell DDP-24 (or DDP-224) minicomputers.

An algorithm written by Mathews was used by Roger M. Shepard to synthesize Shepard Tones. These tones (named after Roger) consist of a superposition of sine waves separated by octaves. When the base pitch of the tone is played moving upward or downward, it is known as the Shepard Scale. Playing this scale creates an auditory illusion of a tone that continually ascends or descends in pitch, yet seems to get no higher or lower. It is the musical version of a barber pole or of the Penrose stair, a type of impossible object in geometry, made famous in the drawing Ascending and Descending by M.C. Escher.  
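
Since the recipe for a Shepard tone is simple enough to try at home, here is a minimal sketch in Python that stacks octave-spaced sine waves under a bell-shaped loudness envelope and steps the base pitch to build the illusory ever-rising scale. The sample rate, number of octaves, and envelope width are arbitrary choices, not values from Shepard's or Mathews' work.

import numpy as np

SAMPLE_RATE = 44100
ENV_CENTER = np.log2(440.0)   # envelope peak fixed near 440 Hz (assumed)

def shepard_tone(base_hz, duration=0.25, octaves=6):
    """One Shepard tone: sine components an octave apart, weighted by a
    bell-shaped envelope over log-frequency so the lowest and highest
    components fade in and out inaudibly."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    tone = np.zeros_like(t)
    for k in range(octaves):
        f = base_hz * 2 ** k
        weight = np.exp(-0.5 * ((np.log2(f) - ENV_CENTER) / 1.5) ** 2)
        tone += weight * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))

# A "rising" Shepard scale: step the base pitch up a semitone at a time.
# Looped, it seems to climb forever without ever getting higher.
scale = np.concatenate([shepard_tone(55.0 * 2 ** (s / 12)) for s in range(12)])
# The result can be written out with, e.g.,
# scipy.io.wavfile.write("shepard.wav", SAMPLE_RATE, (scale * 32767).astype(np.int16))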

Max also made a controller, called the Radio-Baton or Radiodrum, used to conduct and play electronic music. Developed at BTL in the 1980s, it was originally a kind of three-dimensional mouse. The device has no inherent sound of its own, but produces control signals that are used to trigger sounds, sound-production, effects and the like. In that respect the Radio-Baton is similar to a theremin. Capacitive sensing is used to locate the position of the conductor's baton, or the mallets in the case of the drum. The two mallets are antennas transmitting on slightly different frequencies. The drum surface, also electronic, acts as another set of antennas. The combination of these antenna signals is used to derive X, Y and Z coordinates, and these are interpreted according to the assigned musical parameters.

Many of these mainframe musical programs are now available in the program Max (named in Mathews' honor), which can run on a laptop.

ARTIKULATION

​

Besides the use of Daisy Bell in the soundtrack for 2001, director Stanley Kubrick used a wide range of work by modern composers. The piece Atmospheres, written by Gyorgy Ligeti in 1961, was used for the scenes of the monolith and those of deep space. Ligeti's earlier electronic work Artikulation, though not used in the film, shares an interesting connection with some of the ideas behind speech synthesis. Artikulation was composed in 1958 at the Studio for Electronic Music of West German Radio in Cologne with the help of Cornelius Cardew, an assistant of Karlheinz Stockhausen (whose works involving shortwave radios will be explored in time). The piece was composed as an imaginary conversation of multiple ongoing monologues, dialogues, and many voices in arguments and chatter. In it Ligeti created a kind of artificial polyglot language full of strange whispers, enunciations and utterances.

Sources:
Music from Mathematics: Played by IBM 7090 Computer to Digital Sound Transducer,  Decca LP 9103.
https://en.wikipedia.org/wiki/IBM_704
https://en.wikipedia.org/wiki/IBM_7090
https://en.wikipedia.org/wiki/Daisy_Bell
https://en.wikipedia.org/wiki/Shepard_tone
https://en.wikipedia.org/wiki/Max_Mathews
https://en.wikipedia.org/wiki/John_Larry_Kelly,_Jr.
https://en.wikipedia.org/wiki/Radiodrum
Gyorgy Ligeti: Continuum / Zehn Stucke fur Blaserquintett / Artikulation / Glissandi / Etude fur Orgel / Volumina, Wergo 60161, 1988. 

The Synthesis of Speech: Part III : AUDREY

6/1/2019


 
This installment continues the exploration of the development of speech synthesis. So far I've investigated the invention of the Vocoder and how it was used in the SIGSALY program in WWII. In this episode I explore the other side of the speech synthesis coin, speech recognition. Without the ability of machines to recognize speech on the one hand and to synthesize it on the other, the wunderkinder of today's consumer electronics, Siri, Dragon and Alexa, would not be possible. With both in place humans can now speak, and sometimes yell with exasperation, to a wide range of interconnected devices, and our smart phones and Echo Dots will speak back to us. As developments in Artificial Intelligence take off, the little computer in your pocket may soon speak up for itself and yell back.
​       
In a way it could be said that speech recognition systems began in the 19th century when sound waves were first converted into electrical signals. By 1932 Harvey Fletcher was researching the science of speech perception at that temple of telecommunications, Bell Laboratories. His contributions in this area showed that the features of speech are spread over a wide frequency range. He also developed the articulation index to quantify the quality of a speech channel. Articulation indexes are used in measuring the effectiveness of hearing aids and in industrial settings. Harvey is credited with the invention of an early electronic hearing aid, and is notable for overseeing the creation of the first stereophonic recordings and live stereo sound transmissions, for which he was dubbed the "father of stereophonic sound".

Interest in speech recognition didn't end with Fletcher. In 1952, over half a century before Siri or Alexa could respond to a voiced question of where to find the best noodle shop in town (or when the end of the world will be), AUDREY was on the scene. She derived her name from her special power: Automatic Digit Recognition. She was a collection of circuits capable of perceiving numbers spoken into an ordinary telephone. Due to the technological limits of the time she could only recognize the spoken digits "0" through "9". When the digits were uttered into the mic on the handset, AUDREY would respond by illuminating a corresponding bulb on the front panel of the device. It sounds simple, but this marvel was only achieved after overcoming steep technical hurdles.

S. Balashek, R. Biddulph, and K. H. Davis were the creators of AUDREY. One of the obstacles they faced was to craft a system capable of recognizing the same word when it is said with subtle variations. The spoken digit "7", for example, when said multiple times by even one person, is subject to slight differences. Duration, intonation, quality, volume and timing all change the sound of the word with each individual utterance. To recognize speech amidst all these variables AUDREY focused on the parts of the words that have the least variation. In this way the machine did not need an exactly spoken match. Roberto Pieraccini put it this way, saying there is less variety "across different repetitions of the same sounds and words than across different repetitions of different sounds and words."
The exact matches came from the part of speech known as formants. A formant is a harmonic of a note that is augmented by the resonance of the vocal tract when speaking or singing. The information that humans require to distinguish speech sounds can be represented in a spectrogram by peaks in the amplitude/frequency spectrum. AUDREY could locate the formant in the spectrum of each utterance and use that to make a match.
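
AUDREY did this with analog circuitry, but the underlying idea of picking out spectral peaks can be sketched in a few lines of Python. The sample rate, smoothing window, and test "vowel" below are assumptions for illustration, and real formant trackers are considerably more careful than this peak-picking toy.

import numpy as np
from scipy.signal import find_peaks

def rough_formants(frame, sample_rate=8000, n_peaks=3):
    """Crude formant estimate: pick the largest peaks of a smoothed
    magnitude spectrum of one voiced frame, a toy stand-in for the kind
    of formant tracking AUDREY's circuits performed in hardware."""
    windowed = frame * np.hamming(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed, n=1024))
    # Smooth to merge individual harmonics into broader resonance humps
    smoothed = np.convolve(spectrum, np.ones(15) / 15, mode="same")
    freqs = np.fft.rfftfreq(1024, d=1.0 / sample_rate)
    peaks, _ = find_peaks(smoothed, distance=30)    # keep peaks ~230 Hz apart
    strongest = peaks[np.argsort(smoothed[peaks])[::-1][:n_peaks]]
    return sorted(freqs[strongest])

# Fake "vowel": resonances near 700 Hz and 1200 Hz plus a little noise
t = np.arange(400) / 8000.0
frame = np.sin(2*np.pi*700*t) + 0.7*np.sin(2*np.pi*1200*t) + 0.05*np.random.randn(400)
print(rough_formants(frame))   # roughly [700.0, 1200.0, ...]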

AUDREY also required that there be pauses between words. She couldn't isolate or separate individual words when said in a string. In addition designated talkers had to be assigned, talkers who could produce the specific formants, otherwise she might not recognize a digit. For each speaker the reference patterns of the formants drawn electronically and stored within her memory had to be fine tuned. Yet despite all the limitations around her use, the researchers proved that building a machine capable of recognizing human speech wasn't a pipe dream. 
   
AUDREY was expensive because she was state of the art and all analog. The six-foot high relay rack she kept occupied with all her vacuum-tube circuitry required a lot of upkeep. And she drew a lot of power that really hiked up the electric bill. The invention never really went anywhere in terms of being used as a tool in Ma Bell's vast monopoly. It could have been used by toll operators or wealthy customers of the telephone to voice dial, but manual dialing was simple, fast, and cheap.

Creating a system that had uniform recognition of words as uttered by multiple people was a dream that had to be fulfilled by other researchers down the line. They built on the sweat equity and foundation of those who went before. The fact that a machine can be made to decipher strange human vocalizations at all is sheer wonder. While others may be fond of Siri, Dragon and Alexa it is AUDREY who will always remain in my heart.

Sources:
The Voice in the Machine: Building Computers That Understand Speech by Roberto Pieraccini, MIT Press, 2012
https://en.wikipedia.org/wiki/Speech_recognition
https://en.wikipedia.org/wiki/Harvey_Fletcher
https://en.wikipedia.org/wiki/Articulation_Index 
https://astaspeaks.wordpress.com/2014/10/13/audrey-the-first-speech-recognition-system/     

The Synthesis of Speech: Part II : Sigsaly

6/1/2019


 
This edition of the Music of Radio continues to explore developments around electronically generated speech. Homer Dudley, an engineer and acoustics researcher who worked for Bell Telephone Laboratories (BTL), made significant contributions to this field beginning with his invention of the Vocoder and Voder. The development of these two instruments was detailed in last month's column. Now I will turn my attention to how the Vocoder was employed in encrypting the transmissions of high-ranking officials during WWII for the SIGSALY program. SIGSALY, by the way, is simply a cover name for the system and is not an acronym.
​
In 1931 BTL had developed the A-3 scrambler that was used by Roosevelt and Churchill, but the security of this device was eventually compromised by Germans at a radio post in South Holland who had been intercepting the Prime Minister's telephone calls. The A-3 had worked with the Trans-Atlantic Telephone by splitting speech up into different bands, but the result wasn't difficult to reassemble, as the Germans proved in 1941, and the situation surrounding communications security became intolerable to the Allies.
 
In 1942 the Army contracted BTL to assist with the communication problem and create "indestructible speech", speech that could withstand attempts at code breaking. From this effort the revolutionary 12-channel SIGSALY system was born. To create SIGSALY, workers sifted through over 80 patents in the general area of voice security. None of these fit the needs of the Allies, but Homer Dudley's Vocoder did, and it formed the basis of the system. For SIGSALY a twelve-channel Vocoder system was used. Ten of the channels measured the power of the voice signal in a portion of the voice frequency spectrum (generally 250-3000 Hz). Two channels were devoted to "pitch" information and to whether or not unvoiced (hiss) energy was present. The Vocoder enciphered the speech as it went out over phone or radio. In order for it to be deciphered, an audio crypto key was needed at each end of the conversation. This came in the form of vinyl records.

From the standpoint of music history it is interesting to note, as Dave Tompkins did in his book How to Wreck A Nice Beach: The Vocoder from WWII to Hip-hop, that the SIGSALY system employed two turntables alongside the microphone/telephone. The classified name for this vinyl part of the operation was SIGGRUV. The turntables were used to solve the problem of needing a cryptographic key. They played vinyl records produced by the Muzak Corporation, a company famous for the creation of elevator music. The sounds on these records weren't aimed at soothing weekend shoppers or people sitting in waiting rooms. Muzak had been contracted to press vinyl that contained random white noise, like channel 3 on an old television set. The noise was created by the output of very large mercury-rectifier tubes that were four inches in diameter and over a foot high. These generated wide band thermal noise that was sampled every twenty milliseconds. The samples were then quantized into six levels of equal probability. The level information was converted into channels of a frequency-shift-keyed audio tone signal recorded onto a vinyl master. From the master only three copies of a key segment were made. If these platters had been commercial entertainment masters, thousands would have been pressed from their blueprint. If any SIGGRUV vinyl still exists, and for security reasons it shouldn't, those grooves are critically rare.
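
The key-generation step is easy to mimic in miniature. Below is a minimal Python sketch that samples stand-in "thermal noise" and quantizes it into six equally probable levels, the same statistical trick used to cut the SIGGRUV records; the random generator, sample count, and seed are illustrative assumptions, not details of the real system.

import numpy as np

# A toy version of the SIGGRUV key idea: sample wideband noise every 20 ms
# and quantize each sample into one of six equally probable levels. Real
# SIGSALY used thermal noise from mercury-vapor tubes; np.random stands in.
rng = np.random.default_rng(1943)
noise = rng.normal(size=5000)                 # stand-in for tube noise samples

# Thresholds at the 1/6, 2/6, ... quantiles make all six levels equiprobable
thresholds = np.quantile(noise, [i / 6 for i in range(1, 6)])
levels = np.digitize(noise, thresholds)       # integer levels 0..5

counts = np.bincount(levels, minlength=6)
print(counts / len(noise))                    # each fraction is close to 1/6
# Each level would then be sent as one of six FSK tones cut into the record.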

It had to be ensured that no pattern could be detected, so the records had to be random noise. If the equipment had somehow been duplicated by the Axis powers, the communications would still have been uncompromised, as they required the crypto key of the matching vinyl at each terminal. This made the transport of these records, via armored truck, perhaps the most closely guarded shipment of grooves since Edison invented the phonograph. Just as the masters were destroyed after making three keys, each vinyl key was only ever to be played once, as operators were instructed to burn after playing. The official instruction read, "The used project record should be cut-up and placed in an oven and reduced to a plastic biscuit of 'Vinylite'". As another precaution against the grooves falling into enemy hands, the turntables themselves had a self-destruct mechanism built into them that could be activated if a terminal was compromised. Thinking of all this sheds new light on the idea of a DJ battle.

Keeping the turntables at two different terminals across the globe synchronized was another technical hurdle that BTL overcame. If a needle jumped or the system went out of sync, only garbled speech was heard. At the agreed upon time, say 1200 GMT, operators listened for the click of the phonograph being cued to the first groove. The turntables were started by releasing a clutch on the synchronous motor that kept the turntable running at a precise speed. Fine adjustments were made using 50-Hertz phase shifters (Helmholtz coils) to account for delays in transmission time. The operators would listen for full quieting of the audio as synchronization was established. Oscilloscopes and HF receivers were also used to keep systems locked to international time.
 
A complete SIGSALY system contained about forty racks of heavy equipment composed of vacuum tubes, relays, synchronous motors, turntables, and custom made electromechanical equipment. In the pre-transistor era all of this gear required a heavy load of power so cooling systems were also required to keep it all from getting fried. The average weight of a set up was about 55 tons. 
 
The system passed Alan Turing's inspection (if not his test), as he had been briefly involved with the project on the British side. On July 15, 1943 the inaugural connection was established between the Pentagon and a room in the basement below Selfridges department store in London. Eventually a total of twelve SIGSALY encipherment terminals were established, including ones in Paris, Algiers, Manila, Guam, and Australia, and one on a barge that ended up in Tokyo Bay. In the year 1945 alone the system trafficked millions of words between the Allies.

To keep all of this operational a special division of the Army Signal Corps was set up, the 805th Signal Service Company. Training commenced in a school set up by BTL, and members were sent to various locations. Their tasks required security clearances and a firm grasp of the cutting-edge technology they were tasked to operate and maintain. For every eight hours of operation the SIGSALY systems required sixteen hours of maintenance.
In putting the system together eight remarkable engineering "firsts" were achieved. A review conducted by the Institute of Electrical and Electronics Engineers in 1983 lists them as follows:

1. The first realization of enciphered telephony
2. The first quantized speech transmission
3. The first transmission of speech by Pulse Code Modulation (PCM)
4. The first use of companded PCM
5. The first examples of multilevel Frequency Shift Keying (FSK)
6. The first useful realization of speech bandwidth compression
7. The first use of FSK-FDM (Frequency Shift Keying-Frequency Division Multiplex) as a viable transmission method over a fading medium
8. The first use of a multilevel "eye pattern" to adjust the sampling intervals (a new, and important, instrumentation technique)

To do all these things required precision and refinement in new technology. SIGSALY has left the world with a rich inheritance that spans developments in cryptology, digital communications, and even left its mark on music.

Sources:
How to Wreck A Nice Beach: The Vocoder from WWII to Hip-hop: The Machine Speaks by Dave Tompkins, Melville House, 2010
SIGSALY: The Start of the Digital Revolution by J.V. Boone and R.R. Peterson, retrieved at:
https://www.nsa.gov/about/cryptologic-heritage/historical-figures-publications/publications/wwii/sigsaly-start-digital.shtml

The Synthesis of Speech: Part 1: The Voder & Vocoder

6/1/2019


 
Who doesn't remember changing their voice as a kid by talking into a fan? Or sneaking off with balloons at a party or dance to inhale the helium and try to talk like a character from a cartoon? One year for Halloween I got a cheap voice changer toy that had three settings and I remember playing with it for hours. But voice changers weren't always so cheap, and the original was room-sized instead of handheld. The initial reason behind its development had nothing to do with keeping kids amused and was not driven by aesthetic concerns. It was only after Ma Bell and the military had wrapped up their use for the Vocoder that it came to be appreciated for its musical qualities, first by experimental electronic musicians, and later by pop, rock and rap artists. The next few editions of the Music of Radio series delve into the story of electronic speech synthesis, from the Vocoder, to the Voder, and on to the first text-to-speech computer programs written for gargantuan mainframes. It takes us deep into the stacks of the Bell Laboratory archives and into the belly of WWII crypto communications before emerging in the 1960s and '70s when the stage was set for mind-melting explorations in sonic psychedelia. Just as the Vocoder is still being used for artistic effects, the original ideas behind it, compression and bandwidth reduction, continue to be used in new hardware and software applications for radio and telecommunications.

Homer Dudley, the inventor of the Vocoder, was an electronic and acoustic engineer whose primary area of focus revolved around the idea that human speech is fundamentally a form of radio communication. In his white-paper The Carrier Nature of Speech he wrote that "speech is like a radio wave in that information is transmitted over a suitably chosen carrier." This realization came to Dudley in October of 1928 when he was otherwise out of commission in a Manhattan hospital bed. Discoveries are often made from playfully messing around with things, either in horseplay or boredom, and Dudley was keeping himself entertained just as a kid might by making weird sounds with his voice through changing the shape of his mouth. He had the insight that his vocal cords were acting as a transmitter of a periodic waveform. The nose and throat were the resonating filters while the mouth and tongue produced harmonic content, or formants to use linguistic lingo. He also observed that the frequencies of his voice vibrated at a faster rate than the mouth itself moved. 

These insights went on to have implications for the work he pursued at Bell Laboratories, a true idea factory, where money and resources were thrown at any old project that might bear the AT&T monopoly some form of fruit or further advantage in their already sprawling playground of wires and exchanges. Once recovered and back at work, Homer thought his discovery might have an application in the area of compression, and he made it his ambition to free up some of the phone company's precious bandwidth, hoping to pack more conversations onto the copper lines. He was given a corner and allowed to go work in it, devoting himself to his obsession.

He exploited his research in the invention of the Vocoder, or VOice CODER, first demonstrated at Harvard in 1936. It works by measuring how the spectral characteristics of speech change over time. The signal going into the mic is divided by filters into a number of frequency bands. The signal present in each frequency band gives a representation of the spectral energy. This allows the Vocoder to reduce the information needed to store speech to a series of numbers. On the output end, to a speaker or headphone, the Vocoder reverses the process to synthesize speech electronically. Information about the instantaneous frequency of the original voice signal is discarded by the filters, giving the end result its unique robotic and dehumanized character. The amplitude of the modulator for each of the individual analysis bands generates a voltage that controls the amplifier in the corresponding carrier band. The frequency components of the modulated signal are mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands. Because the Vocoder does not employ a point-by-point recreation of the wave, the bandwidth used for transmission can be significantly reduced.

There is usually an unvoiced band or sibilance channel on a Vocoder for frequencies outside the analysis bands for typical talking, but still important in speech. These are words starting with the letters s, f, ch or other sibilant sounds. These are mixed with the carrier output for increased clarity, resulting in recognizable speech but still roboticized. Some Vocoders have a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency.
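
The analysis/synthesis loop described above can be sketched in a few dozen lines. Below is a minimal ten-band channel vocoder in Python in the spirit of Dudley's design; the sample rate, band edges, filter orders, and toy test signals are all assumptions for illustration, not values from any historical instrument, and it omits the pitch and sibilance channels a full vocoder would have.

import numpy as np
from scipy.signal import butter, sosfilt

SR = 16000  # sample rate in Hz (assumed)

def band_envelope(signal, lo, hi):
    """Band-pass one analysis band of the modulator, then follow its
    amplitude envelope with a low-pass filter."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=SR, output="sos")
    band = sosfilt(sos, signal)
    smooth = butter(2, 30, btype="lowpass", fs=SR, output="sos")
    return sosfilt(smooth, np.abs(band))

def simple_vocoder(modulator, carrier, n_bands=10):
    """Ten-band channel vocoder: the voice's band-by-band energy shapes
    the same frequency bands of the carrier."""
    edges = np.geomspace(200.0, 6000.0, n_bands + 1)  # assumed band edges
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        env = band_envelope(modulator, lo, hi)
        sos = butter(4, [lo, hi], btype="bandpass", fs=SR, output="sos")
        out += env * sosfilt(sos, carrier)
    return out / np.max(np.abs(out))

# Toy demo: "voice" = slowly amplitude-modulated noise, carrier = buzzy sawtooth
t = np.arange(SR) / SR
voice = np.random.randn(SR) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
buzz = 2.0 * (110.0 * t % 1.0) - 1.0
robot = simple_vocoder(voice, buzz)   # write out or play back to hear the effect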

To better demonstrate the speech synthesis ability of the decoder part of his invention, Dudley created another instrument, the Voder (Voice Operating Demonstrator). This was unveiled at the World's Fair in New York in 1939, where Ray Bradbury was among the attendees who witnessed it firsthand. The Voder synthesized speech by creating the electronic equivalent of a vocal tract. Oscillators and noise generators provided a source of pitched tone and hiss. A 10-band resonator filter controlled by a keyboard converted the tone and hiss into vowels, consonants and inflections. A set of extra keys allowed the operator to make plosive sounds such as "p" and "d" as well as the affricative sounds of "j" in "jaw" and "ch" in "cheese". Only after months of practice with this difficult machine could a trained operator produce something recognizable as speech.

At the world fair Mrs. Helen Harper, who was noted for her skill, led a group of twenty operators in demonstrations of the Voder where people from the crowd could come up and ask the operator to make the Voder say something. 

Homer Dudley had great success in his aim of reducing bandwidth with the Vocoder. It could chop voice frequencies into ten bands and squeeze them into about 300 hertz, a significant reduction of what was required for a phone call back in the day. Yet it never got used for that purpose. The large size of the equipment made it impractical to install in homes and offices across the country, even if it created more channels on the phone lines. For a time Dudley worked at marketing the Vocoder to Hollywood for use in audio special effects. It never made much of an impact there, as other voice changing devices such as the Sonovox started being used in radio jingles and in cartoons. Before it could be discovered by musicians, Homer Dudley's tool for voice compression had to be put into service during America's efforts in WWII, where it was used as part of the SIGSALY encryption program. The details surrounding the coding of the voices of MacArthur and Churchill will be explored in next month's column.

 Sources:
How to Wreck A Nice Beach: The Vocoder from WWII to Hip-hop: The Machine Speaks by Dave Tompkins, Melville House, 2010
The Carrier Nature of Speech by Homer Dudley, The Bell System Technical Journal, Vol. 19, No. 4, October 1940
Fundamentals of Speech Synthesis by Homer Dudley, Journal of the Audio Engineering Society, Vol. 3, No. 4, October 1955
https://en.wikipedia.org/wiki/Vocoder
https://en.wikipedia.org/wiki/Voder

Lev Theremin and the Music of the Ether (Part II)

6/1/2019


 
Lev Theremin's skill at invention was not lost on the Soviet machine. Not long after his musical instrument was patented, the radio watchman security device it was based on started being employed to guard the treasures of gold and silver Lenin had plundered from church and clergy. The watchman was also being used to protect the state bank. Setting up and installing these early electronic traps took him away from his primary interest in scientific research. Just as he was approaching the limits of his frustration his mentor at the Institute gave him a new problem to solve, that of "distance vision" or the transmission and reception of moving images over the airwaves. The embryonic idea for television was in the air at the time but no one had figured out how to make it a reality. The race was on and the Soviets wanted to be first to crack the puzzle.  
​   
Having researched the issue extensively in the published literature, Lev was ready to apply the powers of his mind toward a solution. In the Soviet Union parts weren't always readily available. Some were smuggled in, and others had to be scavenged from flea markets, the latter a process very familiar to radio junkies. By 1925 he had created a prototype from his junk box using a rotating disk with mirrors that directed light onto a photo cell. The received image had a resolution of sixteen lines, and it was possible to make out the shape of an object or person but not identifiable details. Other inventors in Russia and abroad were also tackling the issue. Fine tuning the instrument over the next year, he doubled the resolution to 32 lines and then, using interlaced scanning, to 64. Having created a rudimentary "Mechanism of Electric Distance Vision", he demonstrated the device and defended his thesis before students and faculty from the physics department at the Polytechnic Institute. Theremin had built the first functional television in Russia.
    
After this period Lev embarked for Europe and then America, where he lived for just over a decade engaging the public, generating interest in his musical instrument, and doing work with RCA. As Hitler gathered power he grew anxious about the encroaching war and returned home to the Soviet Union in 1938. He barely had time to settle back in when he was sent to the Kolyma gold mines for forced labor for the better part of a year. This was done as a way of breaking him, a fear tactic that could be held over his head if he didn't cooperate: do what we say or go back to the mines. The state had better uses for him. He was picked up by the police overlord Lavrenti Beria, who sent him to work in a secret laboratory that was part of the Gulag camp system. One of his first jobs was to build a radio beacon whose signals would help track down missing submarines, aircraft and smuggled cargo.

With WWII winding to a close the Cold War was dawning and Russia was on the offensive, trying to extend its reach and gather intelligence on such lighthearted subjects as the building of atomic bombs. In their efforts at organized espionage the Soviets sifted for all the data they could get from foreign consulates. Having succeeded with his beacon Lev was given another assignment. This time the goal wasn't to track down cargo or vehicles but to intercept U.S. secrets from inside Spaso House, the residence of the U.S. Ambassador. Failure to do the bidding of his boss would mean a return to the mines. His boss had high demands for the specifications of the bug Lev was to plant. The proposed system could have no microphones and no wires and was to be encased in something that didn't draw attention to itself.
  
The bug ended up being put inside a wooden carving of the Great Seal of the United States and was delivered by a delegation of Soviet Pioneers (their version of the Boy Scouts) on July 4, 1945. Deep inside this "gesture of friendship" was a miniature metal cylinder with a nine inch antenna tail. The device was passive and was not detected by the X-rays used at Spaso House in their routine scans. It only activated when a 330 MHz microwave beam was directed at the seal from a nearby building. There was a metal plate inside the cylinder that, when hit with the beam, resonated as a tuned circuit. Below the beak of the eagle the wood was thin enough to act as a diaphragm, and the vibrations from it caused fluctuations in the capacitance between the plate and the diaphragm, creating a microphone. The modulations this produced were picked up by the antenna and then re-radiated out to the receiver at a Soviet listening post. Used judiciously, this gave the Soviets intelligence that aided them in a number of strategic decisions. The Great Seal bug is considered a grandfather of RFID technology.

This wasn't the last time Lev was asked to develop wireless eavesdropping technology. For the next job his overseers upped the ante on him: no device could be planted in the site targeted for surveillance. The operation was code-named Snowstorm. Lev used his interest in optics to figure out a method. Knowing that the window panes of a room vibrate slightly when people talk, he needed a way to detect and read those vibrations from a distance. Resonating glass contains many simultaneous harmonics, and it would be difficult to find the place of least distortion from which to get a voice signal. Then there was the obstacle of reinterpreting the signal back into a speech pattern. Using an infrared beam focused on the optimum spot and catching its reflection in an interferometer with a photo element, he was able to pick up conversations. Back at his monitoring post he used his equipment and skills to reduce the large amounts of noise in the signal.
A few years later Lev was released from his duties at the lab, but was kept on a tight leash and not allowed to leave Moscow.

HOW TO BUILD A THEREMIN FROM THREE AM RADIOS 
    
For those amateurs wishing to build and play a theremin there are many commercial kits available on the market. However, a simple theremin can be built using just three AM radios. If you don't already have these lying around the house they can easily be obtained from your local thrift store.

One of the radios will be a fixed transmitter, another a variable transmitter, and the third will be the receiver. The volume knobs on the fixed and variable transmitters can be turned all the way down, as they are just used to produce the intermediate frequency oscillations that will be picked up by the receiver. The receiver radio should be set to an unused frequency in the upper range of the AM band, such as 1500 kHz. If that frequency is in use, tune to a nearby spot where only static is heard. The fixed and variable transmitters should then be tuned 455 kHz below where your receiver is set, in this example 1045 kHz. 455 kHz is a common local oscillator offset, although there can be variations. As these frequencies are set the receiver should start to make a whistling sound, the product of a beat frequency.

The next step is to open up the variable radio and look for the variable capacitor, often housed in white plastic with four screws. Find the terminal that takes the station out of tune, and attach the antenna to it with an alligator clip, or solder a wire from the antenna to that oscillator terminal. The controls will then have to be adjusted slightly again. Tune the fixed transmitter until the receiver starts whistling and have fun playing with the sounds it creates.

Sources:
Theremin: Ether Music and Espionage by Albert Glinsky, University of Illinois Press, 2000
https://en.wikipedia.org/wiki/Leon_Theremin
How to Make a Basic Theremin by eltunene: https://app.box.com/s/kgdstzwaoc/1/17284427/181802859/1


Lev Theremin and the Vibrations of the Ether (Part 1)

6/1/2019


 
The sound of the theremin has become synonymous with the spectral and spooky sci-fi horror flicks of the 1940s and '50s. Its trilling oscillations conjure up images of flying saucers made from hub caps and fishing line. When most folks hear and see the theremin they tend to think of it as little more than a novelty or scientific amusement. While it may have fallen out of favor in horror movie soundtracks, it has remained a mainstay within the field of electronic music. It is distinguished among all musical instruments by being the only one that is played without touching the instrument itself. To the radio and electronics buff the theremin is worth exploring as a way of learning about electromagnetic fields and the creative use of the heterodyning effect for artistic purposes. Whether or not the quivering sounds the instrument pulls out of the ether are appealing to a listener is a matter of individual preference.
​
The inventor of the theremin, or etherphone as it was first dubbed, was Lev Termen. He was born in Russia in 1896, a few years before Marconi achieved wireless telegraphy. As a young boy he spent his time reading the family encyclopedia and was fascinated by physics and electricity. At five he had started playing piano, and by nine had taken up the cello, an instrument that had an important influence on the way theremins are played. After showing promise in class he was asked to do independent research with electricity at the school physics lab. There he began an earnest study of high-frequency currents and magnetic fields, alongside optics and astronomy. It was around this time Lev met Abram Ioffe, a rising physicist under whom he would work in a variety of capacities. Yet his studies in atomic theory and music were overshadowed by the outbreak of WWI. In 1916 he was summoned by the draft and moved to Petrograd, where his electrical experience saved him from the front lines. He was placed in a military engineering school and landed in the Radio Technical Department to do work on transmitters and oversee the construction of a powerful and strategic radio station. In the course of the war the station had to be disassembled, and Lev oversaw the blowing up of a 120 meter antenna mast. Another wartime duty was as a teacher instructing other students to become radio specialists.

As Lev's reputation grew among engineers and academic scientists he was eventually asked to go and work with Abram Ioffe at the Physico-Technical Institute, where he became the supervisor of a high-frequency oscillations laboratory. Lev's first assignment was to study the crystal structure of various objects using X-rays. At this time he was also experimenting with hypnosis, and Ioffe suggested he take his findings on trance-induced subjects to psychologist Ivan Pavlov. Though Lev resented radio work, preferring his beloved exploration of atomic structures, Ioffe pushed him to work more systematically with radio technology. Now in the early 1920's Lev busied himself thinking of novel uses for the audion tube.

His first project explored the human body's natural electrical capacitance, using it to set up a simple burglar alarm circuit that he called the "radio watchman". The device was made by using an audion as a transmitter at a specific high frequency directed to an antenna. This antenna only radiated a small field of about sixteen feet. The circuits were calibrated so that when a person walked into the radiation pattern it would change the capacitance, cause a contact switch to close, and set off an audible signal. He was next asked to create a tool for measuring the dielectric constant of gases in a variety of conditions. For this he made a circuit and placed a gas between the two plates of a capacitor. Changes in the gas under varying temperature registered as the movement of a needle on a meter. The apparatus was so sensitive it could be set off by the slightest movement of the hand. It was refined by adding an audion oscillator and tuned circuit. The harmonics generated by the oscillator were filtered out to leave a single frequency that could be listened to on headphones.
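The physics behind both the radio watchman and the gas-measuring circuit is the same: adding a little capacitance to a tuned circuit lowers its resonant frequency. Here is a rough sketch of the scale of the effect, using made-up component values rather than anything from Theremin's actual apparatus.

```python
import math

# Illustration of why a nearby hand disturbs an LC oscillator:
# f = 1 / (2 * pi * sqrt(L * C)). The values below are made up
# purely to show the scale of the effect, not Theremin's circuit.

L = 1e-3             # 1 mH coil
C_circuit = 280e-12  # 280 pF tuning capacitance

def resonant_freq(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f_rest = resonant_freq(L, C_circuit)   # roughly 300 kHz with these values

# A hand near the antenna adds a few picofarads of body capacitance.
for extra_pf in (0, 1, 3, 5):
    f = resonant_freq(L, C_circuit + extra_pf * 1e-12)
    print(f"+{extra_pf} pF: {f/1000:9.2f} kHz  (shift {f_rest - f:7.1f} Hz)")
```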

As Lev played with this tool he noticed again how his movements near the circuitry registered as variations in the density of the gas, now measured by a change in pitch. Closer to the capacitor the pitch became higher; further away it became lower. Shaking his hand created vibrato. His musical self, long dormant under the influence of communism, came alive and he started to use this instrument to tease out the fragments he loved from his classical repertoire. Word quickly traveled around the institute that Theremin was playing music on a voltmeter. Ioffe encouraged Lev to refine what he had discovered, the capacitance of the body interacting with a circuit to change its frequency, into an instrument. To increase the range and have greater control of the pitch he employed the heterodyning principle. He used two high-frequency oscillators to generate the same note in the range of 300 kHz, beyond human hearing. One frequency was fixed; the other was variable and could move out of sync with the first. He attached the variable circuit to a vertical antenna on the right hand side of the instrument. This served as one plate of a capacitor while the human hand formed the other. The capacitance rose or fell depending on where the hand was in relation to the antenna. The two frequencies were then mixed into a beat frequency within audible range. To play a song the hand is moved to various distances from the antenna, creating a series of beat-frequency notes.
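As a rough illustration of that heterodyning step, the sketch below mixes a fixed oscillator near 300 kHz with a slightly detuned one and low-pass filters the product, leaving only the audible difference tone. The numbers are illustrative, not Theremin's own.

```python
import numpy as np

# Minimal sketch of the heterodyne principle behind the etherphone:
# two radio-frequency oscillators near 300 kHz are multiplied (mixed),
# and the low-passed "difference" component falls into the audible range.

fs = 2_000_000                    # 2 MHz sample rate, fast enough for 300 kHz
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal

f_fixed = 300_000.0               # fixed oscillator
f_variable = 300_440.0            # variable oscillator, pulled by hand capacitance

mixed = np.sin(2 * np.pi * f_fixed * t) * np.sin(2 * np.pi * f_variable * t)

# Product-to-sum: the mix contains (f1 + f2) and (f1 - f2). A crude
# moving-average low-pass removes the 600 kHz sum, leaving the 440 Hz beat.
window = int(fs / 50_000)         # ~40-sample moving average
audio = np.convolve(mixed, np.ones(window) / window, mode="same")

print(f"Expected beat: {abs(f_variable - f_fixed):.0f} Hz")
```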
  
To refine his etherphone further he designed a horizontal loop antenna that came out of the box at a right angle. Connected to carefully adjusted amplifier tubes and circuits, this antenna was used by the other hand to control volume. The newborn instrument had a range of four octaves and was played in a manner similar to the cello, as far as the motions of the two hands were concerned. After playing the instrument for his mentor, he performed a concert in November of 1920 to an audience of spellbound physics students. In 1921 he filed for a Russian patent on the device.

Source:
Theremin: Ether Music and Espionage by Albert Glinsky, University of Illinois Press, 2000

The Audion Piano & Wireless Fantasies

6/1/2019

No man works in a vacuum. Before the industry of radio got off the ground it had been customary for researchers to use each other's discoveries with complete abandon. As technical progress in the field of wireless communication moved from the domain of scientific exploration to commercial development, financial assets came to be at stake and rival inventors soon got involved in one of the great American pastimes: lawsuits. The self-styled "Father of Radio" Lee De Forest was involved in a number of infringement controversies. The most famous of these involved his invention of the audion (from audio and ionize), an electronic amplifying vacuum tube.

It was Edison who first produced the ancestor of what became the audion. While working on the electric light bulb he noticed that one side of the carbon filament behaved in a way that caused the blackening of the glass. Working on this problem he inserted a small electrode and was able to demonstrate that current would flow only when it was connected to the positive side of a battery. Edison had formed a one-way valve. This electrical phenomenon made quite the impression on another experimenter, Dr. J. Ambrose Fleming, who brought the device back to life twenty years later when he realized it could be used as a radio wave detector.

At the time Fleming was working for Marconi as one of his advisers. It occurred to him that "if the plate of the Edison effect bulb were connected with the antenna, and the filament to the ground, and a telephone placed in the circuit, the frequencies would be so reduced that the receiver might register audibly the effect of the waves." Fleming made these adjustments. He also substituted a metal cylinder for Edison's flat plate. The sensitivity of the device was improved by increasing electronic emissions. This great idea in wireless communication was called the Fleming valve.
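To get a feel for what a one-way valve buys a receiver, here is a small numerical sketch of envelope detection: a keyed radio-frequency carrier is rectified and averaged, leaving a slow signal an earpiece could follow. The frequencies and averaging window are arbitrary choices for illustration, not Fleming's values.

```python
import numpy as np

# Rough sketch of what a one-way valve does for reception: an on/off
# (Morse-keyed) radio-frequency carrier is rectified, and averaging the
# result leaves a slow signal a telephone earpiece can reproduce.

fs = 1_000_000
t = np.arange(0, 0.004, 1 / fs)               # 4 ms of signal

carrier = np.sin(2 * np.pi * 100_000 * t)     # 100 kHz RF carrier
keying = (t < 0.002).astype(float)            # carrier on for 2 ms, then off
antenna_signal = carrier * keying

rectified = np.maximum(antenna_signal, 0.0)   # the valve passes only one polarity

# Simple averaging stands in for the sluggish response of the earpiece.
window = 200
audio = np.convolve(rectified, np.ones(window) / window, mode="same")

print("Envelope while keyed: ", round(audio[500:1500].mean(), 3))
print("Envelope after key up:", round(audio[2500:3500].mean(), 3))
```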

Fleming had patented this two-electrode tube in England in 1904 before giving the rights to the Marconi Company who took out American patents in 1905. Meanwhile Lee De Forest had read a report from a meeting of the Royal Society where Fleming had lectured on the operation of his detector. De Forest immediately began experimentation with the apparatus on his own and found himself dissatisfied. Between the cathode and anode he added a third element made up of a platinum grid that received current coming in from the antenna. This addition proved to transform the field of radio, setting powerful forces of electricity, as well as litigation, into motion.

The audion increased amplification on the receiving side, but radio enthusiasts were doubtful that the triode tube could be used with success as a transmitter. De Forest, beset by financial troubles involving various scandals in the wireless world, was persuaded to sell his audion patent in 1913.

Edwin Howard Armstrong had been fascinated by radio since his boyhood and began his career as an amateur by age fifteen. Some of his experimentation was with the early audions that were not perfect vacuums (De Forest had mistakenly thought a little bit of gas left inside was beneficial to receiving). Armstrong took a close interest in how the audion worked and developed a keen scientific understanding of its principles and operation. By 1914, as a young man at Columbia University working alongside Professor Morecroft, he used an oscillograph to make comprehensive studies based on his fresh and original ideas. In doing so he discovered the regenerative feedback principle, yet another revolution for the wireless industry. Armstrong revealed that when feedback was increased beyond a certain point a vacuum tube would go into oscillation and could be used as a continuous-wave transmitter. Armstrong received a patent for the regenerative circuit.
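A toy model makes Armstrong's threshold easy to see: if each trip around the feedback loop scales the signal by less than one, any ringing dies away; at or above one, it sustains or grows, and the tube has become an oscillator. This is a generic feedback model for illustration, not Armstrong's circuit.

```python
# Toy model of the regeneration threshold: a fraction of the tube's output
# is fed back to its input. Below unity loop gain any disturbance dies away;
# at or above unity it sustains itself and the stage becomes an oscillator.

def amplitude_after(loop_gain, round_trips=50):
    """Relative amplitude of a disturbance after repeated trips around the loop."""
    amplitude = 1.0
    for _ in range(round_trips):
        amplitude *= loop_gain   # each pass through the feedback path rescales it
    return amplitude

for gain in (0.8, 0.95, 1.0, 1.05):
    level = amplitude_after(gain)
    verdict = "sustained or growing oscillation" if gain >= 1.0 else "dies away"
    print(f"loop gain {gain:4.2f}: amplitude after 50 trips = {level:10.3g} ({verdict})")
```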
De Forest in turn claimed he had already come up with the regenerative principle in his own lab, and so the lawsuits began, continuing for twenty years with victories that alternated as fast as electric current. Finally, in 1934, the Supreme Court decided the matter in De Forest's favor. Armstrong, however, would achieve lasting fame for his superheterodyne receiver, invented in 1918.

Around 1915 De Forest used heterodyning to create an instrument out of his triode valve, the Audion Piano. This was the first musical instrument created with vacuum tubes, and nearly all electronic instruments after it were based on its general schematic, up until the invention of the transistor.

The instrument consisted of a single keyboard manual and used one triode valve per octave. The set of keys allowed one monophonic note to be played per octave. Out of this limited palette it created variety by processing the audio signal through a series of resistors and capacitors to vary the timbre. The Audion Piano is also notable for its spatial effects, prefiguring the role electronics would play in the spatial movement of sound. The output could be sent to a number of speakers placed around the room to create an enveloping ambiance. De Forest later planned to build an improved version with separate tubes for each key giving it full polyphony, but it is not known if it was ever created. 
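The sketch below plays with those same two ideas in miniature: a harmonically rich oscillator tone standing in for the triode, and a first-order resistor-capacitor low-pass used to shade its timbre. The waveform, pitch, and cutoff values are assumptions for illustration only, not De Forest's design.

```python
import numpy as np

# A harmonically rich wave stands in for the raw tube oscillator; a
# first-order RC low-pass, like the resistor-capacitor networks described
# above, dulls or brightens it to vary the timbre.

fs = 44_100
t = np.arange(0, 0.5, 1 / fs)

# A 220 Hz square-ish wave: rich in harmonics, like a raw oscillator.
note = np.sign(np.sin(2 * np.pi * 220 * t))

def rc_lowpass(signal, cutoff_hz, fs):
    """First-order RC low-pass, the kind of network used to shade timbre."""
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = (1 / fs) / (rc + 1 / fs)
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

mellow = rc_lowpass(note, 400, fs)     # heavy filtering: rounder, flute-like tone
brighter = rc_lowpass(note, 4000, fs)  # light filtering: buzzier, reedier tone
print("peak levels:", note.max(), round(mellow.max(), 2), round(brighter.max(), 2))
```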

In his grandiose autobiography De Forest described his instrument as making "sounds resembling a violin, cello, woodwind, muted brass and other sounds resembling nothing ever heard from an orchestra or by the human ear up to that time – of the sort now often heard in nerve racking maniacal cacophonies of a lunatic swing band. Such tones led me to dub my new instrument the ‘Squawk-a-phone’….The Pitch of the notes is very easily regulated by changing the capacity or the inductance in the circuits, which can be easily effected by a sliding contact or simply by turning the knob of a condenser. In fact, the pitch of the notes can be changed by merely putting the finger on certain parts of the circuit. In this way very weird and beautiful effects can easily be obtained.”

In 1915 an Audion Piano concert was held for the National Electric Light Association. A reporter wrote the following: “Not only does De Forest detect with the Audion musical sounds silently sent by wireless from great distances, but he creates the music of a flute, a violin or the singing of a bird by pressing a button. The tone quality and the intensity are regulated by the resistors and by induction coils…You have doubtless heard the peculiar, plaintive notes of the Hawaiian ukulele, produced by the players sliding their fingers along the strings after they have been put in vibration. Now, this same effect, which can be weirdly pleasing when skilfully made, can be obtained with the musical Audion.”
​
Fast forward to 1960. The Russian immigrant and composer Vladimir Ussachevsky is doing deep work in the trenches of the cutting-edge facilities at the Columbia-Princeton Electronic Music Center, one of the first electronic music studios anywhere. Its flagship piece of equipment was the RCA Mark II Sound Synthesizer, alongside banks of reel-to-reel machines and customized equipment. Ussachevsky received a commission from a group of amateur radio enthusiasts, the De Forest Pioneers, to create a piece in tribute to their namesake. In the studio Vladimir composed something evocative of the early days of radio and titled it "Wireless Fantasy". He recorded Morse code signals tapped out by early radio guru Ed G. Raser on an old spark generator in the W2ZL Historical Wireless Museum in Trenton, New Jersey. Among the signals used were: QST; DF, the station ID of Manhattan Beach Radio, a well-known early broadcaster with a range from Nova Scotia to the Caribbean; WA NY, for the Waldorf-Astoria station that started transmitting in 1910; and DOC DF, De Forest's own code nickname. The piece ends suitably with AR, for end of message, and GN for good night. Woven into the various wireless sounds used in this piece are strains of Wagner's Parsifal, treated with the studio equipment to sound as if it were a short wave transmission. Lee De Forest had played a recording of Parsifal, then heard for the first time outside of Germany, in his first musical broadcast.
The piece is also available on the CD Vladimir Ussachevsky: Electronic and Acoustic Works 1957-1972, from New World Records.

Sources:
History of Radio to 1926 by Gleason L. Archer, The American Historical Society, 1938
The Father of Radio by Lee De Forest
https://en.wikipedia.org/wiki/Heterodyne
http://120years.net/the-audion-pianolee-de-forestusa1915/
https://en.wikipedia.org/wiki/Computer_Music_Center


