Who doesn’t like listening in to a conversation being held by two people nearby? Who doesn’t take secret delight in overhearing a snippet of information being mouthed from across the room? Anyone who has enjoyed monitoring local police, fire and utility frequencies, and even cell phone conversations before they were encrypted, knows the secret pleasure that comes from electronic eavesdropping. Scanner radios, SDRs and even the humble Baofeng can offer the discreet listener hours of aural voyeurism. Radio traffic picked up during these sessions of signal intelligence and information gathering can be recorded with ease via a simple setup, and what is received and recorded may be transformed and put to artistic purposes.
This is exactly the method used by Robin Rimbaud, a British electronic musician born in 1964 who works under the name Scanner because of his use of the device in his early live performances and recordings. Tapping the airwaves, he mixed the indeterminate radio and cell phone signals into the electronica he was making, and by doing so found himself a name.
What is being picked up on the scanner will always be something evocative of the time and place where the frequencies were scanned. It is site specific. It is time specific. The people on the other end don’t necessarily know they are being listened to. They didn’t consent to being eavesdropped on, except by pressing the push-to-talk button. They didn’t sign a waiver allowing their voices to be recorded, mixed with music, and preserved for posterity on vinyl and CD. Robin Rimbaud, as Scanner, wasn’t interested in getting their permission. What he was interested in was avant-garde literature, cinema and music. While earning a degree in Modern Arts at Kingston University in Surrey, England, he formed the music group The Rimbaud Brothers with another bloke named Tony Rimbaud, also a student (though they weren’t actually related). They started releasing cassette tapes in the early ’80s, and later turned into a trio when Chris Staley joined up, becoming Dau Al Set.
These cassette tapes were to prove important. The Peyrere compilation tape he put out in 1986 featured the work of Nurse with Wound, Derek Jarman, Current 93, Coil and Test Dept, cementing his alignment with the British experimental music scene. All these tapes prepared him for his work as curator of the Ash International record label, a subsidiary of Touch Music out of London. His debut as Scanner was released on Ash International in 1992.
This first self-titled Scanner album contains just under an hour of intercepted cell phone conversations of unsuspecting callers captured by his police scanner. Some of the material Robin Rimbaud picked up and put to record is enough to make you blush. I confess that when I first heard of police scanners as a thirteen-year-old skateboarding punk rocker, the idea of being able to listen in on a juicy cell phone call was an exciting prospect. As was the idea of hearing over the radio that the cops were coming to bust us for skating a certain site, and leaving before they got there.
Robin Rimbaud got into scanning by accident. “As for the scanner device itself, it was purely by chance that I discovered it, since a friend was part of a hunt saboteur group and they would use it to listen in to the local police,” Rimbaud said. “I immediately saw the potential and intrigue of being able to access these private spaces and incorporate them into these exploratory soundscapes I was producing at the time. I was especially drawn to the fact that the recordings were so intimate, so clear, yet abstract in nature. One had to imagine who these people were you [are] overhearing, where they were, what kinds of lives they led, although the nature of their conversations often clearly explained this! So I began using these live voices and recordings inside the music I was producing and adopted the name of the machine I was using to create the work.”
The window of opportunity for tapping into this telephonic underworld was short-lived, however. Back when cell phones were analog, the ability to sit on the freqs used by the telcos was practically a built-in feature. Now it is illegal to monitor cell calls (unless you happen to work for the NSA). The companies making scanners came under fire from the telcos, and the telcos put pressure on Congress. When the bill went up to Capitol Hill, a new law was passed prohibiting scanners sold after a specified date from receiving the frequencies allocated to the Cellular Radio Service. A later amendment made it illegal to modify radios to receive those frequencies. Unblocked Canadian and European versions exist, but it is illegal to bring them into the U.S. Does that mean it is illegal to build your own scanner radio that can pick up cell calls…? Well, it has all moved to digital now anyway and would be difficult to pick up (unless you happen to work for the NSA).
What about cordless landline phones? Early cordless phones operating at 43.720–44.480 MHz, 46.610–46.930 MHz, and 902.000–906.000 MHz are still around in some people’s homes and might be picked up by scanners, but it is still illegal to listen in. And with all these scanners around, most cordless phone makers moved their sets up to 2.4 GHz systems that make use of spread-spectrum modes, which add another layer of security.
The idea of listening in to what others consider private conversations brings us into the realm of ethics. Are radio listeners being nosy, butting their heads in where they don’t belong? I think it is a mistaken notion that radio communication privacy can be achieved by declaring certain radio transmissions illegal to monitor and banning radio receivers capable of receiving ‘prohibited’ transmissions. This belief is rooted in a common misconception about the public nature of radio waves themselves. Courts have held that there is no privacy implied while transmitting on the public airwaves. To really eavesdrop in the smartphone-centric world of today it might be better to intercept text messages; hypothetically speaking, of course. Texting isn’t my favorite thing, so why anyone (other than the NSA) would want to read a bunch of emojis is beyond me, lol.
Yet I do understand the desire to listen in, to gather intelligence, to monitor, to eavesdrop. It can be exciting. Some of what you can grab off the air is just plain mind-boggling. Robin Rimbaud understands this as well. He continued to release music on the Ash International label, working closely with Mike Harding of Touch on the first dozen releases. These included Scanner², Mass Observation, Blind, and Runaway Train. [Some of these can be listened to on the artist’s Bandcamp site: https://scanner.bandcamp.com/]
All have their merits, but this last recording is a real gem, and it was already famous in circulation among railway operators before it was released to the experimental music crowd. The Runaway Train album consists of the unedited, undoctored real-time recording of the radio contact between Alfie, controller of the railway line in New Brunswick, Canada, and the engineer Wesley on March 9, 1948, as the engineer lost control of his train, through to its ultimate derailment. This entire drama was taped as it happened and is insane with tension. While his colleagues work calmly and professionally to prevent a derailment, Wesley bravely remains on board. 55 mph becomes 70 mph. The dialogue between Wesley and Alfie grows more charged with each passing minute as the train hurtles on, threatening its crew and the unsuspecting communities it passes through. At 95 mph, with a doctor and ambulance standing by, Wesley faces disaster. Suddenly the line goes dead. Can Wesley survive?
This tape had been circulating among CN and VIA Rail employees and a copy eventually reached the father of a man named Brian Damage. Brian got the tape from his dad and shared it with his friend Robin Rimbaud who was looking for unusual field recordings to put out on his Ash International label. Ash released it in 1994 (Ash 1.9) as a one-sided record in an edition of 500 copies, with an additional 500 pressed the following year. [You can listen to this one yourself on bandcamp at: https://phycus1.bandcamp.com/album/runaway-train]
Listening to this recording now, over seventy years after it was first captured off the radio, is still a dramatic, edge-of-your-seat experience. On a psychological level, it showcases the way humans are predisposed to focus in on the tragedy of others, to tell stories of death, demise, and destruction. Just the other day I turned on my radio to see what traffic I could catch from local police and fire departments after a plane crashed into a home in Madeira. The same thing is at work when I slow down to look at an accident while driving. Our radios and scanners simply extend the reach of our observation. They allow us to listen in on the drama of human life as it unravels around us in real time.
The weird thing is that for the people involved the tragedy continues long after our scanners are turned off. In the case of train engineer Wesley, even though he walked away from the accident with his life intact, his 43-year career was over, and the pension that had been promised him was in limbo. The whole aftermath of his story was documented in the press and collected by Daniel Dawdy on the webpage: http://www.cwrr.com/Lounge/Feature/runaway
Now that I’m not reasoning like a teenager anymore, my motivations for monitoring radio frequencies are different. It isn’t to evade the police. For one, cops and skaters get along better these days, and there are designated spots where it is legit to have a street session. For another, it’s fascinating to learn how radio traffic is handled during small and large emergencies. As a ham, learning how to communicate clearly on the air is a skill that could come in handy if my skills are ever needed for the greater good of the community. Listening in is one way to develop that skill.
Magnetic Lemniscate: A Brief History of the Tape Loop
Sometimes, if the day has been hectic, when I get home I just want to kick back, relax and put on a record. Or a cassette. I still have hundreds of hours of music stored on tape, one of the finest mediums of storage ever invented. This privilege of being able to listen to recorded audio is unique in human history, and my ability to soak in the musical glow from my hi-fi system with my feet propped up and my head in my hands was built on the sweat of many researchers. The phonograph, loudspeaker and microphones all proclaimed that the age of audio had arrived. The promises made by this tech only cracked the door ajar. There was still a bolt in place on the other side barring further entry. The invention of magnetic tape recording proved to be the golden skeleton key responsible for unlocking the door to the studio of the audio engineer, and from there many other rooms in the mansion of new media.
Inside the tape studio it is possible to cut. Splice. Rewind. Fast forward. Edit. Create a new sequence for creative playback. The practice of recording and editing audio using magnetic tape was an obvious improvement over the previous electro-mechanical methods. The leap in audio fidelity alone was a dramatic feat. Further, it allowed for new practices of editing. It allowed for repetition, a key aspect of music, and so the loop was born. Splice. Snip. Audio on magnetic tape had established itself as simply superior. The analog tape recorder made it possible to erase. Audio mistakes could be fixed at less cost by recording over a previous recording, something not possible on the shellac and vinyl based medium of the phonograph. The edit turned into an art form as tape had the advantage of being cut. Spliced, it could be joined back together in an endless profusion of edits. Music could be rearranged, deranged, or removed.
From 1950 onwards magnetic tape quickly became the standard medium for audio master recording in the music and broadcast radio industries. This led to the development of hi-fi stereo recordings for the domestic market. If the day has been hectic, just kick back with some Les Baxter or the exotica of Martin Denny and let it transport you away from the daily grind. Now in hi-fidelity, and turning at 33 1/3 rpm, longer songs and longer sounds mean more time to chill in the lounge. Sonically edited, the album now offered audio engineers the same plasticity of arrangement known to film directors. The many new combinations available became mind-boggling and cinematic.
When I think of tape, I think primarily of its role in audio and video storage. I think of the way it revolutionized sound recording, reproduction and broadcasting. It allowed radio, which had always been broadcast live, to be recorded for later or repeated airing. I think of how I sat with a radio and its built-in cassette player to tape those late night radio shows, to be listened to again and again. But there was also data storage on tape. Remember tape drives? They were a key technology in early computer development, allowing unprecedented amounts of data to be recorded, stored for long periods of time, and rapidly accessed.
When I think of tape I think of iron oxide. It’s on tape and it’s also in your blood. It’s the stuff responsible for giving it that bright red color. It’s the stuff that holds the memory of a recording on the tape making it magnetic. The memory is in the blood. Iron oxide stores the genetic memory of music. Editing a tape splices the DNA of sound. Perhaps it is this magnetic resonance of the iron oxide, a shared connection with a vital and elemental force that has given tape such a place of prominence in electronic music. Perhaps it was the way the tapes could be manipulated, slowed down, sped up, chopped up and put into new patterns, which made tape such a dream. This medium of preservation and creation is in the very blood of electronic music.
With the invention of the tape loop the dream of creating infinite music was realized. The use of the pause button had been put on hold. Tape loops are spools of magnetic tape used to create repetitive, rhythmic musical patterns or dense layers of sound when played on a tape recorder. Sound is recorded on a section of magnetic tape, and this tape is cut and spliced end-to-end, creating a circle which can be played continuously, over and over again. This is usually done on a reel-to-reel machine, though industrious lo-fi recording artists have been known to rig their own cassette tapes into loops. The loop originated with the musique concrète work of Pierre Schaeffer in the 1940s. He used the simultaneous playing of tape loops to create phrase patterns and rhythms. Musical experimentalists continued to explore the possibilities of this method on through the 1950s and ’60s. Devotees of the tape loop included Steve Reich, Terry Riley, Karlheinz Stockhausen and Brian Eno.
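For the technically inclined, the splice-and-circle mechanics described above can be sketched in a few lines of Python. This is purely my own illustration, not anyone’s studio rig: a spliced loop behaves like a circular buffer, so sample n of the playback is simply sample n mod L of the loop.

```python
# Toy model of a spliced tape loop: playback wraps around the splice,
# so the output just reads the buffer with modular indexing.

def play_loop(loop, n_samples):
    """Read n_samples from a spliced loop, wrapping at the splice point."""
    L = len(loop)
    return [loop[n % L] for n in range(n_samples)]

phrase = [0.0, 0.5, 1.0, 0.5]    # a four-sample "phrase" on the loop
out = play_loop(phrase, 10)      # plays the phrase two and a half times
print(out)                       # [0.0, 0.5, 1.0, 0.5, 0.0, 0.5, 1.0, 0.5, 0.0, 0.5]
```

Playing two such loops of different lengths simultaneously, as Schaeffer and later Reich did, produces phrase patterns that slowly drift in and out of phase.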
The medium is perfect for creating phase patterns, rhythms, textures, and timbres. When the speed of a loop is accelerated to a sufficient degree, a sequence of events originally perceived as a rhythm is now heard as a pitch. The variation of the rhythm in the original recording produces different timbres in the sped-up sound. Tape can also be slowed down, causing the music to drop in pitch and sounds to be stretched. Tape was also used to create echo systems. The first delay effects were made using tape loops improvised on reel-to-reels by shortening or lengthening the loop of tape and adjusting the read and write heads, to create an echo whose time parameters could be adjusted. This delayed signal may either be played back multiple times, or played back into the recording again, to create the sound of a repeating, decaying echo.
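The feedback path described above, where the delayed signal is fed back into the recording, can be sketched as a simple difference equation: each output sample is the input plus an attenuated copy of the output from D samples ago. This is a generic model of the tape-echo idea, my own sketch rather than any particular machine:

```python
# Feedback tape echo sketch: y[n] = x[n] + g * y[n - D]
# D = distance between write and read heads (in samples), g < 1 = feedback gain.

def tape_echo(x, delay, feedback):
    y = []
    for n, sample in enumerate(x):
        echo = feedback * y[n - delay] if n >= delay else 0.0
        y.append(sample + echo)
    return y

# A single unit impulse returns every `delay` samples, quieter each time.
impulse = [1.0] + [0.0] * 7
print(tape_echo(impulse, 3, 0.5))  # [1.0, 0.0, 0.0, 0.5, 0.0, 0.0, 0.25, 0.0]
```

With feedback below 1.0 the echoes decay geometrically; on real tape the loop length and head spacing set the delay time, and each generation also loses a little high end.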
Being the pioneer he was, Stockhausen made extensive use of loops in Gesang der Jünglinge (1955–56) and Kontakte (1958–60), and he used the technique for live performance in Solo (1965–66). Steve Reich was the composer who used the technique the most, specifically in his "phasing" pieces It's Gonna Rain (1965) and Come Out (1966).
In the realm of popular music it was used to great effect in the ’60s and ’70s. Think of the psychedelic music of the Beatles on the White Album, and of its use in the progressive rock and ambient genres. A standard loop on a standard reel-to-reel is at most a few seconds long. This is not enough for some composers. To create a longer loop, a standard practice was to use two reel-to-reels or, for even longer stretches of tape, to run it around mic stands, or even door knobs. Perhaps the best known album made with this technique was Brian Eno’s Ambient 1: Music for Airports. This recording ushered in the vast and sprawling genre of ambient. In creating his 1978 landmark, Eno reported that for one song one tape loop "was seventy-nine feet long and the other eighty-three feet".
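It is worth doing the arithmetic on Eno’s figures. Assuming a tape speed of 7.5 inches per second (my assumption; the actual speed isn’t stated on the record), a loop’s duration is just its length in inches divided by the tape speed:

```python
# Back-of-the-envelope loop duration: length (feet) * 12 inches/foot / speed (ips).
# The 7.5 ips speed is an assumption for illustration, not documented fact.

def loop_duration_s(feet, ips=7.5):
    return feet * 12 / ips

print(round(loop_duration_s(79)))  # ~126 seconds per revolution
print(round(loop_duration_s(83)))  # ~133 seconds
```

At around two minutes per revolution, the two loops drift against each other for a very long time before their pattern repeats, which is exactly the slow, never-quite-repeating quality the album is known for.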
Enter William Basinski
Texas-born Basinski is a classically trained clarinetist who studied jazz saxophone and composition at North Texas State University in the late 1970s. At the age of twenty, in 1978, he became inspired by the techniques of Steve Reich and Brian Eno and started developing his own musical vocabulary using old reel-to-reel tape decks. Basinski experimented with short looped melodies. When played against themselves the loops created a pleasant feedback. Working with this discovery he created his singular meditative, melancholy style within the drone and ambient genres.
Basinski’s first release was Shortwave Music. First created in 1983, it wasn’t released until 1998 when Carsten Nicolai's Raster-Noton label put it out in a small vinyl edition. It was followed by his shortwave magnum opus The River. Basinski writes, "As a young composer in the early 1980’s I was experimenting with tape loops: recording and mixing them with sounds coming from the airwaves. The idea was to capture music out of the ether. In NYC, there was a very powerful radio station, I can’t remember the call letters, but it was the station that played American popular standards….that is, the ‘1001 Strings’ smoothed out, de-syncopated versions of the American popular standards: what was commonly referred to then as Muzak, or ‘elevator music’. In those days, there was no Prozac, only Muzak to smooth out the seams and ease the tension of hectic neurotic life in the mid-late 20th century. At any rate, this station was so powerful, it could be picked up by simply running a wire across the floor, so frequently I was picking up background transmissions in my recordings. Since it was inevitable and I had no choice in the matter, I began experimenting with recording off the radio small loops of string intros, outros and interludes randomly in my primitive studio in Brooklyn. I would then slow them down a couple of speeds and as if peering into a microscope, to see what I could discover beneath the glossy surface. Frequently, these loops held great depth and melancholy. This appealed to me greatly and I created a vast archive of these loops to later experiment with. I am still using this archive to this day.”
Having this library of ‘found’ material became very important to his work, as it became the basis for many future albums and releases. Something else he found at a thrift store was also important, the machine that would provide his radio static. “I bought a wonderful old Hallicrafters shortwave radio at the Goodwill around the corner and began listening to that. The sounds coming from this magical device were awesome. The idea that one could hear transmissions from ‘behind the Iron Curtain’ or Japan or London was thrilling and mysterious. The waves of shifting static and interstellar particle showers were mind-boggling to a young man who grew up in the shadow of the space race.
“I was having a problem with a 60 Hz ground loop hum in my recordings. I had no idea what was causing it at the time…probably our fluorescent lights…just that it bothered me and I couldn’t figure out how to get rid of it. So I decided to try to mask it with the shortwave radio static. I would set the Hallicrafters on a pleasing in-between-stations setting teeming with showers of sparkling static and record live while mixing my loops. The results were extraordinary. The Hallicrafters would sometimes shift focus as if responding to the music coming from the loops. Occasionally a distant station from the Middle East perhaps, would slide into range just for a moment like a lingering column of cigarette smoke swirling slowly in a spotlight. I was very encouraged and excited. I didn’t know if I was really a composer, or if this was music, but to me it was magic! I loved it and was in my laboratory every night after work, like Dr. Frankenstein, just waiting to see what fascinating and strange sounds would bubble up next. The results of this period of experimentation were the Shortwave Music pieces and ultimately, the 90 minute masterwork of the series, The River. It would be over 25 years before these pieces would be released to the public.”
Even though it wasn’t until the late ’90s that his music saw release on a label, Basinski remained very active in the NYC music scene. He was a member of many bands, including the Gretchen Langheld Ensemble and House Afire. In 1989, he opened his own performance space, "Arcadia," at 118 N. 11th Street. In the 1990s he helped put together many intimate underground shows at his space for artists like Diamanda Galás, Rasputina, The Murmurs, and Antony, as well as his own experimental electronic/improvisation band, Life on Mars. In 2000, he made a film titled Fountain with artists James Elaine and Roger Justice.
In August and September 2001 Basinski started work on what would become his most recognizable piece, the epic four-volume album The Disintegration Loops. The album is made up of old tape loops whose quality had degraded. In an attempt to salvage these loops by recording them onto a digital format, the magnetic iron oxide coating on the tapes slowly crumbled. With each pass of the tape over the head of the reel-to-reel deck, more and more of the iron oxide fell off. The loops were allowed to play for extended periods as they deteriorated further, with increasing gaps and cracks and spaces in the music. These sounds were treated with a spatializing reverb effect to further enhance their haunting aura. Basinski was able to capture the sound of their disintegration, and the results were beautiful and stunning. The disintegration of these tapes was made all the more poignant as he finished his work on them on the morning of 9/11. Basinski sat on the roof of his apartment building in Brooklyn with friends, listening to the finished project as the World Trade Center towers collapsed. The artwork that accompanies the album features stills from footage he shot of the NYC skyline in the aftermath of the attack. In September 2012, the record label Temporary Residence reissued the entire Disintegration Loops series as a 9xLP box set, marking the project's 10-year anniversary as well as its impending induction into the National September 11 Memorial & Museum.
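The mechanism at work can be sketched as a loose toy model, entirely my own and not Basinski’s actual process: on every pass over the playback head a little more oxide flakes off, so each sample on the loop has a small, cumulative chance of dropping out, and each pass of the recording is quieter and more broken than the last.

```python
# Toy model of a disintegrating tape loop: dropouts accumulate pass by pass.
# Purely illustrative; the loss rate and seed are arbitrary choices.
import random

def disintegrate(loop, passes, loss_per_pass=0.02, seed=11):
    random.seed(seed)
    tape = list(loop)
    out = []
    for _ in range(passes):
        # Each pass, every surviving sample may flake off and become silence.
        tape = [0.0 if random.random() < loss_per_pass else s for s in tape]
        out.extend(tape)  # the recording captures the loop pass after pass
    return out

recording = disintegrate([1.0] * 100, passes=5)
```

Because a dropout never heals, the amount of surviving signal can only shrink from one pass to the next, which is why the pieces end in near silence.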
The creation of the Disintegration Loops was something of an accident, timestamped by their own destruction and the terrible tragedy of 9/11. The four albums are perfect as a reminder of the beauty to be found in imperfection, as a reminder of our own transience, of our own ultimate disintegration, of how the iron oxide in our blood will once again return to dust.
Live Wires: A History of Electronic Music by Daniel Warner, Reaktion Books Ltd, London, England, 2017.
William Basinski’s website: http://www.mmlxii.com
At the Osaka Expo ’70, Takehisa Kosugi was creating environmental sound works as a commissioned artist. This was the same year Karlheinz Stockhausen and his ensemble were creating an oasis of musical calm and exultation within the spherical auditorium at the world expo, playing pieces like Spiral and others utilizing shortwave radio. The space/time coordinates for technology and experimental music were in perfect alignment at the expo. Born in 1938, Kosugi already had considerable experience with musical experimentation by the time 1970 arrived. His exploration of diverse sound worlds and his humorous antics continued until his death on October 12, 2018 at the age of 80.
In 1958 Kosugi was a student of musicology at the Tokyo National University of Fine Arts and Music. It was there that his apprenticeship in, and eventual mastery of, creating experimental sound worlds began. Joined by fellow student Shukou Mizuno, he founded a collective improvisation ensemble. When they participated in a dance festival at the Tokyo Dance Institute they adopted the name Group Ongaku. It translates into English as Group Music and is a simple description of their practice. Most of the group's few performances were at dance concerts, symposiums and festivals, but they also performed at recitals of music by Toshi Ichiyanagi and Yoko Ono, the two leading lights of Japanese experimental music on the world scene at the time. They also performed at member Yasunao Tone's one-man exhibition at the Minami Gallery. This connection to the worlds of dance and the visual arts would follow Kosugi the rest of his life. He was later a key collaborator in Merce Cunningham’s famous dance company. Group Ongaku played radical music, and it soon established them as an essential component of Japan’s post-war music scene.
As early practitioners of collective improvisation they attempted to create acoustic sound spaces that corresponded to the actual time and space they played in. Violin was Takehisa Kosugi’s primary instrument. The other members of Group Ongaku played cello, drums, guitar, and saxophone, as well as using their voices and whatever else happened to be lying around nearby. Usually there was a radio nearby, and whatever they could tune in off the bands became a part of their improvised sets. This is evident on the “Automatism” recording from 1960 (released by the Hear Sound Art Library in 1996). The members were also fans of the tape recorder. The recordings they put to tape were further manipulated and added into the mix during their live shows.
If you listen to it and it sounds strange and chaotic, I’d agree with you. The thing about collective improvisation is that it takes practice and is a skill set that must be learned. The listener who judges the results based on previous exposure only to pop and rock music may think it is all just noise. An appreciation of jazz will leave the listener better prepared for what might be encountered in the forays of Group Ongaku, but they may still be left bewildered, apparently abandoned in a wilderness where loud predators lurk behind every menacing sound. The different voices of the instruments may appear disconnected; but there is a unity, like a golden thread, amidst all the howling. There is a method to the apparent madness of inchoate gurgling that churns alongside sax squelches, vacuum cleaners, and violin scrapings. This is the soundtrack for a generation waking up and coming into adulthood after the devastation of the Nagasaki nightmare, their memories forever burned in the aftermath of Hiroshima. In their early twenties and full of the vigor of youth, is it not to be expected that their experience, when translated into music, the language of pure emotion, shows signs of chaos and rage?
Yet they were firmly in the zeitgeist of the time, even if they were removed from the centers of musical innovation in the West. The sounds they made shared a common goal and direction with contemporaries such as John Cage, with whom Kosugi would later form a close friendship. Audiences in Japan did not hear the work of Group Ongaku, Cage and Stockhausen with the same revulsion and outrage that often greeted opening nights in Europe and America. Steeped in the traditions of Zen Buddhism and the Shinto religion, they felt more at ease with the random, non-linear, and abstract acoustics these artists created. Which isn’t to say the group was adored as much as the Japanese bands who brought rock and roll into their hearts and made it their own, only that there existed a level of understanding from their countrymen.
Listening to these recordings now, over fifty years after they were made, they sound remarkable and are right at home in the canon of twentieth century improvisational and experimental music. The work Kosugi did with Group Ongaku formed a strong foundation for his later journeys with his next band, the Taj Mahal Travellers, and his ongoing work as a solo artist.
THE TAJ MAHAL TRAVELLERS
The efforts of Group Ongaku gradually wound down. Throughout the rest of the ’60s it served the needs of individual composers within the group as a ready ensemble able to play their work. Ready to embark on a new project, Kosugi pulled together the members of the Taj Mahal Travellers in 1969. His recruits came from the ranks of a younger generation of Japanese who had grown up with rock and roll and jazz. Their minds had been turned on the moment they had tuned in their radios, and they were ready to drop out. The Travellers played standard musical instruments, but the way they played them was anything but standard. For the most part the instruments were acoustic, such as the santur (an Iranian hammered dulcimer), harmonica, tuba, timpani, and trumpet. Others were electrically amplified, such as Kosugi’s violin. Ryo Koike also amplified his double bass, but he is remembered more for the way he played it lying flat on its back on the ground. Straddling the top of his bass, the way he bowed his instrument was very sensual. A Mini Korg synthesizer was also part of their setup. Besides playing mandolin, Michihiro Kimura was the resident tree branch shaker. As Julian Cope noted about this unusual instrument, “Kimura appears to have spent much of the early ‘70s shaking a tree branch in a wide variety of obscure locations around the world.” Other instruments in this vein were “voices, stones, and bamboo winds.” Trying to hear those on their extant recordings is part of the magic and the mystery. The Taj Mahal Travellers had made themselves a promise to play “wherever a power supply was available,” and their sound had an emphasis on heavy electronic processing. The use of delay effects and echoes congealed the array of their instruments into a swirling cosmic gel.
The Taj Mahal Travellers were undoubtedly infused with the psychedelic spirit of the day. Yet group leader Kosugi put forth a valiant effort to make sure they were not mistaken for a mere commune of music-making hippies. In their first year they played a series of shows at Shibuya’s Station 70 club, where the stage was taken over by revelers who wanted to contribute to the music making. They jumped up onstage uninvited, thinking it was some kind of “happening” or “be-in.” Yet the sound of the Travellers wasn’t intended to be a free-for-all among whoever wanted to participate. Rather it was an improvised exploration of sonic geography between dedicated musicians united in a singular aim. After these initial performances Kosugi took pains to book his band only at places such as art galleries or the kaikan culture-halls.
The group took a break over the summer when Kosugi went to Osaka to perform as a solo artist at Expo ’70. He became friends with Stockhausen and the members of his ensemble, and was inspired by their day-long performances in the specially designed spherical auditorium. With these experiences fresh in his mind at the end of Expo ’70, he was ready to get to work with the Travellers again. Some of the band still insisted on trying to play at rock venues, which Kosugi resisted. Their collective destiny changed when they were asked to play a dawn-to-dusk concert at Oiso Beach. This experience gave the group the modus operandi it needed to succeed. Throughout the rest of their career they performed outdoors for the most part, playing their strange music on beaches and hilltops. Their music consisted of long improvised drones and spontaneous passages, reflective of the deep meditative presence they occupied within the unique space/time coordinates of each specific performance. The group continued to play on beaches and mountains, and they were also invited to play at Shinto temples. Between 1971 and 1972 they went on a tour where they played in majestic locations in the Netherlands, Germany and England, before heading on to Iran and India, where they played at the Taj Mahal itself, before coming back home to Japan. From 1972 to 1974 they spent a good deal of time both on the road and in the studio. The names of their recorded songs reflect their process, such as “Taj Mahal Travelers between 6.20 and 6.46pm” or “Taj Mahal Travelers between 7.50 and 8.05pm”. They are snapshots of what was played by a certain group at a certain place at a certain time.
After making two albums with his band, Kosugi returned to the studio in September of 1974 to make a solo album. Catch Wave is a piece he wrote for processed violin, voice, radios and oscillators. It is available for listening in the extensive cultural archives of Ubu.com at http://www.ubu.com/sound/kosugi.html. The first side of the LP arrives like a beam from a strong station. With the antenna pointed it gets a bearing on the transmitting station and comes in full quieting. As only good radio and good music can do, Catch Wave transports you to another world, one of endless shimmering undulations. The rising waves of the radio and the oscillator, and a prevalent wah-wah-wah and whirr-whirr-whirr of mysterious origin, are mixed in with the floating see-saw of the electric violin. The entirety of the piece creates an immersive, mysterious sound world. The waves build and then fall back again into nothing. The piece takes up both sides of a slab of wax. It is the kind of signal I always want to tune right into. It is the kind of wave I always want to catch.
Japrock Sampler: How the Post-War Japanese Blew Their Minds on Rock ’n’ Roll by Julian Cope, Bloomsbury Publishing, 2007
Holger Czukay was another musician who was fascinated with the sounds of shortwave listening. He brought his love of radio and communications technology on board with him when he helped to found the influential krautrock band Can in 1968. Shortwave listening continued to inform Czukay’s musical practice in his solo and other collaborative works later in his career. It all got started when he worked at a radio shop as a teenager.
Holger had been born in the Free City of Danzig in 1938, the year before the outbreak of World War II. In the aftermath of the war his family was expelled from the city when the Allies dissolved its status as a free city-state and made it part of Poland. Growing up in those bleak times his formal primary education was limited, but he made up for it when he found work at a radio repair shop. He had already developed an interest in music, and one of his ideas was to become a conductor, but fate had other plans for him. Working with the radios day in and day out he developed a fondness for broadcast radio. In particular he found unique aural qualities in the static and grainy washes of the radio waves coming in across the shortwave bands. At the shop he also became familiar with basic electrical repair work and rudimentary engineering. All of this would serve him well when building the studio for Can. In his work with the band he not only played bass and other instruments but acted as the chief audio engineer.
He spoke about this time, and his fascination with the mystery of electricity, in an interview. “When I was fourteen or fifteen years old, I didn't know if I wanted to become a technician or a musician. And when you are so young you think the one has to exclude the other. So in the very beginning I thought I am sort of a musical wonder-child, and want to become a conductor and that was very very serious, but there was no chance to get educated as I was a refugee after the war. And then, suddenly, electricity. Electricity was such a fascinating thing - it was something. And then I became the boy in a shop who carries the radios to repair them and carries them back again. That was so-called three-dimensional radio, before stereo. There was one front speaker in the radio and at the side, there were two treble speakers which gave an image of spatial depth. I must say these radios sounded fantastic.”
In 1963, at the age of twenty-five, Czukay decided to pursue the musical side of his vocation and begin studying under Karlheinz Stockhausen at the Cologne Courses for New Music. This is where he met Irmin Schmidt, another founding member of Can, who was also a student of Stockhausen’s. Can itself became one of the guiding forces of Krautrock, or Kosmische music as it was also called, a broad style of experimental rock music developed in Germany in the late 60s. Krautrock was for the most part divorced from the traditional blues and rock and roll influences of the British and American rock music scenes of the time. It featured more electronic elements and contributed to the further development of electronic music and ambient music, as well as the birth of post-punk, alternative rock and New Age music. Stockhausen himself could be thought of as one of its chief instigators, a kind of godfather of the genre. This was due not only to his influence as a teacher of German musicians, but because of his pioneering work with the raw elements of electronic music itself at the WDR studios.
Eccentric British rock musician and author Julian Cope discusses the importance of Stockhausen’s composition Hymnen in his book Krautrock Sampler. He considered that piece in particular pivotal to the whole Krautrock movement. Its release had “repercussions all over W. Germany, and not least in the heads of young artists. It was a huge 113 minute piece, subtitled ‘anthems for electronic and concrete sounds’. Hymnen was divided up into four LP sides, titled Region I, Region II, Region III and Region IV.” In a previous column I discussed this piece of music as an early attempt at creating ‘world music’. With its sounds of shortwave receivers and electronics it plays anthems from various countries in an attempt to unify them. What he did with the German anthem, ‘Deutschland, Deutschland Uber Alles’, had a liberating effect on young Germans, who had grown up under the shadow of the worst kind of nationalism. Cope writes of the German public’s reaction: “The left-wing didn’t see the funny side at all and accused him of appealing to the basest German feelings, whilst the right-wing hated him for vilifying their pride and joy, and letting the Europeans laugh at them. Stockhausen had just returned from six months at the University of California, where he had lectured on experimental music. Among those at his seminars were the Grateful Dead’s Jerry Garcia and Phil Lesh, Grace Slick of Jefferson Airplane and many other psychedelic musicians. Far from snubbing the new music Stockhausen was seen at a Jefferson Airplane show at the Fillmore West and was quoted as saying that the music ‘…really blows my mind.’ So whilst the young German artists loved Stockhausen for embracing their own rock’n’roll culture, they doubly loved him for what they recognized as the beginning of a freeing of all German symbols.
By reducing ‘Deutschland, Deutschland Uber Alles’ to its minimum possible length he had codified it…Stockhausen had unconsciously diffused a symbol of oppression, and so enabled the people to have it back.”
Czukay’s time studying with Stockhausen was as important to the development of Krautrock as was Hymnen itself. In fact while Stockhausen was working on Hymnen at the WDR studio during the day, Holger Czukay and the other members of a pre-Can group, the Technical Space Composers Crew, would go in and use the equipment at night to record their own album Canaxis. In the piece ‘Boat Woman’s Song’ some of Czukay’s early pioneering use of sampling can be heard. The proto-ambient pieces of music on this record were painstakingly assembled from tape loops and segments of a traditional Vietnamese folk song. In an interview Czukay spoke of the experience. “When Stockhausen left for home, we had a second key and went in and switched everything on. We went in and Canaxis was produced in one night. In one night the main song ‘Boat Woman Song’ was done. I prepared myself at night at home, so I knew exactly what I wanted to do, so in four hours the whole thing was done.” David Johnson helped Czukay and Rolf Dammers engineer the album. “He knew the studio a bit better than me. He was engineering a bit, switching on stuff, copying from one machine to another…and that was okay. In four hours the job was done.” The music on Canaxis is eerie and beautiful and haunting. It is both a part of this world, but also not of it. It seems as if it has come to us from beyond, and some fifty years later it still sounds fresh, as all timeless music does.
Stockhausen influenced Czukay in other ways. It hadn’t originally been Czukay’s intention to become a rock musician. He was more interested in classical music, which he thought was the best, with a definite leaning towards its avant-garde. “Therefore I went to Stockhausen as he was the most interesting person. Very radical in his thoughts. With the invention of electronic music he could replace all other musicians suddenly: that was not only an experiment; that was a revolution! I thought that is the right man, yeah? So I studied with him for about three years. Until I finally said, if a bird is ready to fly, he leaves his nest and that is what I have done.”
After leaving the nest Holger became a music teacher in his own right as a way to make a living. Later he was able to work full time as a musician because, as he often joked, he was married to a rich woman. Teachers always learn from their students though, and his were teaching him about the rock and pop music of the time, playing him records of Jimi Hendrix and the Rolling Stones. The Velvet Underground and Pink Floyd stood out to him, as did the song ‘I Am the Walrus’ by the Beatles. Czukay fell in love with that masterpiece of psychedelic pop. In particular he loved the way bursts of AM static and the sound of tuning between stations had been used for musical effect at the end of the cut.
All of these influences and elements would fuse together in his work with Can, a project begun while he was still a teacher. Irmin Schmidt’s mark on the band was equally massive, and he was just as steeped, if not more so, in the 20th century avant-garde, but exploring his contribution is beyond the scope of this article. For most of his time in the band Czukay played bass, but toward the end he gave up that instrument altogether in favor of a shortwave radio. He spoke about Stockhausen’s influence in making this switch.
“A shortwave radio is just basically an unpredictable synthesizer. You don’t know what it’s going to bring from one moment to the next. It surprises you all the time and you have to react spontaneously. The idea came from Stockhausen again. He made a piece called ‘Short Wave’ [‘Kurzwellen’]. And I could hear that the musicians were searching for music, for stations or whatever, and he was sitting in the middle of it all and the sounds came into his hands and he made music out of it. He was mixing it live – and composing it live. He had a kind of plan, but didn’t know what the plan would bring him. With Can, I would mix stuff in with what the rest of the band were playing. Also, we were searching for a singer and we didn’t find one – we tested many, but couldn’t find anyone – so I thought: ‘why not look to the radio for someone instead? The man inside the radio does not hear us, but we hear him.’” This he used without additional effects. “The radio has a VFO – an oscillator – where you can receive single side-bands, which means just half of the waves and you can decode it – it’s like a ring modulator. And that’s more than enough. The other members of Can were very open to these unpredictable uses of instruments, especially in the early days.”
His work with radios in a musical setting was a way for him to bring in energies from outside the band into their work. In his own words, “I looked for the devices to bring a different world into the group again and they had to react on that. That was the idea, working with a radio or working with tapes or working with a telephone. I even had this idea that with a transmitter, we could transmit and receive things back again. Or to call up people like today's radio shows where people call up or you call people. This sort of interaction I wanted to establish. But the group was not interested in this. So I finished with Can and went my own way. And here, I really followed this. I was working on that for a few years (with Can) but then I found it that it wasn't fun anymore. I continued alone then worked with other people.”
Can had a great run as a band from 1968 to 1979. Afterwards Czukay continued to flourish with his solo recordings, including albums like Radio Wave Surfer. The methods he developed for using radio as an instrument he termed radio painting. He continued to make solo albums and collaborate with other musicians on various projects throughout the 80’s, 90’s and 2000’s. He died of unknown causes on September 5, 2017.
All of this tells you the who, what, where, when and why. But to get the full experience I invite you to blow your mind by listening to Stockhausen, Can, Holger Czukay, and other crispy Krautrock bands! There is no better place to start than with Hymnen and the Can discography.
Krautrock Sampler: One Head’s Guide to Great Kosmische Musik 1968-Onwards by Julian Cope, Head Heritage, 1996.
Starting in the early 1960s Karlheinz Stockhausen composed several instrumental works which he called "process compositions". These did away with traditional stave notation and instead used symbols including plus, minus, and equal signs that indicated the successive transformations of sounds that were otherwise unspecified or unforeseeable by the composer. In this way he brought elements of improvisation into the fold of Western classical music, where strict adherence to a fixed score left little room for interpretation by musicians. The scores in his process pieces don’t dictate specific notes or ways of playing but rather specify the way a sound is to be changed or imitated. Taking a cue from his studies of information theory, Stockhausen created a way of writing music that is similar to computer programming. The program “determines the way information is processed while leaving the choice of information to be processed to the individual user.” (Maconie 1990, 156-157)
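The analogy to programming can be made concrete with a small sketch. This is a hypothetical illustration, not Stockhausen’s actual notation: here a “sound” is reduced to a single register number, the invented score fixes only how each event relates to the one before it, and the material itself is left unspecified, just as the quote above describes.

```python
# An invented example score: '+' raise, '-' lower, '=' repeat.
SCORE = ["+", "+", "-", "=", "+", "-", "-", "="]

def perform(score, initial, step=1):
    """Realize a fixed process from unspecified starting material.

    The *process* (the sequence of signs) is the composition; the
    *material* (the initial value, standing in for whatever the radio
    happens to deliver on a given night) varies from performance to
    performance.
    """
    events = [initial]
    for sign in score:
        prev = events[-1]
        if sign == "+":
            events.append(prev + step)
        elif sign == "-":
            events.append(prev - step)
        else:  # "="
            events.append(prev)
    return events

# Two "performances" of the same score from different found material:
print(perform(SCORE, initial=5))   # one night's radio sound
print(perform(SCORE, initial=42))  # same process, different material
```

Both runs trace the same contour of rises, falls and repetitions, which is exactly the sense in which two realizations of a process piece can sound "the same" while sharing no actual sound material.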
Stockhausen’s process pieces include Plus-Minus (1963), Prozession (1967), Kurzwellen, and Spiral (both 1968). Eventually they led to the text based processes of his intuitive music compositions in the cycles Aus den sieben Tagen (1968) and Für kommende Zeiten (1968–70).
Kurzwellen (Short Waves), the third of the process pieces, also marks the beginning of Stockhausen’s magnificent voyage using shortwave receivers as a medium for musical transportation. The formal procedures in Kurzwellen (and the others) are fixed. But Stockhausen does not think of them as fixed in the way Beethoven’s Fifth Symphony is fixed, sounding the same to a greater or lesser degree from recording to recording or performance to performance. Only the processes themselves are fixed. These are indicated primarily by plus, minus, and equal signs and constitute the composition.
Yet the sound materials themselves, like the knobs on the tuners, are variable. The process scores can be followed and bring about very different results each time they are played and yet somehow still sound similar. The sound material coming in from the shortwave radios is unpredictable. Yet the prescribed processes themselves can be heard from one performance to another as being "the same". These developments in musical theory and practice make live performances and new recordings exciting events.
The sounds coming in from the radio are what the players use as source material for the process of transformation indicated by the score. Each player has a radio at their station. Stockhausen writes, “An undreamed intensity of listening and of intuitive playing is reached – and shared by all co-players and listeners – through the concentration of all players on unforeseeable events coming from the realm of short-waves, in which one only very rarely knows who composed or produced them, how they came into being or from where, and in which all possible acoustic phenomena can appear.”
In practice the performers search for desirable sounds on the radio. These are for the most part the more abstract and noisy sounds found in the spectrum. Then they replicate those sounds on their instruments and transform them by using variations in register, volume, duration or rhythmic density. There are additional instructions in the score for players to form synchronous duo, trio and quartet events, where players play together in tandem, or alternatively trade short events with one another.
Part of the reason Stockhausen prescribed shortwave receivers, rather than just the AM and FM broadcast band receivers most often used by John Cage, is that they pulled in sounds from around the world. This played into his idea of creating a kind of world music. Shortwave also has a rich variety of sounds that allows the musicians greater freedom in finding sound material to transform.
He continued to use shortwave radios in the pieces Spiral, Pole for 2, and Expo for 3. Writing of Spiral the composer says, "Doesn't almost everyone own a short-wave receiver? And doesn't everyone have a voice? Wouldn't it be an artful way of life for everyone, to transform the unexpected (which one can receive on a short-wave radio) into new music - i.e. into a consciously-formed sound process which awakens all intuitive, mental, sensitive and artistic faculties, and makes them become creative, so that this awareness and these faculties rise like a spiral?!"
Expo is in some ways the culmination of these pieces, though it shares close similarities with Spiral and Pole, differing mostly in the number of players. All can be heard as part of the same family of process pieces using shortwave radio. Expo was written for Stockhausen's 1970 stay in Japan at the World Fair in Osaka ("EXPO '70"). For the Fair Stockhausen conceived a large spherical auditorium that was then developed by his collaborator, the architect Fritz Bornemann. Outfitted with 50 loudspeakers, the auditorium literally surrounded the audience with sound on all sides. Karlheinz was able to control the movement of the sound mix around these speakers, moving the audio vertically and horizontally. Sometimes he created rising and falling spiral motions using what was termed a "rotation mill". There were also various balcony stages and platforms that served as podiums, giving the works performed there further spatial dimension. For 183 days his crew of twenty performed daily from 3:30 to 9pm, with breaks for individual musicians, I’m guessing. The German pavilion became one of the main attractions at Expo '70.
These pieces represent a kind of music where both musicians and listeners must surrender completely to the process without worrying about the outcome. As humans this “not worrying about the outcome” of an action or a path taken can be a brutal challenge. These works embody a philosophy that has the effect of helping me to worry less about outcomes in my life. Process music as applied to my life gives me a sense of freedom from the outcome of an action. This allows me to be more present with the action itself as it happens, whether it is writing, radio, or some other activity. Listening to process music reminds me that I need to surrender to what I am doing in the moment. Surrender is difficult. Part of the joy to be found in the arts is submitting to how they grasp hold of us. Listening itself becomes a transformation.
To the amateur radio or SWLing enthusiast the sounds of Kurzwellen will be familiar. The static crashes and buzzes, warbling of telemetry, announcers in multiple languages and mysterious numbers stations are sweet nectars of sound for the radio hobbyist. Listening to these recordings is like drinking a fine wine. I prefer it served in a darkened room with ears open to the world.
http://stockhausenspace.blogspot.com/ (plus/minus series of articles)
The works of Karlheinz Stockhausen, by Robin Maconie, 2nd edition.
Gesang der Jünglinge
There is a mystery in the sounds of the vowels. There is a mystery in the sound of the human voice as it is uttered from the mouth and born into the air. And there is a mystery in the way electrons, interacting inside an oscillating circuit, can be synthesized and made to sing. Karlheinz Stockhausen set out to investigate these mysteries of human speech and circuitry as a scientist of sound, using the newly available radiophonic equipment at the WDR’s Studio for Electronic Music. The end result of his research was channeled into the vessel of music, giving the ideas behind his inquiries an aesthetic and spiritual form. In doing so he unleashed his electroacoustic masterpiece Gesang der Jünglinge (Song of the Youths) into the world.
Part of his inspiration for Gesang der Jünglinge came from his studies of linguistics and phonetics at the University of Bonn between 1954 and 1956 with his mentor Werner Meyer-Eppler. The other part came from his spiritual inclinations. At the time of its composition Stockhausen was a devout Catholic. His original conception for the piece was for it to be a sacred electronic Mass born from his personal conviction. According to the official biography, he had asked his other mentor Herbert Eimert to write to the Diocesan office of the Archbishop for permission to have the proposed work performed in the Cologne Cathedral, the largest Gothic church in northern Europe. The request was refused on the grounds that loudspeakers had no place inside a church. No records of this request have been uncovered, so this story is now considered apocryphal. There are doubts that Eimert, who was a Protestant, ever actually brought up the subject with Johannes Overath, the man at the Archdiocese responsible for granting or denying such requests, who by March 1955 had become a member of the Broadcasting Council. It is likely Eimert and Overath were associates. What we can substantiate is that Stockhausen did have ambitions to create an electronic Mass, and that he experienced frustrations and setbacks in his search for a suitable sacred venue for its performance, one that would be sanctioned by the authorities at the church.
These frustrations did not stop him, however, from realizing his sound-vision. The lectures given by Meyer-Eppler had seeded inspiration in his mind, and those seeds were in the form of syllables, vowels, phonemes, and fricatives. Stockhausen set to work creating music where voices merged in a sublime continuum with synthetic tones that he built from scratch in the studio. To achieve the desired effect of mixing human voice with electronics he needed pure speech timbres. He decided to use the talents of Josef Protschka, a 12-year-old boy chorister who sang fragments derived and permutated from the “Song of the Three Youths in the Fiery Furnace” in the 3rd book of Daniel. In the story three youths are tossed into the furnace by King Nebuchadnezzar. They are rescued from the devouring flames by an angel who hears them singing a song of their faith. This story resonated strongly with Stockhausen at the time. He considered himself to be a fiery youth. Still in his twenties, he was full of energy, but was under verbal fire and critical attack from the classical music establishment, who lambasted him for his earlier works. Gesang der Jünglinge showed his devotion to the divine through song despite this persecution.
The electronic bedrock of the piece was made from generated sine tones, pulses, and filtered white noise. The recordings of the boy soprano’s voice were made to mimic the electronic sounds: vowels are harmonic spectra which may be conceived as based on sine tones; fricatives and sibilants are like filtered white noise; and the plosives resemble the pulses. Each part of the score was composed along a scale that ran from discrete events to statistically structured massed "complexes" of sound. The composition is now over sixty years old, yet the synthetic and organic textures Stockhausen pioneered for it are still fresh. They speak of something new, and angelic.
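The correspondence between speech sounds and electronic sources can be sketched in code. This is only an illustration of the three sound families named above (sine tones for vowels, filtered white noise for fricatives, pulses for plosives), not Stockhausen’s actual studio procedure; the sample rate and all parameters are arbitrary choices for the sketch.

```python
import math
import random

SAMPLE_RATE = 8000  # arbitrary rate chosen for this illustration

def sine_tone(freq_hz, n_samples):
    """Vowel-like material: harmonic spectra reduce to pure sine tones."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

def white_noise(n_samples, seed=0):
    """Fricative/sibilant-like material: random samples, i.e. white noise.
    (In the studio this would be band-limited with a filter.)"""
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n_samples)]

def pulse_train(period, n_samples):
    """Plosive-like material: isolated unit impulses with silence between."""
    return [1.0 if n % period == 0 else 0.0 for n in range(n_samples)]
```

Arranging events from these three generators along a scale from isolated clicks to dense statistical clouds is, in miniature, the compositional space the paragraph above describes.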
Stockhausen eventually triumphed over his persecution when he won the prestigious Polar Music Prize (often considered the "Nobel Prize of music") in 2001. At the ceremony he controlled the sound projection of Gesang der Jünglinge through the four loudspeakers surrounding the audience.
These breakthroughs in 20th century composition practice wouldn’t have been possible without the foresight of the WDR in creating an Electronic Music Studio and promoting new music on their stations.
As the world caught wind of the work being done at the WDR’s Electronic Music Studio, other radio stations and broadcasting corporations followed suit. NHK (Nippon Hoso Kyokai) in Japan built their electronic music studio in 1955, directly modeling it on the one at WDR. In 1958 the BBC created their famous Radiophonic Workshop. (I blame starting to watch Doctor Who as a ten year old, with its strange soundtrack and incidental music, for what became my lifelong fascination with electronic music.) The studio at NHK was just over ten years old when they invited Stockhausen over to work there and create two pieces for their airwaves.
When he arrived in Japan Karlheinz was severely jet lagged and disoriented. For several days he couldn’t sleep. That’s when the strange hallucinatory visions set in. Lying awake in bed one night his mind was flooded with ideas of "technical processes, formal relationships, pictures of the notation, of human relationships, etc.—all at once and in a network too tangled up to be unraveled into one process.” These musings of the night took on a life of their own and from them he created Telemusik.
Of Stockhausen’s many ambitions, one was to make a unified music for the whole planet. He was able to do that in this piece, though the results sounded nothing like the “world music” or “world beat” genre often found on CD racks in coffee houses and gift shops. In the 20 minutes of the piece he mixed in found sounds, folk songs and ritual music from all over the world, including Hungary, Spain, China, Japan, Bali and Vietnam, as well as the Amazon and the Sahara. He also used new electronic sounds and traditional Japanese instruments to create what he called "a higher unity…a universality of past, present, and future, of different places and spaces: TELE-MUSIK." This practice of taking and combining sound sources from all over is now widely practiced across all genres of music in the form of sampling. But for Karlheinz it wasn’t simply a matter of making audio collage or taking one sample to build a song around. Even though he used samples from existing recordings to make something different, he also developed a new audio process that he termed intermodulation.
In his own words he speaks of the difference between collage and intermodulation. “I didn’t want a collage, I wanted to find out if I could influence the traits of an existing kind of music, a piece of characteristic music, using the traits of other music. Then I found a new modulation technique, with which I could modulate the melody curve of a singing priest with electronic timbres, for example. In any case, the abstract sound material must dominate, otherwise the result is really mishmash, and the music becomes arbitrary. I don’t like that.” For example, he used "the chant of monks in a Japanese temple with Shipibo music from the Amazon, and then further imposing a rhythm of Hungarian music on the melody of the monks. In this way, symbiotic things can be generated, which have never before been heard."
Stockhausen kept the pitch range of Telemusik deliberately high, between 6 and 12 kHz, so that the intermodulation could occasionally project sounds downwards. He wanted some of the sections to seem “far away because the ear cannot analyse it” and then abruptly enter “the normal audible range and suddenly became understandable". The title of the piece comes from the Greek tele, "afar, far off", as in "telephone" or "television". The music works consistently to bring what was “distant” close up. Cultures which were once far away from each other can now be seen up close, brought together by the power of telecommunications systems, new media formats, new music. By using recordings of traditional folk and ritual music from around the world, Stockhausen brought the past up close and into the future by mixing it with electronics.
To accomplish all this at the NHK studio he used a 6-track tape machine and a number of signal processors, including high and low-pass filters, amplitude modulators and other existing equipment. Stockhausen also designed a few new circuits for use in the composition. One of these was the Gagaku Circuit, named after the Japanese gagaku orchestra music it was designed to modulate. It used 2 ring-modulators in series to create double ring-modulation mixes of the sampled sounds. A 12 kHz base frequency was used in both the 1st and 2nd ring-modulation, with a glissando in the 2nd ring-modulation stage. The music was then frequency-filtered in different stages at 6 kHz and 5.5 kHz.
Writer Ed Chang explains the effect of the Gagaku Circuit: “For example, in one scenario the 1st ring modulation A used a very high 12 kHz sine-wave base frequency, resulting in a very high-pitched buzzing texture (for example, a piano note of A, or 0.440 kHz, would become a high 12.440 kHz and 11.560 kHz). The 2nd ring-mod B base frequency (in this case with a slight glissando variation on the same 12 kHz base frequency) has the effect of ‘demodulating’ the signal (bringing it back down to near A). This demodulated signal is also frequency filtered to accentuate low frequencies (dark sound). These 2 elements (high buzzing from the 1st signal and low distorted sounds from the 2nd) are intermittently mixed together with faders. By varying the 2 ring-mod base frequencies and the 3 frequency filters, different effects could be achieved. This process of modulation and demodulation is what Stockhausen means when he says he was able to ‘reflect a few parts downwards’.”
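The sideband arithmetic in Chang’s example is easy to verify. A ring modulator multiplies its two inputs, which for sine waves yields only the sum and difference frequencies; the sketch below reproduces his numbers. Note that the 20 Hz glissando offset in the second stage is my own assumed value, chosen just to show the detuning effect.

```python
def ring_mod_sidebands(input_hz, carrier_hz):
    # Multiplying two sinusoids produces the sum and difference frequencies.
    return (carrier_hz + input_hz, abs(carrier_hz - input_hz))

# Stage 1: a piano A at 440 Hz into the 12 kHz carrier buzzes high up.
hi, lo = ring_mod_sidebands(440.0, 12_000.0)
print(hi, lo)  # 12440.0 11560.0 -- Chang's 12.440 kHz and 11.560 kHz

# Stage 2: a second carrier near 12 kHz "demodulates" the upper sideband.
# With an assumed 20 Hz glissando offset, the difference frequency lands
# near, but not exactly on, the original pitch -- the "dark" low component.
hi2, lo2 = ring_mod_sidebands(hi, 12_020.0)
print(lo2)  # 420.0 -- close to, but detuned from, the original 440 Hz
```

Sweeping the second carrier frequency (the glissando) slides that recovered pitch around, which is one way to hear why the demodulated layer sounds like a warped shadow of the source recording.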
The score was dedicated to the Japanese people and the first public performance took place at the NHK studios in Tokyo on 25 April 1966.
Telemusik prepared Stockhausen for his next monumental undertaking, Hymnen (Anthems), made at the WDR studio. The piece had already been started before Telemusik, but he had to set it aside while in Japan. Hymnen is a mesmerizing elaboration of the studio technique of intermodulation first mastered at NHK in Japan. It is also a continuation of his quest to make a form of world music at a time when the people of the planet were becoming increasingly connected. To achieve this goal he incorporated forty national anthems from around the globe into one composition. He had collected 137 anthems in the process of composing the piece, by writing to radio stations in those countries and asking them to send recordings to Germany. The piece has four sections, though it was first slated for six; the last two never materialized. These anthems from around the world are intermodulated into an intricate web of sound lasting around two hours. Thrown into the kaleidoscopic mix are all manner of other sounds produced from sine wave generators, shortwave radio, his own speaking voice, and many others. Whenever I listen to Hymnen the sounds of the music from different nations remind me of someone tuning across the shortwave bands. In the audio spectrum and in the radio spectrum, borders and boundaries are porous, permeable. And that is one of the things I love about amateur radio: the sharing of good will between women and men from all across the globe, our signals reaching each other across space to make the formerly distant close. Hymnen ends with a new anthem for a utopian realm called "Hymunion". Perhaps it can be reached through the shared communion that comes from truly listening to each other.
John Cage's composition Imaginary Landscape No. 4 wasn't the end of his engagement with the use of radio as a sound source. In fact his imagination, now glowing like a hot tube, was just getting warmed up. I will turn to his next experiments shortly, but I wanted to dwell for a moment on his earliest radio work, which I overlooked in last month’s article. I had quite forgotten about Cage's involvement with the Boy Scouts in Los Angeles in the early 1920's. It was during this time period that his fascination with radio was sealed. His father had built a crystal set that could be plugged into an electric light system. This effort got his father listed in the city directory as a "radio engineer", though he had previously been famous for his work on submarines. Cage Sr. had invented parts and systems for subs that helped keep them level, and also a system for running the engines on gasoline instead of batteries, which increased the speed of the subs. His father's flair for invention seems to have been passed on to Cage Jr. As a Tenderfoot in the Boy Scouts, John got the idea of hosting a scouting program on the radio. First he obtained permission from his organization, and then he approached LA station KFWB, who rejected his proposal. He next took his idea to KNX, and they gave the show the green light. It broadcast weekly on Friday afternoons. John at the time considered himself destined for the ministry, as his grandfather had been. As such he began each program with ten minutes of oratory from a local religious person, be they minister, rabbi, or priest. The rest of the show was devoted to singing Scout songs over the air, sometimes with John accompanying his fellows on the piano. Other topics included such favorites as building fires and tying knots. KNX is still on the air on 1070 kHz in L.A. as one of the original clear channel stations, blasting a non-directional 50,000 watts.
KNX had begun with a humble 5 watts when amateur Fred Christian put it on the air as 6ADZ. It was on these small beginnings, his first taste of the airwaves, that Cage built as a composer, presenter, and experimenter, creating works for radio and incorporating radios themselves into a number of pieces.
After Imaginary Landscape No. 4, Cage's next piece involving radio was written for a television program. His piece Water Walk lasts about three minutes and consists of many small actions relating to water. Using a stopwatch, he timed each of his sound-making actions to the precise second required by the score. Written for such fun sound-making devices as a gong with water gun and crushed ice in an electric mixer, it also includes five radios and a piano. He stopped at the radios and adjusted frequency and volume, then released steam from a kettle, and plinked a few keys on the piano. Water Walk appeared live on television twice, first in 1959 in Milan on the show Lascia o Raddoppia, an Italian version of the then-popular game show Double or Nothing. Returning home, he got the chance to share it with American audiences on I've Got a Secret in 1960.
Six years down the road came Variations VII, presented on two of the nights of 9 Evenings: Theatre and Engineering, which paired artists, musicians and performers with engineers from Bell Labs to present new works fusing technology with contemporary art practices. The 9 Evenings was the first in a series of projects that came to be known as E.A.T., or Experiments in Art and Technology. This was the first organized large-scale collaboration between artists, engineers, and scientists. The engineers included Max Mathews (whose work was covered previously in this column), Bela Julesz, Billy Klüver, John Pierce, Manfred Schroeder, and Fred Waldhauer, alongside many others, around 30 in total. There were 10 artists involved, including Cage, Robert Rauschenberg, David Tudor, and Robert Whitman. The collaboration between the artists and engineers produced a number of "firsts" for technology in the theater. Some were specially designed systems and equipment; others repurposed existing gear in innovative ways. Closed-circuit television and television projection were used on stage for the first time; an infrared television camera captured action in total darkness; a Doppler sonar device translated movement into sound; a fiber-optics camera picked up objects in a performer's pocket; and portable wireless FM transmitters and amplifiers transmitted speech and body sounds to loudspeakers. The performances took place October 13-23, 1966 at New York's 69th Regiment Armory, at Lexington Avenue and Twenty-Fifth Street. Around 1000 people attended each evening.
The engineering side of Cage's piece was overseen by Cecil H. Coker, whose primary area of focus was acoustic research, specializing in articulatory speech synthesis. Coker, with two colleagues, wrote the first software text-to-speech program in 1973. Coker had worked with Cage before on the piece Variations V, helping to develop a system for using photoelectric cells to provide lighting and randomly triggered sounds. Variations VII was considerably more involved, though it still used photoelectric cells as a key component for triggering sounds.
In composing Variations VII, Cage used no previously prepared sources of music. It consisted only of "those sounds which are in that air at the moment of performance." Part of the elaborate setup included ten telephone lines installed at the Armory and kept open at various locations in New York City. Some of the places they were connected to included Luchow's restaurant, the Aviary, the 14th Street Con Edison electric power station, the ASPCA lost dog kennel, The New York Times press room, Merce Cunningham's dance studio, and one next to fellow composer Terry Riley's turtle tank. Magnetic pickups on the telephone receivers fed these sound sources into Cage's sound manipulation system, and from there to a dozen loudspeakers, including one ceiling speaker. He also used 20 radios (one tuned to the police department dispatch), 2 television bands, and 2 Geiger counters. Oscillators and a pulse generator were further sound sources. Rounding it all off were a dozen household appliances, such as blenders, fans, a juicer, and a washing machine, wired with contact microphones. If that weren't enough, sounds from four wired body parts, heart, brain, lungs and stomach, were included in the unpredictable mix. The entire setup stood on a platform with equipment stretched across two long tables. Cage, David Tudor and three other musicians moved around between the rows twisting knobs, plugging and unplugging cords and circuits, and flipping switches. Adding further randomness to the mix were the 30 photocells and lights mounted at ankle level around the performance area. These triggered different sound sources as the performers, and the audience members who came in close to watch, moved around the setup.
Video artist Nam June Paik compared the roaring noise of Variations VII to a Niagara Falls of sound. Nothing like it had ever been heard before. And since so many of the sounds came from live sources, an exact sonic replica can never be recreated. Paik also considered it to be Cage's masterpiece performance in the realm of electronic music.
The Maker and Hacker movements have had great success in continuing to build relationships between the technically minded and the artistically minded. Ham radio has different restrictions imposed on it by the FCC. However, it seems to me that hams could still work in creative ways with artists and musicians, and continue to forge vital connections between art and technology.
Begin Again: A Biography of John Cage, by Kenneth Silverman, Alfred Knopf, New York, 2010.
Where the Heart Beats: John Cage, Zen Buddhism, and the Inner Life of Artists, by Kay Larson, Penguin Press, New York, 2012.
Reception: The Radio Works of Robert Rauschenberg and John Cage, by Alana Pagnutti, Smith and Brown, 2016.
The development of telecommunications technology and electronic circuits had a major impact on the creation of new musical instruments from the very beginnings of the field. But it was only in 1951 that a composer first got the idea that the radio itself could be used as a musical instrument. Since then the use of radio as a source for live, unpredictable sound, music, and voice has become commonplace across the genres of contemporary classical, and the various styles of electronic, rock and pop music. The next several installments of the music of radio series will explore some of the key composers and pieces of music that used radios as the primary instrument. Using the radio as an instrument has become part of what composer Alvin Curran has called "the new common practice" or grab-bag of themes, principles, and methods being used to create the sonic backdrop of the landscape that everyone now inhabits in this age of electronic multimedia.
"It's not a physical landscape. It's a term reserved for the new technologies. It's a landscape in the future. It's as though you used technology to take you off the ground and go like Alice through the looking glass." John Cage wrote this about his series of Imaginary Landscape compositions, which began in 1939 with No. 1, written for two variable-speed turntables, frequency recordings, muted piano, and cymbal. It was arguably the first piece of electroacoustic music ever composed. The turntables played test tones; some were constant, others had a sliding pitch. From the very beginning the piece was envisioned for radio, to be performed for either live or recorded broadcast. Cage, born in 1912, had been fascinated by the medium since he was a boy, when broadcasting was still in its infancy. Radio was so new that anything could be done with it. The lackluster formats most common on the broadcasting portions of the spectrum today could well use an injection of the wonder the medium held in those first few decades.
Imaginary Landscape No. 1 was written while Cage held a teaching position at the Cornish School in Seattle. The school had been founded by Nellie Cornish, who had received some education in radio technology from Edward R. Murrow when visiting him at the CBS station in New York. In 1936 she created at Cornish the first school for radio technology in the United States. The studio at the school was equipped with the latest broadcasting and recording gear. It was there that Cage first began to experiment with the use of electrical sounds for musical purposes. At that time he was deep into writing percussion music, and he began incorporating the sounds of radio and oscillator frequencies into these pieces. Reporting on Imaginary Landscape No. 1, the Seattle Star wrote that it was a "staccato roar of radio static and ghastly, ghostly whistles with intermittent shrieks". While this might have terrified listeners of the time, nowadays people take such music as a matter of course, paying it no mind, especially when it is used as the soundtrack or incidental music in film and television.
In 1941 Cage found himself spending a large part of the year in Chicago. It was here that his interest in radio music continued to grow. Around this time he published an article, "For More New Sounds", in the journal Modern Music. In this essay he wrote about the similarities between the materials used to create sound effects in radio studios and the instruments in the percussion wing of an orchestra. One of his interests was to bring radio sound effects to the concert hall. He wrote, "organizations of sound effects, with their expressive rather than representational qualities in mind, can be made. Such compositions could be represented by themselves as 'experimental radio music'". That same year he got to work with the poet Kenneth Patchen on a radio play for CBS. The first draft of the musical score was scrapped by the sound engineers, however. Some of the sounds he wanted to create, such as the escape of compressed air, were too expensive to produce for the program, he was told. After some revisions he eventually gave CBS something they considered acceptable. The resulting piece by Cage and Patchen, The City Wears a Slouch Hat, was broadcast on May 31st, 1942. The surreal text by the poet was mixed with sounds of telephones, crying babies, rain, foghorns and Cage's metallic percussion instruments. In 1942 he also wrote Nos. 2 and 3 in the Imaginary Landscape series. No. 2 was written for tin cans, conch shell, ratchet, bass drum, buzzers, water gong, metal wastebasket, lion's roar and amplified coil of wire. No. 3 required musicians to play tin cans again, muted gongs, audio frequency oscillators, variable-speed turntables with frequency recordings and recordings of generator whines, amplified coil of wire, amplified marimbula (a Caribbean instrument similar to the African thumb piano), and electric buzzer.
Imaginary Landscape No. 4 was first performed in 1951 and is scored for 12 radios played by 24 musicians, two on each radio: one to control the tuning, the other to control the volume. It is a great example of indeterminate music. The only guarantee about the piece is that no performance of it will ever be heard the same way twice. This is guaranteed because Cage used chance operations to determine how much the dials of each radio are to be turned by each performer. The novelty of each performance is also guaranteed by the nature of radio itself. Depending on the place and time of a performance, the things coming out of the radio speakers are going to be different. During its premiere concert at Columbia University's McMillin Theater, those in the audience heard the word "Korea" over and over again, as well as snippets of a Mozart violin concerto, news about baseball, static, and silence. The performance took place around midnight, and many of the stations in New York had already gone off the air for the night. Of course the silence never bothered Cage, who considered it an integral part of the experience. He had said that "silence, to my mind is as much a part of music as sound."
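For the programming-minded ham, the spirit of those chance operations is easy to sketch in code. The following toy Python "score generator" lets chance decide when each radio's operators move the tuning and volume dials. To be clear, the time grid, AM-band tuning steps, and volume scale here are my own illustrative inventions, not Cage's actual notation or method.

```python
import random

def chance_score(num_radios=12, events_per_radio=4, seed=None):
    """Generate a chance-determined score: for each radio, a sequence of
    timed tuning and volume moves. All value ranges are illustrative."""
    rng = random.Random(seed)
    score = {}
    for radio in range(1, num_radios + 1):
        events = []
        t = 0.0
        for _ in range(events_per_radio):
            t += rng.uniform(5, 30)  # chance decides seconds until next move
            events.append({
                "time": round(t, 1),
                "tuning_khz": rng.randrange(530, 1701, 10),  # AM broadcast band
                "volume": round(rng.random(), 2),            # 0.0 = silence
            })
        score[radio] = events
    return score

# Print the chance-determined part for the first two radios.
for radio, events in list(chance_score(seed=4).items())[:2]:
    print(radio, events)
```

As with the piece itself, no two runs (with different seeds) produce the same score, and silence gets equal billing: a chance-drawn volume of zero is as valid an event as any other.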
Listening to a recording of this piece from 2008 reveals the prevalence of country music and commercials. Voices come in and say things like "60 percent off" and read the weather and the latest buzz words in the news cycle. Many people listening today might be as confused about the "musical" quality of such a piece as they were back in 1951. But what John Cage has done is to ask people to tune in and experience the unpredictable sounds and signals coming in from the radios and from the world, as a form of music.
The Imaginary Landscape compositions came to a close with No. 5, a work for magnetic tape recorder and any 42 phonograph records. This piece in the series was written in the same year he began work on Williams Mix, for eight simultaneously played independent quarter-inch magnetic tapes, which became the first piece of octophonic music. Until his death in 1992, John Cage continued to work musically with new technology, including early computer music compositions in the 1960s. A number of other composers and musicians have taken a vast amount of inspiration from Cage's work with radio and continued to build on it. These will be explored in further transmissions.
A lot of these recordings are available to listen to on the wonderful UbuWeb:
Begin Again: A Biography of John Cage, by Kenneth Silverman, Alfred Knopf, New York, 2010.
Where the Heart Beats: John Cage, Zen Buddhism, and the Inner Life of Artists, by Kay Larson, Penguin Press, New York, 2012.
In wireless communications spread spectrum radio is a transmission technique where the frequency of the signal is intentionally varied. This gives the signal a much greater bandwidth than if its frequency had remained constant. In the conventional transmission and receiving of signals, the frequency does not change over time, except for small fluctuations due to modulation. The signal is kept on a single frequency so two people communicating can exchange information, or so a listener in the broadcast bands knows exactly where to go to find his favorite station.
That is all fine and dandy for typical uses of radio. But as radio has developed the inventors and researchers who expanded the state of the art found a couple of hitches that made it problematic for certain types of signals to remain parked on one frequency. The first was interference caused by deliberate jamming on the desired frequency. This category also included other non-malicious interference coming from transmissions on nearby frequencies. The second issue with using only one frequency in a communication is when the information being transmitted is of a sensitive nature. Constant-frequency signals are easy to intercept. The military and others can make use of codes and encryption to veil transmissions on single frequencies, but codes can be broken. Radio researchers found that another layer of communication security could be added by the use of frequency-hopping which was the first technique established in spread spectrum radio.
Though attributed to multiple inventors, the first patent for frequency hopping was granted to actress Hedy Lamarr and composer George Antheil in 1942 for their "Secret Communications System" that was designed to protect Allied radio-guided torpedoes from being jammed by the Axis powers. Both Hedy and George are most remembered for their main fields of activity, movies and music, but they each had a touch of the polymath inside of them, and their other passions allowed them to make a significant advance in the radio arts.
Hedy was born in 1914 in Vienna and started training in the theater as a teenager in the 1920s. By the age of eighteen she had married the first of her six husbands, Friedrich "Fritz" Mandl, a wealthy munitions manufacturer whose weapon systems later gave her inspiration for the patent. During this time she had started a career in film in Czechoslovakia with the 1933 film Ecstasy, which became controversial for its frank depictions of nudity and sexuality. Hubby Mandl got a bit ticked off by these movie scenes and attempted to stop Hedy from continuing her career as an actress. In her autobiography Ecstasy and Me she claimed that she was kept virtually a prisoner in their Austrian castle home. She wrote, "I knew very soon that I could never be an actress while I was his wife.... He was the absolute monarch in his marriage.... I was like a doll. I was like a thing, some object of art which had to be guarded—and imprisoned—having no mind, no life of its own". And Hedy had a keen mind, with a natural talent for science and invention.
Both Mandl and Lamarr had Jewish parents, but Mandl also had business ties with the Nazi government, to whom he sold his weapons. Mussolini and Hitler were among those who attended the lavish parties Mandl hosted at their Schloss Schwarzenau castle. Hedy would accompany him to his meetings, where she got to associate with scientists and professionals involved in military technology. It was at these conferences that her interests in inventing and applied science were first sparked.
As her marriage grew unbearable she decided to flee to Paris, where she met movie mogul Louis B. Mayer, who was scouting for talent. With all the trouble brewing in Europe he found it easy to persuade her to move to Hollywood, where she arrived in 1938 and began work on the film Algiers. She was in a number of other popular feature films, including I Take This Woman (1940), Comrade X (1940), Come Live With Me (1941), H.M. Pulham, Esq. (1941), and her most famous role in Cecil B. DeMille's Samson and Delilah (1949). After starring in the comedy My Favorite Spy (1951) with Bob Hope, her acting career started to peter out.
It was during the height of WWII, and of her career, that she grew bored with acting. Hedy complained that the roles given to her required little in terms of technique or the delivery of lines and monologues. Mostly the films she starred in cast her for her beauty rather than her talent and ability. Stifled by the lack of more demanding roles, she found an outlet for her intellectual capacities in the hobby of tinkering and inventing, which was nurtured by her friendship with aviation tycoon Howard Hughes.
Lamarr had some ideas about using radio controlled torpedoes in the war effort. To help her in its implementation she eventually tapped composer George Antheil, who had also found success in Hollywood scoring films. Antheil had been a part of the Lost Generation, and like many of his contemporaries such as Ernest Hemingway, he had moved to Europe after the horrors of the first World War to live a bohemian and artistic life amidst the cafes and salons of Paris in the 1920's. It was during this time period when he composed his best known work Ballet Mecanique. It began its life as an accompaniment to the Dadaist film of the same name made by Fernand Léger and Dudley Murphy, with cinematography by Man Ray. The techniques Antheil developed in this composition were to be key to the success of his shared frequency hopping patent.
Ballet Mecanique was scored to use a number of player pianos. He described their effect as "All percussive. Like machines. All efficiency. No LOVE. Written without sympathy. Written cold as an army operates. Revolutionary as nothing has been revolutionary." There are no human dancers; the mechanical instruments are what make it a ballet. Antheil's original conception was to use 16 specially synchronized player pianos, two grand pianos, electronic bells, xylophones, bass drums, a siren and three airplane propellers. There were a number of difficulties involved in this setup that broke away from traditional orchestral arrangements; the synchronization of the player pianos proved to be the largest obstacle. The piece consists of periods of music and interludes of relative silence filled by the droning roar of airplane propellers. Antheil described it as "the rhythm of machinery, presented as beautifully as an artist knows how."
Besides composing, Antheil was a writer and fierce patriot. He was a member of the Hollywood Anti-Nazi League and wrote a book of predictions about WWII titled The Shape of the War to Come. He also penned a nationally syndicated newspaper column on relationship advice, and he fancied himself an expert on the subject of female endocrinology. His interest in this area was what first brought him into contact with Hedy. She had sought him out for advice on how she might enhance her upper torso. After he proposed that she could make use of glandular extracts, their conversation turned to the kind of torpedoes being used in the war.
Lamarr was herself a staunch supporter of her adopted country, though she didn't become a naturalized citizen until 1953. Using knowledge gained from her first marriage to the munitions manufacturer, she had the insight that radio-controlled torpedoes would excel in the fight against the Axis powers. However, the radio signals could easily be jammed and the torpedo sent off course. Working with Antheil, she devised their "Secret Communications System".
The action of composing for the player pianos helped Antheil with one of the aspects of creating their system, which had a striking resemblance to the still top secret SIGSALY system. It is best described in the overview of their patent number 2,292,387: "Briefly, our system as adapted for radio control of a remote craft, employs a pair of synchronous records, one at the transmitting station and one at the receiving station, which change the tuning of the transmitting and receiving apparatus from time to time, so that without knowledge of the records an enemy would be unable to determine at what frequency a controlling impulse would be sent. Furthermore, we contemplate employing records of the type used for many years in player pianos, and which consist, of long rolls of paper having perforations variously positioned in a plurality of longitudinal rows along the records. In a conventional player piano record there may be 88 rows of perforations, and in our system such a record would permit the use of 88 different carrier frequencies, from one to another of which both the transmitting and receiving station would be changed at intervals. Furthermore, records of the type described can be made of substantial length and may be driven slow or fast. This makes it possible for a pair of records, one at the transmitting station and one at the receiving station, to run for a length of time ample for the remote control of a device such as a torpedo. The two records may be synchronized by driving them with accurately calibrated constant-speed spring motors, such as are employed for driving clocks and chronometers. However, it is also within the scope of our invention to periodically correct the position of the record at the receiving station by transmitting synchronous impulses from the transmitting station. 
The use of synchronizing impulses for correcting the phase relation of rotary apparatus at a receiving station is well-known and highly developed in the fields of automatic telegraphy and television."
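The patent's scheme is easy to sketch in modern code. In the toy Python example below, both stations derive an identical hop sequence over 88 channels (one per row of a player-piano roll) from a shared secret, standing in for the synchronized paper rolls; a receiver holding the same "roll" follows the message from channel to channel, while anything sent on the wrong channel is ignored. The seed, message, and channel numbering are my own illustrations, not anything from the patent.

```python
import random

NUM_CHANNELS = 88  # one carrier frequency per piano-roll row, as in the patent

def hop_schedule(shared_seed, hops):
    """Derive a hop sequence from a shared secret, standing in for the
    synchronized paper rolls at the transmitting and receiving stations."""
    rng = random.Random(shared_seed)
    return [rng.randrange(NUM_CHANNELS) for _ in range(hops)]

def transmit(message, schedule):
    """Pair each symbol with the channel it is sent on during that interval."""
    return [(schedule[i % len(schedule)], sym) for i, sym in enumerate(message)]

def receive(on_air, schedule):
    """Recover symbols by listening on the agreed channel each interval;
    energy on any other channel (e.g. a jammer) is simply not heard."""
    out = []
    for i, (channel, sym) in enumerate(on_air):
        if channel == schedule[i % len(schedule)]:
            out.append(sym)
    return "".join(out)

# Both ends derive the identical schedule from the shared secret.
sched = hop_schedule(shared_seed=1942, hops=16)
sent = transmit("STEER LEFT", sched)
print(receive(sent, sched))  # a receiver holding the right "roll" hears: STEER LEFT
```

A jammer parked on any single frequency can corrupt at most the occasional symbol, and an eavesdropper without the roll has no way to know which of the 88 channels carries the next impulse.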
Although the US Navy did not adopt their technology until the 1960s the principles of their work continue to live on and are now used in everyday devices such as Wi-Fi, CDMA, and Bluetooth technology. Spread spectrum systems are also used in the unregulated 2.4 GHz band and on some walkie-talkies that operate in the 900 MHz portion of the spectrum. Other spread spectrum techniques include direct-sequence spread spectrum (DSSS), time-hopping spread spectrum (THSS), and chirp spread spectrum (CSS).
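Direct-sequence spreading works differently from hopping: instead of changing frequency, each data bit is combined with a fast pseudo-random "chip" code, smearing the signal across a wide band; correlating against the same code at the receiver recovers the data. Here is a minimal Python caricature of the idea, using an illustrative 8-chip code (real systems use far longer codes and RF waveforms rather than bit lists):

```python
CHIP_CODE = [1, 0, 1, 1, 0, 0, 1, 0]  # illustrative 8-chip spreading code

def spread(bits, code):
    """DSSS spreading: XOR each data bit against every chip of the code,
    so one data bit becomes len(code) transmitted chips."""
    return [b ^ c for b in bits for c in code]

def despread(chips, code):
    """Correlate each code-length group against the code; a majority vote
    recovers the bit even if narrowband interference flips a few chips."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        group = chips[i:i + n]
        agree = sum(1 for g, c in zip(group, code) if g ^ c == 0)
        bits.append(0 if agree > n // 2 else 1)
    return bits

data = [1, 0, 1, 1]
tx = spread(data, CHIP_CODE)
tx[3] ^= 1  # narrowband interference corrupts one chip in flight
print(despread(tx, CHIP_CODE))  # → [1, 0, 1, 1]
```

The redundancy of the chips is what buys the interference resistance: the jammer has to corrupt most of a code group, not just one chip, to flip a bit.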
In 2008 Elyse Singer wrote the script for an off-Broadway play, Frequency Hopping, that features the lives of Lamarr and Antheil. It won a prize for best new play about science and technology. Hedy and George's pioneering work eventually led to their posthumous induction into the National Inventors Hall of Fame in 2014.
Ecstasy and Me, by Hedy Lamarr.
The Bad Boy of Music, by George Antheil.
George Antheil, Ballet Mecanique: Digital Re-creation of the Carnegie Hall Concert of 1927, Conducted by Maurice Peress, Music Masters Inc. 1992.
In last month's episode I explored the genesis of the first song uttered by a computer, Daisy Bell, and how that song ended up in 2001: A Space Odyssey. In this last installment on the history of speech synthesis I'll track the use of the vocoder in popular music on up to its implementation into the DMR radios that are currently a big buzz in the ham community.
In 1968 synth wizard Robert Moog built the first solid-state vocoder. Two years later Moog built another musical vocoder, working with Wendy Carlos. This was a ten-band device inspired by Homer Dudley's original designs. The carrier signal came from a Moog modular synthesizer; the modulator was the input from a microphone. The brilliant application of this instrument made its debut in Stanley Kubrick's film A Clockwork Orange, where the vocoder sang the vocal part from the fourth movement of Beethoven's Ninth Symphony, the section titled "March from A Clockwork Orange" on the soundtrack. It's something I could sit down and listen to on repeat while enjoying a fine glass of moloko velocet. This was the first recording made with a vocoder, and I find it interesting that the two earliest uses of speech synthesis for music ended up in films made by Kubrick. "Timesteps", an original piece written by Wendy, is also featured on the soundtrack. She had originally intended it as a mere introduction to the vocoder for those who might consider themselves "timid listeners", but Kubrick surprised Wendy by including it in his dystopian masterpiece.
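The carrier/modulator principle is easy to caricature in code. A real channel vocoder like Carlos's splits the modulator into ten bands and measures the loudness of each; the toy Python sketch below collapses that to a single band, imposing a speech-like on/off amplitude contour onto a synthesizer "buzz". The sample rate, frequencies, and window size are arbitrary illustrations.

```python
import math

def envelope(signal, window):
    """Crude envelope follower: moving RMS over the last `window` samples."""
    env = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1): i + 1]
        env.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return env

def vocode(modulator, carrier, window=64):
    """Impose the modulator's amplitude contour on the carrier --
    a one-band caricature of the multi-band analyzer/synthesizer."""
    env = envelope(modulator, window)
    return [e * c for e, c in zip(env, carrier)]

rate = 8000
n = 2000
# Modulator: a tone that switches on and off, like syllables of speech.
mod = [math.sin(2 * math.pi * 220 * t / rate) if (t // 500) % 2 == 0 else 0.0
       for t in range(n)]
# Carrier: a bright sawtooth buzz standing in for the synthesizer.
car = [2 * ((440 * t / rate) % 1.0) - 1 for t in range(n)]
out = vocode(mod, car)
```

The output "speaks" in the rhythm of the modulator but with the timbre of the carrier, which is exactly the trick that lets a synthesizer sing Beethoven.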
Coming down the road in 1974 was the classic album Autobahn by the German krautrockers Kraftwerk. This was the first commercial success for the power-station of a group. Their previous three albums had been highly experimental, though well worth an evening of listening. Kraftwerk's contribution to the popularization of electronic music remains huge. Besides using commercial gear such as a Minimoog, an ARP Odyssey, and an EMS Synthi AKS, Kraftwerk were dedicated homebrewers of their own instruments. Listening to the album now, I can imagine the band soldering something together in the back of a Volkswagen Westfalia as they cruise down the highway at 120 km/h to their next gig.
Three years later in 1977 Electric Light Orchestra released the album Out of the Blue, much to the delight of discerning listeners everywhere. There is nothing quite like the music of ELO to lift me out of the melancholy I often find myself in during the middle of winter, when spring seems far away. "Mr. Blue Sky" and "Sweet Talking Woman" are songs that toggle the happy switches in my brain. When I hear them things brighten up. This is due in no small part to the judicious use of the vocoder. ELO was in love with the vocoder, and it can be found littered across their recordings. (As a bit of a phone phreak, another favorite cut of mine is "Telephone Line".)
During the 1980s the vocoder started being used by early hip-hop and rap groups. Dave Tompkins, author of How to Wreck a Nice Beach: The Vocoder from WWII to Hip-Hop, notes the echo of history in the vocoder's use alongside two turntables in the SIGSALY program: DJs likewise use two turntables to mix and scratch phat beats while a rap MC drops lyrics over the sounds coming off the vinyl, sometimes processing those vocals through the vocoder. The use of the vocoder continues to the present on hip-hop and jazz fusion albums such as Black Radio (1 & 2) from the Robert Glasper Experiment.
While the vocoder was enjoying great success in the entertainment industry, its use in telecommunications was still ticking away, a bit more quietly, in the background. Since the 1970s most of the tech in this area has focused on linear predictive coding (LPC), a powerful speech analysis technique that represents the spectral envelope of a digital speech signal in compressed form, using the information from a linear predictive model. When it came out, the NSA were among the first to get their paws on it, because LPC can be used for secure wireless, with digitized and encrypted voice sent over a narrow channel. An early example of this is Navajo I, a telephone built into a briefcase to be used by government agents. About 110 of these were produced in the early '80s. Several other vocoder systems are used by the NSA for encryption (that we are allowed to know about).
Phone companies like to use LPC for speech compression because it encodes accurate speech at a low bit rate, saving them bandwidth. This had been Homer Dudley's original intention with his first vocoding experiments back in the 1930s. LPC has since become standard in the GSM protocol for cellular networks: GSM uses a variety of voice codecs that implement the technology to jam 3.1 kHz of audio into 6.5 to 13 kbit/s of transmission. This is why, to my ear, smart phones, for all the cool things they can do with data, apps and GPS, will never sound as good for voice as an old-school toll call on copper wires. LPC is also used in VoIP.
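At its core, LPC models each speech sample as a weighted sum of the previous few samples, so the codec only has to send the weights (plus a low-rate residual) instead of the waveform itself. Below is a bare-bones Python sketch of the classic autocorrelation method with the Levinson-Durbin recursion, run on a made-up one-formant "speech" frame; real codecs use higher orders (often 8-10) and many refinements, so treat this as an illustration of the math, not a codec.

```python
import math

def autocorrelation(x, max_lag):
    """r[lag] = sum of x[i] * x[i + lag] over the analysis frame."""
    return [sum(x[i] * x[i + lag] for i in range(len(x) - lag))
            for lag in range(max_lag + 1)]

def levinson_durbin(r, order):
    """Solve the LPC normal equations for predictor coefficients a[1..order].
    Returns (a, prediction_error_energy); a[0] is fixed at 1."""
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                  # reflection coefficient for this order
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= 1 - k * k                # remaining prediction error energy
    return a, err

# A made-up "speech" frame: one decaying resonance, like a single formant.
frame = [0.9 ** t * math.sin(2 * math.pi * 0.1 * t) for t in range(200)]
r = autocorrelation(frame, 2)
coeffs, residual = levinson_durbin(r, order=2)
# A 2nd-order predictor captures the resonance; the small residual is
# (roughly) all a codec needs to send alongside the two coefficients.
```

Two numbers describing the filter, plus a faint residual, stand in for two hundred samples of waveform: that is the bandwidth saving Dudley was after.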
LPC has also been used in musical vocoding. Paul Lansky created the computer music piece notjustmoreidlechatter using LPC. A 10th-order derivative of LPC was used in the popular 1980s Speak & Spell educational toy. These became popular for experimental musicians to hack in a process known as circuit bending, where the toy is taken apart and the connections re-soldered to make sounds not originally intended by the manufacturer. This technique was pioneered and developed into a high art form by Cincinnati maker and musician Q. Reed Ghazala. Reed's experimental instruments have been built for Tom Waits, Peter Gabriel, King Crimson's Pat Mastelotto, Faust, Chris Cutler, Towa Tei, Yann Tomita, Blur and many other interesting musicians. And not-so-interesting ones (to me) such as Madonna. A future edition of The Music of Radio will cover his work in detail, but a lot can be found on his website anti-theory.net.
Finally, vocoders are utilized in the DMR radios that are currently gaining popularity among hams around the world. In Ohio the regional ARES groups are being encouraged to utilize this mode as another tool in the box. DMR is an open digital mobile radio standard. DMR, along with P25 Phase II and NXDN, are the main competing technologies for achieving 6.25 kHz equivalent bandwidth, using the proprietary AMBE+2 vocoder. This vocoder uses multi-band excitation to do its speech coding. Besides its use in DMR, AMBE+2 is also used in D-STAR, Iridium satellite telephone systems, and OpenSky trunked radio systems.
From what I've heard, I don't really care for the audio quality of DMR, much as on cell phones. My ears would rather dig through the mud of the HF bands than listen to the way speech is compressed in these modes. I think the vocoder is better suited to music studios, where it can be used for aesthetic effect. However, with the push to use these radios in ARES, and needing something to play with at OH-KY-IN's digital night on the fourth Tuesday of the month, I do plan on taking the plunge into DMR. And when I do, I will know that every time I have a QSO on the DMR platform I am taking part in a legacy that started with Homer Dudley's insights into the human vocal system as a carrier wave for speech. A legacy that stretches across the fields of telecommunication, cryptology and popular music.
Chip Talk: Projects in Speech Synthesis by David Prochnow, Tab Books, 1987.
...and some other research on the interwebs.
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.