The elements of Linear Predictive Coding (LPC) were built on Norbert Wiener’s work from the 1940s, when he developed a mathematical theory for calculating the optimal filters for finding signals in noise. Claude Shannon quickly followed Wiener with his breakthrough work A Mathematical Theory of Communication, which included a general theory of coding. [For more on Wiener and Shannon see Chapter 3.] With new mathematical tools in hand, researchers started exploring predictive coding. Linear prediction is a form of signal estimation, and it was soon applied to speech analysis.
In signal processing, communications, and related fields, the term “coding” generally means putting a signal into a format better suited to a given task. In a coding scheme, like Morse code for instance, an encoder takes the signal and puts it into a new format. The decoder takes it out of the new format and restores the original one.
The “predictive” aspect of coding has been used in numerous scientific theories and engineering techniques. What they have in common is that they predict future observations from past observations. The combined term “predictive coding” was coined by information theorist Peter Elias in 1955 in his two papers on the subject.
In LPC, each sample of a signal is predicted using a linear function of the previous samples. In mathematics a linear function is one whose variables carry no exponents: a function that graphs to a straight line. This works with speech because nearby samples are correlated with each other to a high degree. The error between a predicted sample and the actual sample is transmitted along with the coefficients; if the prediction is good, the error is small and takes up less bandwidth. In this sense, LPC becomes a type of compression based on source coding.
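To make the mechanics concrete, here is a minimal sketch in Python (an illustration, not any historical implementation) of linear prediction by the autocorrelation method: fit coefficients so that each sample is approximated from the previous p samples, then keep the prediction error.

```python
import numpy as np

def lpc_analyze(frame, order=10):
    """Fit coefficients a[1..p] so that x[n] ~ a1*x[n-1] + ... + ap*x[n-p],
    using the autocorrelation (Yule-Walker) normal equations."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def lpc_residual(frame, a):
    """Prediction error: the actual sample minus its linear prediction."""
    p = len(a)
    pred = np.zeros(len(frame))
    for n in range(p, len(frame)):
        pred[n] = a @ frame[n - p:n][::-1]  # weighted sum of the last p samples
    return frame - pred
```

When the prediction is good, the residual carries far less energy than the speech itself, which is exactly where the bandwidth saving comes from.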
Towards the end of the 1960s, Fumitada Itakura on one side, and Bishnu S. Atal and Manfred Schroeder on the other, independently discovered the elements of LPC, a parallel invention as in the case of the telegraph and telephone. Later, Paul Lansky applied it to make delightful music exploring the spectrum between music and speech.
Fumitada Itakura was interested in math and radio from an early age, and he had been an amateur radio operator in his youth. His elementary school happened to be just a mile from the radio laboratory at Nagoya University where his father knew some of the professors, so he had occasion to visit it and ask questions.
As an undergraduate he became interested in the theoretical side of math and started to learn about stochastic processes. As he extended his ability ever further, he eventually became involved in the mathematical aspects of signal processing. His research paper for his bachelor’s degree in electrical communication was on the statistical analysis of whistlers, very low frequency electromagnetic waves produced by lightning and capable of being heard as audio on radio receivers. To study them he built a bank of analog filters to do the signal processing, and made digital circuits to try to find patterns in the time-frequency structure of the whistlers. It wasn’t easy work, but he persevered. In analyzing the whistler signal he had to filter out a lot of the other noisy material that comes in from the magneto-ionosphere. The work required him to use band-pass filters and the sound spectrograph that had originally been designed for speech analysis.
This eventually led to further work with statistics and audio. When he went to graduate school he studied applied mathematics under Professor Kanehisa Udagawa. At Udagawa’s lab he became part of a group studying pattern recognition, and in 1963 he started a project to recognize handwritten characters. When Professor Udagawa died of a heart attack, Itakura had to find someone else to study under to continue his course. This led him to work at NTT.
Dr. Shuzo Saito, a graduate of Nagoya University, was looking for someone to work with in speech research, and Saito’s friend Professor Teruo Fukumura suggested Itakura. Saito had an interest in speech recognition and encouraged Itakura to get involved. Fukumura began teaching him the basic principles of speech using Gunnar Fant's Acoustic Theory of Speech Production. Itakura started making sound spectrograms of his own voice speaking vowels. His voice was high and husky, so it didn’t make as clean a spectrogram as a more regular voice would have. In this there was a hidden gift: he realized that if they could do good analysis on a signal with more random characteristics, they could do even better when analyzing regular speech. From this point he applied statistics to speech classification, based on a paper he had read by J. Hajek. Reading math papers had been a hobby of his, and it led to his work on Linear Predictive Coding.
Dr. Saito suggested to Itakura that he look for practical results based on his theory, so he started working with a vocoder, got some initial results on his idea, and wanted to go further. Dr. Saito suggested he look at pitch detection, as vocoders often had trouble with voices because of their poor ability in this area. He conceived of a new method of pitch detection that used an inverse filter and oscillation. From this he proposed integrating linear predictive analysis with his new pitch detection method to create a new vocoder system. In late 1967 he succeeded in synthesizing speech from the vocoder and brought the results to Dr. Saito. From then on Itakura worked on vocoding.
Of the many modes in which speech is produced, the way vowels sound is especially important, as it relies on the periodic opening and closing of the vocal cords. Air from the lungs is converted by the cords into a wideband signal filled with harmonics. This signal resonates in the vocal cavities before leaving the mouth, where the final sounds are shaped.
To encode speech, the signal is analyzed: the contribution of the formants is estimated and removed in a process called inverse filtering. The remaining sound, called the buzz, is also estimated, and the signal left after the buzz is subtracted is called the residue. Numbers representing the formants, the buzz, and the residue can then be stored or transmitted elsewhere. The speech is synthesized by reversing the original stripping process: the parameters of the buzz and residue are used to create an excitation signal, and the information stripped from the formants is used to recreate the filter. The whole process is done in short chunks of time, called frames.
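The decoder side can be sketched the same way (again only an illustration, reusing numpy and the coefficients from `lpc_analyze` above): the stored parameters define an all-pole filter, and driving it with an excitation signal, the buzz or the residue, rebuilds the speech frame by frame.

```python
def lpc_synthesize(excitation, a):
    """Rebuild speech by running an excitation signal (buzz or residue)
    through the all-pole filter described by the LPC coefficients."""
    p = len(a)
    out = np.zeros(len(excitation))
    for n in range(len(excitation)):
        past = out[max(0, n - p):n][::-1]      # most recent output samples
        out[n] = excitation[n] + a[:len(past)] @ past
    return out
```

Feeding the residue back in reconstructs the frame almost exactly; substituting a synthetic buzz at the measured pitch gives intelligible, if robotic, speech at a fraction of the bit rate.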
Taking speech apart and putting it back together on the other end was a huge technical feat, and it saved a great deal of bandwidth: vocoded speech could fit five calls onto the same channel that a single regular voice call took up.
Manfred Schroeder and Bishnu S. Atal
Manfred Schroeder had come to Bell Labs from Germany. Schroeder was born in 1926 and came of age during WWII. During the war Schroeder had built a secret radio transmitter that spooked his parents. Transmitting radio was risky business because it was the province of spies and people who wanted to communicate outside the country. When Schroeder saw members of the army or SS outside his house with radio direction finding equipment, he shut off the transmitter for a month. He also listened to the BBC for news, and to the American Forces Network transmitting from England, then illegal to listen to. Many people had been sent to concentration camps just for listening to foreign stations and spreading the news to others. The Nazi powers attempted to keep tight control on all information going in and out of the country. A special radio was even manufactured by the state, the People's Radio or Volksempfänger, built in such a way that it could only receive approved German stations whose programs were under the directorship of Joseph Goebbels.
He excelled at school, often ahead of even the teachers. During the war he was drafted onto a radar team to track incoming aircraft and do other work, and he gained extensive experience with the technology.
Schroeder was also a math fanatic, like Itakura, and when he went to university he always took extra math classes alongside his physics work. He had been fascinated by the mathematics of cryptography and loaded up on function theory and probability classes. Eventually, in 1954, Schroeder got a job offer from Bell Labs based on previous work he had done experimenting with microwaves, and he emigrated to the United States.
Bell Labs wanted him to continue his research with microwaves, but he thought he’d switch gears and get into the study of speech instead. For two years he worked on speech synthesizers, and didn’t have much luck in getting them to sound good, so he then turned his attention to speakers and room acoustics. At the Labs, researchers following the dictates of their own curiosity and inclination were often left alone to pursue their studies and see where they led.
John Pierce at Bell Labs wanted Schroeder to use Dudley’s vocoding principles to send high fidelity voice calls over the phone system. This caused Schroeder to hit up against the same issue as Itakura: the problem of pitch. Part of the issue was extracting the fundamental frequencies from telephone lines not known for superb sound quality. As Schroeder investigated he realized he could take the baseband signal, the frequencies that have not been modulated, and distort it non-linearly to generate frequencies to which the vocoder could then give the right amplitudes. This ended up being a success. The technique became voice-excited vocoding, and the speech that came out of the other end was the most human sounding of any speech synthesis up to that point.
In 1961 Schroeder hired Dr. Bishnu S. Atal to work with him at Bell Labs. Atal was born in 1933 in Kanpur, Uttar Pradesh, India. He studied physics at the University of Lucknow and received his degree in electrical communications engineering from the Institute of Science in Bangalore, India in 1955, before coming to America to study for his Ph.D. at the Brooklyn Polytechnic Institute. He returned to his home country to lecture on acoustics from 1957 to 1960 before he was lured back to the U.S. by Schroeder to join him in his investigations of speech and acoustics.
In 1967 Schroeder was pacing around the Lab with Atal, conversing about the need to do more with vocoder speech quality. Schroeder’s work on pitch had improved the quality of vocoding, but it wasn’t yet what it could be. What they needed to do, they realized as they talked, was to code speech so no errors were present. As they talked, the idea of predictive coding came up.
They realized that as speech was encoded they could predict the next samples based on what had just come before. The prediction would be compared with the actual speech, and alongside it the errors, or residuals, would be transmitted. In decoding, the same algorithm was used to reconstruct the speech on the other end of the transmission. Schroeder and Atal called this adaptive predictive coding, a name later changed to linear predictive coding. The quality of the speech was as good as that which came out of the voice-excited vocoder. They wrote a paper on the subject for the Bell System Technical Journal and presented it at a conference in 1967, the same year Itakura succeeded with his technique.
Since the 1970s most of the technology around speech synthesis and coding has been focused on LPC, and it is now the most widely used form. When it first came out the NSA was among the first to get its paws on it, because LPC can be used for secure wireless communication, with a digitized and encrypted voice sent over a narrow channel. An early example of this is the Navajo I, a telephone built into a briefcase to be used by government agents. About 110 of these were produced in the early 1980s. Several other vocoder systems were used by the NSA for the purpose of encryption.
LPC has become essential for cellphones, and is part of the Global System for Mobile Communications (GSM) standard for cellular networks. GSM uses a variety of voice codecs that implement the technology to put 3.1 kHz of audio into between 6.5 and 13 kbit/s of transmission. LPC is also used in Voice over IP, or VoIP, such as is used in Skype and Zoom calls and meetings.
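For a sense of scale: assuming the 8 kHz, 13-bit sampling used at a GSM codec’s input, raw telephone speech comes to roughly 104 kbit/s, so the 13 kbit/s rate is about an eightfold reduction and the 6.5 kbit/s rate roughly a sixteenfold one.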
A 10th-order LPC implementation was used in the popular 1980s Speak & Spell educational toy. These became popular to hack among experimental musicians in a process known as circuit bending, where the toy is taken apart and the connections re-soldered to make sounds not originally intended by the manufacturer. [For more on Ghazala and circuit bending, see chapter 7.]
Vocoding technology is also utilized in the Digital Mobile Radio (DMR) units that are currently gaining popularity among hams around the world. DMR is an open digital mobile radio standard. DMR radios use a proprietary AMBE+2 vocoder that works with multi-band excitation for its speech coding and compression, achieving a 6.25 kHz-equivalent channel bandwidth. Again the compression and the digital codecs often cause sound artifacts and glitching while talking. Besides its use in DMR, the AMBE+2 is also used in D-Star, Iridium satellite telephone systems, and OpenSky trunked radio systems.
Paul Lansky: notjustmoreidlechatter
Since LPC separates pitch from speed, allowing the pitch contours of speech to be altered independently of its rate, it can also be used by the creative thinker for musical composition. Paul Lansky was one such thinker, and he used LPC to great effect in a series of compositions exploring synthesis and the qualities of speech.
Paul Lansky was born in 1944 in New York and counted George Perle and Milton Babbitt among his teachers. Lansky got his Ph.D. in music from Princeton in 1973. Like many others of his generation, Lansky started off schooled in serialism. His teacher Perle had developed an iconoclastic twelve-tone modal system, and Lansky used it to write a piece. For his dissertation he continued to explore Perle’s methodology and used linear algebra to create a model of his teacher’s system. His interest then extended to electronics and computers as a way of exploring the mathematical possibilities inherent within serialism.
His first foray into electronic composition was Mild und Leise, from 1973. Proper old school, it was composed using a series of punch cards. Learning the mechanics of the system to achieve his desired outcome was as much a part of the procedure as the composition. For it he used the Music360 computer language written by Barry Vercoe on an IBM 360/91. The output from the computer went to a 1600 BPI digital tape that had to be carried over to a basement lab in the engineering quadrangle at Princeton to listen to. It used FM synthesis, which had just been worked out at Stanford [for FM synthesis see Chapter 4], while the harmonic language came from Perle’s system. The result is very emotionally resonant pure electronic music. Lansky has always been keen to foreground the music over the technology used to make it, and that is true here. The piece was later sampled by Radiohead in their song Idioteque on their Kid A album.
1979 saw Lansky beginning to work with LPC as a part of his computer music programming practice, and it was put to use in a series of compositions starting with Six Fantasies on a Poem by Thomas Campion. The LPC-based techniques he drew on had been pioneered by James Moorer at Stanford University in the 1970s. His wife Hannah McKay reads the poem, and LPC techniques along with a variety of processing and filtering methods are used to alter and transform the reading in fabulous ways.
In his notes to the recording of Six Fantasies, he writes about how it has become common to view speech and song as distinct categories. Lansky thought that “they are more usefully thought of as occupying opposite ends of a spectrum, encompassing a wealth of musical potential. This fact has certainly not been lost on musicians: sprechstimme, melodrama, recitative, rap, blues, etc., are all evidence that it is a lively domain.”
Thomas Campion, as composer and poet, became an archetype emblematic of the “musical spectrum spanned by speech and song.” The poem Lansky used was Campion’s Rose cheekt Lawra, which was embedded within his 1602 treatise Observations in the Art of English Poesie. Here Campion offered his attempt at a quantitative model for English poetry, where meter is determined by the quantity of vowels rather than by rhyme, as was done in ancient Latin and Greek poetry. Lansky describes the poem as a “wonderful, free-wheeling spin about the vowel box. It is almost as if he is playing vowels the way one would play a musical instrument, jumping here and there, dancing around with dazzling invention and brilliance, carefully balancing repetition and variation. The poem itself is about Petrarch's beloved Laura, whose beauty expresses an implicit and heavenly music, in contrast to the imperfect, all too explicit earthly music we must resign ourselves to make. This seemed to be an appropriate metaphor for the piece.”
Lansky continued to explore the continuum between speech and song with his pieces Idle Chatter, just_more_idle_chatter, and Notjustmoreidlechatter. Though clearly connected by theme, they are not a suite, but independent works. Idle Chatter, from 1985, again uses his wife as vocalist and the IBM 3081 as the means of transforming her voice, this time through a mix of LPC, stochastic mixing, and granular synthesis, with a bit of help from the computer music language Cmix. If you like glossolalia, or if you ever wanted to hear what it sounded like at the Tower of Babel, these recordings are an opportunity.
Of Idle Chatter, Lansky wrote, “The incoherent babble of Idle Chatter is really a pretext to create a complicated piece in which you think you can `parse the data’, but are constantly surprised and confused. The texture is designed to make it seem as if the words, rhythms and harmonies are understandable, but what results, I think, is a musical surface with a lot of places around which your ear can dance while you vainly try to figure out what is going on. In the end I hope a good time is had by all (and that your ears learn to enjoy dancing).”
People had a strong reaction to the piece, and in response Lansky wrote just_more_idle_chatter in 1987. He gave the digital background singers more of a role in the piece, but the words still only approach intelligibility, never reaching a stage where the listener can comprehend what is being said, only that something is being spoken. The next year saw his “stubborn refusal to let a good idea alone” with the realization of Notjustmoreidlechatter. Here again the chatter almost becomes something that can be discerned as a word before slipping back down into the primordial soup of linguistic babble. The last two of these pieces were made using the DEC MicroVAX II computer.
Over time, though Lansky wrote many more computer music pieces, and settings for traditional instrumentation, he couldn’t just let the words be. For the pieces on his Alphabet Book album he conducted further investigations in a magisterial reflection on the building blocks of thought: the alphanumerics, the letters and numbers, that allow for communication, the building up of knowledge, and contemplation.
.:. .:. .:.
Fumitada Itakura, an oral history conducted in 1997 by Frederik Nebeker, IEEE History Center, Piscataway, NJ, USA.
Manfred Schroeder, an oral history conducted in 1994 by Frederik Nebeker, IEEE History Center, Piscataway, NJ, USA.
Charles Dodge was another early computer musician who got in on the speech synthesis game. Born in Iowa in 1942, he was in his early twenties when he first became interested in the possibilities of computer music. As a graduate student at Columbia University he studied composition under Richard Hervig, Chou Wen-chung, and the electronic musician Otto Luening. When he met Godfrey Winham of Princeton University, he began to think seriously about composing his own works with computers. Winham was an influential music theorist whose wife, the singer Bethany Beardslee, was the voice for much new music, including Milton Babbitt’s Philomel.
In the sixties Bell Labs was one of the very few places computer music was being made, and one of the few places to go to hear how it sounded. Max Mathews encouraged musicians who were making music on university computers to come to Bell Labs to convert it into sound, in the evening after the primary work at the Labs was finished. Charles Dodge was one of these composers, and when he came to listen to his work he became mesmerized by the fascinating sounds of the speech research going on down the hall, often thinking it more interesting than the sounds he’d created with the computer.
In the early 70s he had the opportunity to create some new works at Bell Labs with access to programs written by Dr. Joseph Olive for speech synthesis. Olive was a leading researcher in the area of text-to-speech, one of those people with an intense mathematical mind. He had received a physics Ph.D. from the University of Chicago, but he was also interested in music.
With help from Olive and some poems written and given to him by his friend Mark Strand, Dodge went about creating Speech Songs. He writes, “I'd never been able to write very effective vocal music and here was an opportunity to make music with words. I was really attracted to that. It wasn't singing in the usual sense. It was making music out of the nature of speech itself. With the early speech-synthesis computers, you could do two things: you could make the voice go faster or slower than the speed in which it was recorded at the same pitch or you could shift the pitch independent of the speech rhythm. That was a kind of transformation that you couldn't make in the usual way of making tape music. It was fascinating to put my hands on two ways of modifying sound that were completely, newly available.”
To synthesize the electronic voices for the poems he used a technique called synthesis-by-analysis. Only words that had been put into the computer beforehand through an analog-to-digital converter could be synthesized. The recorded speech is analyzed by the computer to pull out the various parameters of the spoken word in short segments. Speech can then be recreated by the artificial voice using the same parameters as had been analyzed. For musical purposes, though, those parameters can be altered to change aspects of the sound, such as shifting the pitch contour of a phrase or word into a melodic line. Changing the speed without altering the pitch is another possibility. Formants and resonance are other aspects that can be changed by the programmer-composer.
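The two transformations Dodge mentions are easy to picture as operations on stored analysis frames. The sketch below is purely illustrative, with a made-up frame format rather than Olive's actual programs: because pitch and timing are separate numbers, each can be altered without disturbing the other.

```python
def shift_pitch(frames, semitones):
    """Transpose the pitch contour; timing is untouched."""
    factor = 2 ** (semitones / 12)
    return [{**f, "pitch": f["pitch"] * factor} for f in frames]

def stretch_time(frames, factor):
    """Repeat or drop analysis frames to change speed; pitch is untouched."""
    return [frames[int(i / factor)] for i in range(int(len(frames) * factor))]

# Each hypothetical frame holds a pitch in Hz (plus, in reality, spectral data).
frames = [{"pitch": 110.0 + 5 * i} for i in range(40)]
up_a_fifth = shift_pitch(frames, 7)      # melody shifted, rhythm intact
half_speed = stretch_time(frames, 2.0)   # twice as slow, pitch intact
```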
The poems themselves are humorous and surrealistic, and the way the artificial voice reads them adds to the effect. Dodge was specifically interested in humor because, as he wrote in the liner notes, “Laughter at new music concerts, especially in New York, is rare these days.” He was delighted when audience members laughed at his creation. For a type of music that is so often cerebral and conceptual, it’s good when some belly laughs can be had.
Another piece on the album, The Story of Our Lives, also used techniques of speech synthesis. In this case, instead of replacing the recorded human with an artificial voice, they changed the program so that it drew from a bank of 64 sine tones that glissandoed at different rates. To create the effect of more than one voice being heard at a time, the different voices were mixed together on the digital computer.
Speech Songs came out in 1972, and in 1978 he made a recording of the radio play Cascando by Samuel Beckett, where the musical aspect was two computer-synthesized audio channels. This was also when he founded the center for computer music at CUNY’s Brooklyn College and began teaching for their graduate program. His 1970 composition Earth’s Magnetic Field will be explored in chapter 8 of this book.
.:. .:. .:.
Sferics is one of Lucier’s most elegant and simple works. It is just a recording. Other versions of Sferics could be produced, and many science and radio hobbyists make similar recordings without ever having heard of Alvin Lucier. The phenomenon at the heart of Sferics existed long before it could be detected and recorded. Listening to this form of natural radio requires going down to the Very Low Frequency (VLF) portion of the radio spectrum.
The title of Lucier’s work refers to broadband electromagnetic impulses that occur as a result of natural atmospheric lightning discharges and can be picked up as natural radio-frequency emissions. Listening to these atmospherics dates all the way back to Thomas Watson, assistant of Alexander Graham Bell, as mentioned at the beginning of this book. He picked them up on long telegraph lines which acted as VLF antennas. Since his time telegraph operators, radio hobbyists, and technicians have heard these sounds coming in over their equipment. For some, chasing after sferics has become a hobby in itself.
The VLF band ranges from about 3 kHz to 30 kHz, and the wavelengths at these frequencies are huge. Most commercial ham radio transceivers only go as low as 160 meters, which translates to between 1.8 and 2 MHz in frequency. A VLF wave at 3 kHz, by comparison, has a wavelength of 100 kilometers. The VLF range includes a portion of the spectrum whose frequencies fall within the range of human hearing, 20 Hz to 20 kHz. Yet since sferics are electromagnetic waves rather than sound waves, a person needs radio ears to listen to them: an antenna and receiver.
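The arithmetic behind that figure: wavelength equals the speed of light divided by frequency, λ = c / f, so at 3 kHz, λ = (3 × 10⁸ m/s) / (3 × 10³ Hz) = 10⁵ m, or 100 kilometers. Even at the top of the band, 30 kHz, the waves are still 10 kilometers long.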
On average lightning bolts strike about forty-four times a second, adding up to around 1.4 billion flashes a year. It’s a good thing the weather acts as a variable distribution system of these strikes, though some places get hit more than others. The discharge of all this electricity means there are a lot of electromagnetic emissions from these strikes going straight into the VLF band where they can be listened to with the right equipment. Because these wavelengths are so long, you could be in California listening to a thunderstorm in Italy or India, or in Maine listening to sferics caused by storms in Australia.
The sound of sferics is kind of soothing, reminding me of the crackle of an old vinyl record unearthed from a dusty vault in a thrift store’s basement. There are lots of pops and lots of hiss. As these are natural sounds picked up with the new extensions to our nervous system made available by telecommunications, listening to sferics has the same kind of soothing effect as listening to a field recording of an ocean, or a stream meandering through lonely woods. But for a long time, listeners, hobbyists, and scientists didn’t really know what caused these emissions. It was during the scientific research activities surrounding the International Geophysical Year (IGY) of 1957-58 that their presence and source were verified.
The yearlong IGY was an international scientific project that managed to receive backing from sixty-seven countries in the East and West despite the ongoing tensions of the Cold War. Its focus was on earth science: scientists looked into phenomena surrounding the aurora borealis, geomagnetism, ionospheric physics, meteorology, oceanography, seismology, and solar activity. It was an auspicious time for such study, as the IGY coincided with the peak of solar cycle 19. When a solar cycle is at its peak, the ionosphere is highly charged by the sun, making radio communications easier and producing more occurrences of aurora, among other natural wonders.
One of the researchers was a man by the name of Millett G. Morgan, and his recordings would go on to have a direct influence on Alvin Lucier. Morgan was an astrophysicist who had established one of the first programs to use the fresh discoveries occurring in the VLF band as a way to investigate the properties of the space plasma around the Earth, in the region now known as the upper ionosphere and magnetosphere. His inquiries allowed for deep gains of knowledge in a new area of study before spacecraft began making direct observations of the region.
Morgan was also a ham radio operator with the call sign W1HDA. He had been interested in radio since he was a teenager, and throughout his career found ways to use his inclination and knack to research propagation. Throughout the 1940s and early 50s Morgan and his colleagues conducted radar experiments near his home in Hanover, New Hampshire. The purpose of these studies was to observe two modes of propagation that magnetoionic theories had predicted would occur when radio waves entered the atmosphere.
During the IGY he chaired the US National Committee's Panel on Ionospheric Research of the National Research Council. In this capacity he oversaw the radio studies being conducted all around the earth. As part of that work he joined the re-supply mission to the US Antarctic station on the Weddell Sea in early 1958 as the senior scientific representative. For his own specific research he maintained a series of far-flung stations spread across the Americas. It was from these that he made a number of recordings of natural radio signals.
Lucier later heard these at Brandeis. The composer writes, “My interest in sferics goes back to 1967, when I discovered in the Brandeis University Library a disc recording of ionospheric sounds by astrophysicist Millett Morgan of Dartmouth College. I experimented with this material, processing it in various ways -- filtering, narrow band amplifying and phase-shifting -- but I was unhappy with the idea of altering natural sounds and uneasy about using someone else's material for my own purposes.”
Morgan’s recordings were made at a network of receiving stations, and he interpreted the audio data he collected to obtain some of the earliest measurements of free electron density thousands of kilometers above the Earth. A colorful vocabulary was built up to describe the sounds heard in the VLF portion of the spectrum. Sferics that traveled over 2,000 kilometers often shifted their tone and came to be called tweeks; the frequencies would become offset as the signal traveled, cutting off some of the sound and making it sound higher in the treble range. Whistlers were another phenomenon heard on the air. They occur when the impulse from a lightning strike propagates out of the ionosphere and into the magnetosphere, along geomagnetic lines of force. The sound of a whistler is a descending tone, like a whistle fading into the background, hence its name. It is similar to the tweek, but elongated, its path stretching far away from the surface along the Earth’s magnetic field lines before returning.
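The falling tone has a tidy first-order description, worth noting as an aside: in the dispersive plasma along the field line, the higher frequencies of the lightning impulse travel faster and arrive first, and by Eckersley's approximation a whistler's components arrive at time t ≈ D/√f, where f is the frequency and D is a dispersion constant that grows with the length of the path.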
Dawn chorus is another atmospheric effect some lucky eavesdroppers in the VLF range may be able to pick up from time to time. It is an electromagnetic effect that may be picked up locally at dawn. It is thought to be generated by energetic electrons injected into the inner magnetosphere, something that occurs more frequently during magnetic storms. These electrons interact with the normal ambient background noise heard in the VLF band to create a sound that is actually similar to that of birdsong in the morning. The sound is most likely to be heard when aurorae are active, when it is dubbed the auroral chorus. Morgan’s experimental work in recording these phenomena created a foundation for studying such things as how the Earth and its magnetic field interact with the solar wind.
Listening to Morgan’s recordings wasn’t enough for Lucier. “I wanted to have the experience of listening to these sounds in real time and collecting them for myself. When Pauline Oliveros invited me to visit the music department at the University of California at San Diego a year later, I proposed a whistler recording project. Despite two weeks of extending antenna wire across most of the La Jolla landscape and wrestling with homemade battery-operated radio receivers, Pauline and I had nothing to show for our efforts. . . .” The idea was shelved for over a decade.
In 1981 Lucier tried again. He got hold of some better equipment and went out to a location at Church Park, Colorado, on August 27th, 1981. For the Colorado recording he collected material continuously from midnight to dawn with a pair of homemade antennas and a stereo cassette tape recorder, repositioning the antennas at regular intervals to explore the directivity of the propagated signals and to shift the stereo field.
Morgan continued his own radio investigations into the early 80s. He built a network of radar observing stations to study gravity waves that propagate to the lower latitudes of the Earth from the arctic region. These gravity waves appear as propagating undulations in the lower layers of the ionosphere.
Lucier wasn’t the only musician to be interested in this phenomenon. Electronic music producer Jack Dangers explored these sounds under his moniker Meat Beat Manifesto on a song called The Tweek from the album Actual Sounds & Voices. Pink Floyd used dawn chorus on the opening track of their 1994 album The Division Bell. VLF enthusiast Stephen P. McGreevy has been tracking these sounds for some time, and has collected many recordings, releasing them on CD and on the internet via archive.org. At the time of this writing he has made eight albums of such recordings.
On the communications side of things, the VLF band’s interesting properties have been exploited for submarine communication. VLF waves can penetrate sea water to some degree, whereas most other radio waves are reflected off the water. This has allowed for low-bitrate communications across the VLF band by the world’s militaries. Some hams have also taken up experimenting with communication across VLF, learning more about its unique propagation in doing so.
Just as the Hub was getting off the ground and into circulation as a performing ensemble, one of its members, Scott Gresham-Lancaster, was working with Pauline Oliveros on a new project she had initiated to create the ultimate delay system: bouncing her music off the surface of the moon and back to earth with the help of an amateur radio operator.
Since Pauline had first started working with tape she had always been interested in delay systems. Later she started exploring the natural delays and reverberations found in places such as caves, silos, and the fourteen-foot cistern at the abandoned Fort Worden in Washington state. The resonant space at Fort Worden in particular was important in the evolution of Pauline’s sound. It was there she descended the ladder with fellow musicians Panaiotis, a vocalist, and trombonist Stuart Dempster to record what would become her Deep Listening album. In the cistern, its roof supported by reinforced concrete pillars, the reverberation time was 45 seconds, creating a natural acoustic effect of great warmth and beauty. The space continued to be used by musicians, including Stuart Dempster, and they dubbed the place the Cistern Chapel. Pauline had another deep listening experience in a cistern in Cologne when visiting Germany. Between these experiences, the creation of the album, and the workshops she was starting to teach, she came up with a whole suite of practices and teachings that came to be called Deep Listening. The term itself had started as a pun when they climbed back up the ladder out of the cistern.
Pauline describes Deep Listening as, “an aesthetic based upon principles of improvisation, electronic music, ritual, teaching and meditation. This aesthetic is designed to inspire both trained and untrained performers to practice the art of listening and responding to environmental conditions in solo and ensemble situations.” Since her passing Deep Listening continues to be taught at the Rensselaer Polytechnic Institute under the directorship of Stephanie Loveless.
The idea of bouncing a signal off the moon, which amateur radio operators had learned to do as a highly specialized communications technique, was another way of exploring echoes and delays, in combination with technology in a poetic manner. Pauline first had the idea for the piece when watching the lunar landing in 1969.
“I thought that it would be interesting and poetic for people to experience an installation where they could send the sound of their voices to the moon and hear the echo come back to earth. They would be vocal astronauts. My first experience of Echoes From the Moon was in New Lebanon, Maine with Ham Radio Operator Dave Olean. He was one of the first HROs to participate in the Moon Bounce project in the 1970s. He sent Morse Code to the moon and got it back. This project allowed operators to increase the range of their broadcast. I traveled to Maine to work with Dave. He had an array of twenty four Yagi antennae which could be aimed at the moon. The moon is in constant motion and has to be tracked by the moving antenna. The antenna has to be large enough to receive the returning signal from the moon. Conditions are constantly changing - sometimes the signal is lost as the moon moves out of range and has to be found again. Sometimes the signal going to the moon gets lost in galactic noise. I sent my first ‘hello’ to the moon from Dave's studio in 1987. I stepped on a foot switch to change the antenna from sending to receiving mode and in 2 and 1/2 seconds heard the return ‘hello’ from the moon.”
Though the moon is far more distant than the walls of the Worden cistern, the delay between a radio signal going there and coming back is much shorter than the cistern’s long reverberation. In a vacuum radio waves travel at the speed of light. Earth-Moon-Earth communication, or EME as it is known in ham radio circles, was first proposed in 1940 by W. J. Bray, a communications engineer who worked for Britain’s General Post Office. At the time, it was thought that using the moon as a passive communications satellite could be accomplished with radios in the microwave range of the spectrum.
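The timing is easy to check with a back-of-the-envelope sketch (using the average Earth-Moon distance; the true figure varies over the orbit):

```python
C = 299_792_458          # speed of light in vacuum, m/s
MOON_DISTANCE = 3.844e8  # average Earth-Moon distance, m

round_trip = 2 * MOON_DISTANCE / C
print(f"round-trip delay: {round_trip:.2f} s")  # ~2.56 s
```

which matches the two-and-a-half-second echo Oliveros describes.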
During the forties the Germans were experimenting with different equipment and techniques and realized radar signals could be bounced off the moon. They developed a system known as the Wurzmann and carried out successful moon bounce experiments in 1943. Working in parallel were the American military and a group of researchers led by Hungarian physicist Zoltan Bay. At Fort Monmouth in New Jersey in January of 1946, John H. DeWitt, working with Project Diana, carried out the second successful transmission of radar signals bounced off the moon. Project Diana also marked the birth of radar astronomy, a technique later used to map the surfaces of Venus and other nearby celestial objects. A month later Zoltan Bay’s team also achieved a successful moon bounce.
These successful efforts led the United States Navy to establish the Communication Moon Relay Project, also known as Operation Moon Bounce. At the time there were no artificial communication satellites. The Navy was able to use the moon as a link for the practical purpose of sending radio teletype between the base at Pearl Harbor in Hawaii and the headquarters in Washington, D.C. This offered a vast improvement over HF communications, which depended on the cooperation of the ionospheric conditions affecting propagation.
Once artificial communication satellites started being launched into orbit, using the moon to communicate between distant points was no longer necessary, and dedicated military satellites had an extra layer of security on their channels. Yet for amateur radio operators the allure of the moon was just beginning, and hams started using it in the 1960s to talk to each other. It became one of Bob Heil’s favorite activities.
In the early days of EME hams used slow-speed CW (Morse Code) and large arrays of antennas with their transmitters amplified to powers of 1 kilowatt or more. Moonbounce is typically done in the VHF, UHF and GHz ranges of the radio spectrum. These have proven to be more practical and efficient than the shortwave portions of the spectrum. New modulation methods also have given hams a continuing advantage on using EME to make contacts with each other. It is now possible using digital modes to bounce a signal off the moon with a set up that is much less expensive than the large dishes and amounts of power required when this aspect of the hobby was just getting started.
“For instance, an 80W 70 cm (432 MHz) setup using about a 12-15 dBi Yagi works well for EME Moonbounce communication using digital modes like the JT65,” writes Basu Bhattacharya, VU2NSB, a ham and moonbouncer located in New Delhi, India.
On the way to the moon and back, the radio path totals some 500,000 miles, and the signals are affected by a number of different factors. The Doppler shift caused by the motion of the moon in relation to us surface dwellers is an important factor in making EME contacts. It is also something that affected the sound of Pauline’s music when it bounced off the lunar surface.
“The sound shifted slightly downward in pitch… like the whistle of a train as it rushes past,” said Pauline of her performance.
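That downward slide is ordinary Doppler arithmetic. A rough sketch, with a made-up but plausible radial velocity (the actual value depends on the geometry of the moment):

```python
C = 299_792_458  # speed of light, m/s

def eme_doppler(freq_hz, radial_velocity_ms):
    """Two-way Doppler shift for a signal bounced off a receding moon."""
    return -2 * radial_velocity_ms / C * freq_hz

# A 432 MHz signal with the moon receding at 300 m/s:
print(f"{eme_doppler(432e6, 300):.0f} Hz")  # about -865 Hz
```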
“I played a duo with the moon using a tin whistle, accordion and conch shell. I am indebted to Scott Gresham-Lancaster who located Dave Olean for me in 1986 and helped to determine the technology necessary to perform Echoes From the Moon. Ten years later Scott located all the Ham Radio Operators for the performance in Hayward, California which took place during the lunar eclipse September 23, 1996. Following is the description of that performance: The lunar eclipse from the Hayward Amphitheater was gorgeous. The night was clear and she rose above the trees an orange mistiness. As she climbed the sky the bright sliver emerged slowly from the black shadow - crystal clear. The moon was performing well for all to see. Now we were ready to sound the moon.
“The set up for Echoes From the Moon involved Mark Gummer - a Ham Radio Operator in Syracuse New York. Mark was standing by with a 48 foot dish in his back yard. I sent sounds from my microphone via telephone line in Hayward California to Mark and he keyed them to the moon with his Ham Radio rig and dish and then he returned the echo from the moon. The return came in 2 & 1/2 seconds. Scott Gresham-Lancaster was the engineer and organized all. When the echo of each sound I made returned to the audience in the Hayward University Amphitheater they cheered. Later in the evening Scott set up the installation so that people could queue up to talk to the moon using a telephone. There was a long line of people of all ages from the audience who participated. People seemed to get a big kick out of hearing their voices return - processed by the moon. There is a slight Doppler shift on the echo because of the motion of both earth and moon. This performance marked the premiere of the installation - Echoes From the Moon as I originally intended. The set up for the installation involved Don Roberts - Ham Radio Operator near Seattle and Mike Cousins at Stanford Research Institute in Palo Alto California. The dish at SRI is 150 feet in diameter and was used to receive the echoes after Don keyed them to the moon. With these set ups it was only possible to send short phrases of 3-4 seconds. The goal for the next installations would be to have continuous feeds for sending and receiving so that it would be possible to play with the moon as a delay line.”
It's a setup that could work for other musicians who want to realize Oliveros’s lunar delay system again, or it could be modified to create new works. The thrill of hearing a sound or signal come back from the moon remains, and if creative individuals get together to explore what can be done with music and technology, new vistas of exploration will open up.
.:. .:. .:.
Over the course of the 20th century a music concerned with various aspects of space and spatialization began to take shape. It was a music with its roots in both the aether and the living room, the latter because of the influence of Erik Satie. Satie was to have many influences on musical developments after him. One stream was the noisy yet minimalist vein that came from the influence of his piece Vexations. The other was as the spiritual godfather of ambient, descending from his conception of Furniture Music. This latter is what concerns us here.
In French the term is musique d’ameublement, a phrase Satie coined in 1917, generally taken to mean background music. Its literal translation is furnishing music, though in English it has been standard to call it furniture music. It was a breakthrough idea in western music: the music itself was to be a part of the room, a sonic background to furnish the space, not something that needed to be directly focused on. Many of Satie’s pieces can be experienced as furniture music, but he only gave the name to five short pieces. The names are often indicative of how the music relates to a specific space.
Satie had a notion of music that could "mingle with the sound of the knives and forks at dinner." His first set of furniture pieces gave that notion a form.
The first set of furniture music he wrote has names like “Tapisserie en fer forgé – pour l'arrivée des invités (grande réception) – À jouer dans un vestibule – Mouvement: Très riche (Tapestry in forged iron – for the arrival of the guests (grand reception) – to be played in a vestibule – Movement: Very rich)” and “Carrelage phonique – Peut se jouer à un lunch ou à un contrat de mariage – Mouvement: Ordinaire (Phonic tiling – Can be played during a lunch or civil marriage – Movement: Ordinary)”
The second set was composed as intermission music for a comedy by Max Jacob that has since been lost. As intermission music, the idea of background ambience to fill the space is again asserted. Not much else was done with furniture music, and it remained largely unknown to the public except for mentions in a few biographies of the composer. In the 1960s facsimiles of some of the scores appeared in the new biographies of Satie then coming out, with publication of the scores following in the 70s.
In America Satie’s ideas and music found a champion in John Cage. Cage was stimulated by the idea of furniture music, and it inspired his own experiments and theories for a minimalist background music. Furniture music became a nucleus around which the minimalist and avant-garde composers rallied, with its emphasis on being played not as the centerpiece, but as something to create a space which people lived and moved inside of. Atmosphere, timbre, texture, long durations, repetition, and drone were all part of the milieu.
These tendencies towards texture and drone were picked up by Brian Eno, who built upon the idea of furniture music on his album Discreet Music (discussed in terms of its relation to cybernetics and information theory in Chapter 3). Eno thought of Discreet Music as just what one definition of the word discreet means: unobtrusive and unnoticeable.
“The ambient records are similar to paintings,” Eno says. “You don't gaze at a painting for hours each day. But you're aware of its presence, and occasionally you choose to go into it deeply - at a time when you're receptive and want it to affect your mood.”
The minimalist and ambient aspects of furniture music built on by Cage and Eno became major strands of what was to become Space Music. Another major strand came again from that great force of nature, Karlheinz Stockhausen, and the German electronic musicians who followed his lead starting in the 1960s and 70s.
The Spatialization of Space
At the WDR Stockhausen became a colleague of Robert Beyer in 1953 (see Chapter 5). In a 1928 paper Beyer wrote about “Raummusik” or spatial music. It wasn’t about music from the stars, or music to create an atmosphere in a specific space as Satie had done with his furniture pieces, but was focused on the possibilities of having different sound elements localized at specific points within a concert hall or listening space. With the advent of electroacoustic music the spatialization of sound also became about certain sounds being in specific loudspeakers and moving sounds from one loudspeaker to another within a system. Stockhausen took the idea of spatial music, and the term, and ran with it, with composed spatial elements running throughout many of his works.
And while this spatial element was very dear to Stockhausen, he was also interested in creating music inspired by outer space and the greater cosmos. Following a performance of Hymnen in 1967 he said, “Many listeners have projected that strange new music which they experienced—especially in the realm of electronic music—into extraterrestrial space. Even though they are not familiar with it through human experience, they identify it with the fantastic dream world. Several have commented that my electronic music sounds ‘like on a different star,’ or ‘like in outer space.’ Many have said that when hearing this music, they have sensations as if flying at an infinitely high speed, and then again, as if immobile in an immense space. Thus, extreme words are employed to describe such experience, which are not ‘objectively’ communicable in the sense of an object description, but rather which exist in the subjective fantasy and which are projected into the extraterrestrial space."
Many of Stockhausen’s pieces of music are concerned with outer space, the constellations, and stars. It was a recurrent theme throughout the compositions he wrote in the 1970s, and he spiraled back to space and the stars again and again throughout his creative life. As such a few of the relevant pieces will be explored here and others will be examined in more depth in their own sections of this chapter.
Sternklang is a piece of music that pulls together Stockhausen’s interest in combinatorial systems (Glass Bead Games), spatial music, and intuitive music, among other things.
"park music", to be performed outdoors at night by 21 singers and/or instrumentalists divided into five groups, at widely separated locations. In a park at night the sky is open to all who want to receive the light and blessing of the stars, of those things coming into being. In the score Stockhausen says simply that the music is sacred and that it is best performed on in the warmth of summer on when the moon is full.
Stockhausen says of the piece, “STERNKLANG is music for concentrated listening in meditation, for the sinking of the individual into the cosmic whole”.
The music itself bears many similarities to Stimmung, in that overtone singing is done by the vocalists based on various combinations of vowel phonetics. The instrumentalists are also required to create overtones and to use synthesizers, sometimes processing their sound through the synth to create the required overtones. The groups are spaced approximately 60 meters apart, creating spatial effects for listeners who wander around the park, stopping here and there to listen to the different ways the music sounds in separate but overlapping spaces. Loudspeakers amplify the different groups, and each group is supposed to be situated so that it can hear at least one or two other groups.
These separate groups of players perform independently of one another, but they also synchronize at ten different points during the performance. The synchronization is done through the work of the torch-bearers and sound-runners, who run from one group to another, the torch-bearer lighting the way, the sound-runner giving a musical “model” to the other groups. In the center of the park a percussionist synchronizes the musicians to a common tempo.
This complex work has an equally complex score, made up of a text illustrating the concept, a Formscheme, five pages each with six of the Models to be played in a variety of combinations, ten pages with ten Special Models, and a page of Constellations. All this material is given to the different groups of musicians who use parts of it for the structure according to the instructions. From this material many completely different performances of Sternklang could be given, due to the combinatorial aspect. Yet they would all sound consistent as Sternklang. The score is a vessel into which the musical energies are poured, and though the contents may differ between performances, the vessel itself lends its form.
The Special Models are the only times when the five groups are synchronized via the tam-tam, yet even within these there are part-patterns that may differ. Mixed in at different points of the music are the Constellations, based on actual constellation shapes interpreted as relative pitch and loudness. Meanwhile, the thirty different Models give instructions for how to sing the pitch material using the phonetic vowels from the constellation names so as to accentuate the overtones. Just as in Stimmung, the names are considered to be full of magical power. In all the overtones played there is a unique oscillation, created by the mouths of the vocalists, while the synthesizer players use timbre filters and the trombone players use mutes.
The five different groups can be conceived as their own constellations, at times vibrating with their own rhythms, songs, tones. At other times they come into synchronized harmony. Drifting about these constellations are the human listeners, being exposed at different points to the intense and pure musical light of the star sounds.
He followed up Sternklang with Ylem, Tierkreis, and Sirius. When Licht took over his compositional life starting in 1977, he managed to continue working in themes of space, and he worked dizzying amounts of spatialization and sound projection techniques into the various pieces that make up his magnum opus. Of these, Weltraum (Outer Space, 1991–92/1994), Komet (Comet, 1994/1999), and Lichter—Wasser (Lights—Waters, 1998–99) are especially significant. Michaelion (1997) is likewise discussed later, in the section on shortwave radio. In the Klang cycle, his final series of works, he continued to be inspired by the stars. The electronic chamber piece Cosmic Pulses sees him leave the orbit of previous Earth music entirely in his spatial exploration of space.
Stockhausen’s influence fed more or less directly into the Kosmische genre of music in Germany starting in the late 1960s.
Other Planes of There
If you’ve ever listened to the music of Sun Ra you know that space is the place. To say that Sun Ra was interested in space music from a cosmic perspective is an understatement.
The man from Saturn himself said "When I say space music, I'm dealing with the void, because that is of space too... So I leave the word space open, like space is supposed to be."
In the 1930s, when Herman Blount was taking a training course to become a teacher in Huntsville, Alabama, he received some visitors who established his true calling. He was to be a teacher, but not a school teacher. These visitors, Blount said, were aliens, who had antennas that grew above their eyes and on their ears, perhaps attuned to the wavelengths of cosmic music. They transported Sonny Blount, and this transportation caused him to metamorphose into Sun Ra, after his visit to the planet Saturn. There he was given a set of metaphysical equations that surpassed the trivial knowledge of Earth. At the proper time, these beings told him, when life on Earth was filled with despair, he could set out to teach humanity. The vehicle for his teaching was music, and his message was one of discipline.
This experience informed Ra's work for the rest of his life. It changed him on a fundamental level, and from it he continued his quest into music and metaphysics. Sun Ra steeped himself in mystic lore. His birth name came from Black Herman, the stage name of a stage magician, hoodoo practitioner, and seller of patent medicines. His act mixed the illusions of being "buried alive" and other escapes with that of a traveling medicine show catering to African-Americans. Black Herman was the author of Secrets of Magic, Mystery, and Legerdemain, which contained a mythologized biography and a selection of material on sleight of hand, hoodoo folk magic, astrology, lucky numbers, dreams, and more. The name Herman itself calls to mind that trickster and communicator Hermes, though its etymology is actually German, from the words harja- "army" and mann- "man".
Though Herman Blount changed his name, in many ways he followed in the footsteps of his namesake, and lived a life of magic and mystery. Like Black Herman he created a mythology around his life that became part of his teaching vehicle, just as his music became a vehicle for space travel.
Ra's band was not a band. They were a group of "tone scientists". They weren’t an orchestra, they were an arkestra, and their music was a way to travel the outerspace ways, and to bring the sounds of the cosmos down here onto Earth. The way Ra’s compositions swing showed that they weren’t tied to the gravity well of our planet, but orbited around vast interplanetary spheres.
For all the free-wheeling moments in parts of Sun Ra's oeuvre, it came from his total discipline. His music sounds wild, out there, but it came from his total devotion to music. He abstained from alcohol, and encouraged his band members to do the same. He abstained from sex, drugs, and even sleep. The rock and roll ethos was his antithesis. For him there was sanctity to his calling as a musician, tied up as it was with also being a messenger from another world. His band practiced for hours and hours: in the middle of the night when Ra couldn’t sleep, late in the afternoon when he was jolted out of a brief catnap, in the morning when they no longer remembered what day it was. They were always playing music. It was always in their mind and they were ready to swing.
Sun Ra and his Arkestra were so prolific it is beyond the scope of this section to go into the vast penumbra that is his legacy and work. The theme of space reverberates throughout his records. So do the sounds of the space age.
Sun Ra was one of the first jazz musicians, if not the first, to get into the synthesizer game, bringing the sound of the Minimoog into his already swirling cosmic palette. Sun Ra believed it was important for black musicians to get into the world of electronic music, to start exploring the experimental sounds of the space age made possible by technology. For the makers of synthesizers, jazz was a genre where they had yet to establish a presence. All that changed between 1969 and 1970 when Sun Ra was invited to visit the Moog workshop in Trumansburg, NY.
As one of the great jazz pianists Sun Ra had already availed himself of the electric sounds that became available in the 50s and 60s. These included electric piano, electric celeste, Hammond organ, and the Clavioline. The Clavioline was memorably used on Joe Meek's production of Telstar by the Tornados. It was a vacuum tube based monophonic keyboard that gave an otherworldly vibe to many songs. Sun Ra loved the expanded timbre palette these keyboard instruments gave his voracious appetite for sound, and he was always looking for what else might come down the line. The Moog was his ticket into the seventies.
Sun Ra had met Robert Moog when a journalist at the jazz rag Downbeat arranged for Sun Ra to visit the Moog factory. Sun Ra got a chance to get his expert hands on the Minimoog, which was still in pre-production. The great synthesizer maker even gave the great Ra a prototype to take back with him.
At the time the portable synthesizer was just an idea. Synthesizers then were messy affairs, taking up rooms and patched with huge amounts of cables. While the results of these instruments switched on many to their well-tempered sounds, as a touring instrument the Moog was untested, and its little brother the Minimoog was still in its infancy. Sun Ra not only tested its possibilities but took it out into the greater solar system on a scouting mission that brought space sounds into Sun Ra's live and recorded sessions. His track Space Probe, for example, was an extended solo with the Minimoog.
As new keyboards found their way into the market they would often find their way to Sun Ra, who continued to incorporate such stalwarts as the Yamaha DX7 into his interplanetary musical concepts.
From Kosmische to Hearts of Space
Kosmische can be considered a synonym for Krautrock. The term was in use in Germany before the Krautrock label got thrown onto bands like Can (whose members Holger Czukay and Irmin Schmidt were students of Stockhausen), Ash Ra Tempel, Faust and Guru Guru by the music press in England. Krautrock itself can be seen as a highly psychedelic vision of rock music with a heavy emphasis on synthesizers and propulsive motorik rhythms, dressed with jazz improvisations and avant-garde tape editing techniques. It owed less to blues music than rock's American and English counterparts did, yet was indebted to the scenes of free improvisation happening in art music and jazz circles. A lot of it can be cosmic and spacey, but the extended synthesizer escapades of Popol Vuh, Amon Düül II, and especially Tangerine Dream and Klaus Schulze all went on to put their stamp on the emergent genre of ambient space music that would be epitomized in the set lists of the radio show Hearts of Space.
On Tangerine Dream’s 1971 album Alpha Centauri the music was described in the liner notes as “kosmische Musik”. Julian Cope noted that the album was like Pink Floyd’s A Saucerful of Secrets, but minus the rock. The term spread further when their record label, Ohr, put out a compilation with the name as a title. These Germans had found inspiration in the range of sounds now available to them with the Moog Modular and the EMS VCS3. They were also eager to separate their sound from their troubled nation's past, and focusing on outer space, at the height of the space race and optimism about humanity's exploration of the cosmos, was one solution. Space rock continued as one vein of this music, and another more ambient strain continued to emerge from others who found inspiration in the star sounds of Alpha Centauri.
Klaus Schulze was another heavy influence on this emergent sound. Before he began his prolific solo career he’d already played with Tangerine Dream on their first album Electronic Meditation, after which he left to form Ash Ra Tempel, made one album with them and departed. He also played sessions with the acid soaked Cosmic Jokers. Once he went solo he truly flourished as an artist. His first solo album Irrlicht came out in 1972 and featured a modified electric organ as the main sound source, along with samples of classical symphonic music played backwards and run through a messed up amplifier to transform the sounds, which he mixed to tape for a three-movement symphony. Cyborg was his next album, and featured a similar set up, while Timewind from 1975 saw his first use of a sequencer, which became a staple of his process. The pieces here are sidelong masterpieces, easy to lose a sense of time in while listening.
It was in these same years that Stephen Hill founded his radio show Music from the Hearts of Space, originally on KPFA. He used the pseudonym Timmotheo, and when his co-host Ann Turner joined him, she used the on air alias Annamystic. In its original incarnation it was a three-hour long late night excursion into all things “space music”. Hill had been an architect by training, and he was interested in all kinds of contemplative music, and also music that could fill up a space. The kosmische sounds coming out of Germany certainly fit the bill. The program grew to fill its own niche and encompassed a wide range of ambient, electronic, world, new age, classical and experimental music.
Space music can act as an isolation chamber when skillfully constructed, and excels over extended stretches of time. Steve Roach and Robert Rich both got started in the late seventies, with albums coming out in the early eighties. Their complementary styles were perfect for the further growth of ambient space music, and the two artists became closely associated with the milieu of music presented on Hearts of Space.
At the age of twenty, when Steve Roach wasn't practicing to up his game as a motocross racer, he was listening to the sounds of Vangelis, Klaus Schulze, and Tangerine Dream. He acquired his first synthesizer in 1978 and taught himself to play, inspired by the music he'd been listening to. In 1982 his first album, Now, came out. Then came the bike crash, which led him into a near-death experience where he heard "the most intensely beautiful music you could ever imagine." He reorganized his life and dedicated it to recreating the music he had heard. Out of this experience came his landmark and timeless album Structures from Silence. Roach has said that others who have had near-death experiences tell him that they heard similar music. From that time on his life has been devoted to bringing people music that communicates a spiritual perception of space and time, of flow, at once in touch with the landscapes of the earth and with the vast expanse of silence within the void.
The three long tracks on Structures from Silence encapsulate the listener within a web of harmonic waves. From that release onwards Roach has been relentless in his mission to bring a music of space, stillness, and quiet noise into the hearts and heads of his many listeners. The music of Roach became a staple on Hearts of Space, and a bridge between the adjacent worlds of ambient and new age. Tribal soundworlds were also explored when Roach visited Australia. He fell in love with the desert outback and the didgeridoo. He learned to play the instrument, and started incorporating it into his music. Roach was also studying the Aboriginal Dreamtime, and going on walkabouts in the deserts of his native California. These influences came to the fore on his 1988 classic Dreamtime Return. The desert became a spiritual home for Roach, and he eventually moved to Arizona where the wide open landscape continues to be a source of inspiration. Out of these experiences, and collaborations with many artists, Roach helped to create the tribal ambient and tribal techno subgenres.
Another artist in a similar vein, who has also collaborated with Roach, is Robert Rich, whose music is another frequent touchstone on Hearts of Space playlists. The two began their careers around the same time, with Rich releasing his first album Sunyata in 1982. Like Roach, his signature soundworlds have helped to further define an organic and at times tribal strain of ambient. Rich also goes in for propulsive, beat-centered trance rhythms and extensive explorations of alternate tuning systems, recalling the works of Terry Riley and Steve Reich, abetted by a sequencer. Robert Rich also has a penchant for all night concerts, just as Riley did with his longform raga inspired minimalism, but Rich took his performances in a different direction, with quieter sounds. He used his sleep concerts as a vehicle for exploring the nature of sleep, consciousness and dreams.
Hearts of Space founder Stephen Hill notes, “What's now being called Ambient music is the latest chapter in the contemplative music experience. Electronic instruments have created new expressive possibilities, but the coordinates of that expression remain the same. Space-creating sound is the medium. Moving, significant music is the goal.”
Radio remains a perfect medium for presenting this type of music, and Hill and Turner would run hour-long blocks with no voice interruptions, announcing what had been played only at the end of each hour. This allowed the listeners to sink into the experience without being brought out of their contemplative reverie.
In 1983, after ten years on KPFA, Hearts of Space started to be syndicated on 35 National Public Radio stations around the United States via the Public Radio Satellite System. It continued into the era of net streaming, and in 2009 it was still on two hundred public radio stations. It moved into orbit with Sirius XM for a time. On November 12, 2021, it reached its latest milestone, 1,300 installments.
Earth Station One: John Shepherd Beamforms to Space
Other shows mining the same vein have also achieved great success on the public radio circuit, one of the most popular being Echoes, created in 1989 and hosted by John Diliberto. Earth Station One, created by John Shepherd, was the most innovative: Shepherd not only played classic space music, but attempted to broadcast it to the extraterrestrial lifeforms he believes live in outer space.
Something must have been in the air in the early seventies, if not in the acid, as John Shepherd embarked on his own quest to transmit space music into space, beginning at age 21 in 1971. He’d been listening to radio shows about the UFO phenomenon, and was an avid electronics hobbyist who had begun tinkering in his teens, building equipment on his own out of surplus and whatever parts he could scrounge. He was also a science fiction buff, and wanted to be able to build the kind of machines he saw in TV and film.
As he played around with parts, the idea of building something that could communicate with aliens came to him. Between some ARRL manuals, an electronics 101 course he took in high school, and what he taught himself, he started putting together a station at his grandparents' home in Michigan. He had a friend in Traverse City who was as into music as he was into electronics and SF films. They would listen to his friend's collection of over 4,000 albums in eight- and ten-hour shifts.
In his first attempts at communicating with extraterrestrials he used binary tone pulses on a 150-watt transmitter. Then he upped his game as Project STRAT (Special Telemetry Research and Tracking) was born out of the stew of influences affecting him and his destiny. Why not transmit music? He put together other set ups, and in time had a 60,000 volt transmitter to beam shows that featured Can, Kraftwerk, Cluster, Neu! and other bands from the German kosmische scene into outer space, outside of earth and lunar orbit, out into the void. His shows also featured different world music, minimalist composers, and sometimes jazz.
“I felt that music was a sort of universal language and would best suit the open form of communication. It doesn’t need much in the way of translating and most of the music I selected was of the instrumental variety. I felt the more genuine forms of music offered something meaningful. It has to be something that inspires the mind and imagination. That's when it's special,” he said.
His eccentric passion was entirely funded by odd jobs, and he kept at his quest to communicate with higher intelligences using technology and art for twenty-seven years. Without much in the way of financial help for his pet project, he finally had to shut down the station in 1998. Its legacy, however, lives on, and with the synthesis of electromagnetic communication and music, perhaps others will step in to bring the space music of Earth to those ear-perked aliens, listening, out there, somewhere in orbit.
Ambient remains a popular genre for listeners and musicians, and it is my belief that these related forms of contemplative sound will have spaces on the spectrum for decades to come, that the music of the spheres will continue to reverberate across airwaves and ionosphere, and even out into the solar system and beyond.
.:. .:. .:.
References / RE/Sources:
Notes for A Brief History of Space Music
Sterritt, David. "The 'furniture music' of rock star Brian Eno." The Christian Science Monitor, May 3, 1984.
Holmes, Thomas B. Electronic and Experimental Music: Pioneers in Technology and Composition. Routledge, 2002.
Sun Ra sources:
Szwed, John F. Space is the Place: The Life and Times of Sun Ra.
Harden, Alexander C. "Kosmische Musik and Its Techno-Social Context." IASPM Journal, ISSN 2079-3871.
In 1967 FM audio synthesis was discovered by John Chowning during his experiments at Stanford University. It uses frequency modulation on audio waveforms in a manner similar to the way frequency modulation is done on radio waves. FM radio had come along before FM synthesis, and it was Chowning who first did the research necessary to apply the pertinent equations to audio.
John Chowning was born in Salem, New Jersey in 1934. As a young man he joined the service and studied music at the Navy School of Music. After leaving the navy he went to Wittenberg University in Ohio, where he got a bachelor's degree in music in 1959. Diploma in hand, he hopped across the pond to Paris to study with Nadia Boulanger, who introduced him to the music of Pierre Boulez and Karlheinz Stockhausen. He was bitten by the electroacoustic music bug and became fascinated with the idea of using loudspeakers in composition.
After three years of studies in Paris he went to Stanford in 1962. In 1963 Max Mathews wrote his famous paper on the Music IV program he had made using the computers at Bell Labs. It was an entirely new way of making music. In January of 1964 a friend of his passed along a copy of Mathews' article. At the time Chowning hadn’t even seen a computer, yet one of the statements made by Mathews rang inside his head: “There are no theoretical limitations to the performance of the computer as a source of musical sounds, in contrast to the performance of ordinary instruments.” Excited by the ideas in the article, Chowning took a computer programming course, and it convinced him he could learn.
He also got in contact with Max Mathews and made a visit to Bell Labs the following summer. Mathews gave him the punch cards that made up the Music IV program from the Bell Labs compiler. He took this gift back with him to Stanford.
The world of computer music was small, as it was a totally new field. His school, however, was equipped with state of the art computers at the Stanford Artificial Intelligence Lab (SAIL), where a spirit of interdisciplinary investigation reigned. SAIL had been established in 1962 by Professor John McCarthy, a computer and cognitive scientist who founded the field of artificial intelligence, having co-written the paper where that term was first used. McCarthy pioneered computer time-sharing among different users, a solution to the long stretches during which a machine was tied up working out the intricacies of a single program.
With the help of the friendly hacker and undergraduate David Poole he got the Music IV program up and running at SAIL. They used an IBM 1301 disc, which served as the common storage unit between an IBM 7090 and a DEC PDP-1 computer. The music generated by the program was saved to the drive. Poole helped Chowning obtain the audio by writing a double buffer program that eventually allowed for two analog signals to be recorded to stereo tape.
Fellow lab rats like Poole provided a hospitable and encouraging environment for learning, and he taught himself the other skills he needed as one of the first computer composers and musicians: programming, signal theory and acoustical physics, all fields of study outside his initial academic wheelhouse.
These new skills opened up further possibilities for Chowning to conduct deep research into the nature and properties of sound and music, and enabled him to translate the algorithms for RF frequency modulation into something that would work for audio frequency modulation.
“Music is a symbolic art,” Chowning has said. The Western classical tradition is accustomed to the role of the composer as someone who often puts music into notation before it is ever heard played by a full ensemble. The music comes first from the imagination, is then composed on paper, and only later played by musicians. The computer and its programming languages gave composers a different tool for musical realization, and in its capacity as a sonic instrument, the computer gives access to a gamut of timbre that had before only been available in the platonic realm of ideas. With the tool itself realized, further realizations followed, and the ideas were able to be brought down from the platonic realm into listenable form.
SPATIALIZATION AND DOPPLER SHIFTS AND VIBRATO
As with many other composers of his generation who’d been stimulated by the work of Stockhausen, Chowning became very interested in the spatialization of sound. The experiments he conducted using a quadraphonic speaker set up around a listener in the shape of a square led him to his discovery of frequency modulation within the range of audio spectra and the subsequent creation of FM synthesis.
One of Chowning’s experiments was to divide the levels of intensity between the pairs of left and right speakers. The differences of intensity created sound illusions of distance or closeness. Next he worked with Doppler shifts and reverb effects to create the experience of sound moving within what he termed the “listener sound space,” an arrangement of speakers with listeners seated within.
Reverb was a key ingredient in his acoustic work, and he discovered that if reverb is applied equally to all channels it negates any spatialization or perceived distance effects of the audio. From this he learned there are two types of reverb: global and local. The global is applied to all sounds equally in a mix. The local is applied only to certain signals emerging from specific loudspeakers. Chowning then came up with an equation that showed how reverberation within a small space remains basically constant even as signal distance is increased. In a large space the equation can determine the distance of a sound based on the ratio of reverberant and non-reverberant signal.
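To make the global-versus-local distinction concrete, here is a minimal sketch in Python of the distance cue just described. It is my own construction, not Chowning's code: the direct signal is attenuated with distance while the reverberant send is held constant, so the ratio between the two carries the impression of depth.

import numpy as np

def distance_mix(dry, distance, reverb_send_level=0.2):
    """Scale the direct signal by 1/distance; hold the reverb send constant."""
    direct = dry / max(distance, 1.0)       # direct sound decays with distance
    reverb_send = reverb_send_level * dry   # global reverb stays roughly constant
    return direct, reverb_send

# As distance grows the direct level drops but the reverb send does not,
# so the direct-to-reverb ratio encodes how far away the source seems.
tone = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
near_direct, near_rev = distance_mix(tone, distance=1.0)
far_direct, far_rev = distance_mix(tone, distance=8.0)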
With equations in hand, he programmed a spatialization routine at Stanford in 1972. It had a graphical aspect that allowed the composer to draw the trajectory of sound movement from one speaker to another. The program worked with two different aspects of velocity: angular and radial. The angular velocity governed how fast a sound moved around the listener, realized as changing intensity between speakers. The radial velocity governed movement toward or away from the listener, realized as a frequency shift, i.e. the Doppler effect.
The Doppler effect is most often heard in everyday life in the sounds of objects moving closer to a listener and then farther away. Striking examples come from traffic of all sorts: trains, airplanes, the whizz of automobiles and motorcycles, the blaring siren of an emergency vehicle. The Doppler effect can also be experienced when there is a loud stationary source of sound but the listener is moving around it, such as the bump of bass emanating from a house party on a Friday night while a couple walk their dog around the block where the party is taking place.
Doppler shift was first described for the light spectrum by Austrian physicist Christian Doppler in his 1842 paper, On the coloured light of the binary stars and some other stars of the heavens. Three years later Buys Ballot ran tests to see if Doppler shift was also present in the audio spectrum, and showed that it was: the pitch of a sound is higher than the emitted frequency as the source approaches, and lower than the emitted frequency when it recedes. The French physicist Hippolyte Fizeau independently discovered the property in electromagnetic waves in 1848. Since that time a number of equations have come into use to mathematically model the phenomenon. The Doppler effect has gone on to be used in a number of settings such as radar, satellite navigation and communication, medicine, astronomy and the ubiquitous siren, among others.
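The standard textbook relation for sound in air (not a formula quoted from Chowning) is easy to state in a couple of lines: the observed frequency rises as the source approaches and falls as it recedes.

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

def doppler(f_source, v_source, v_listener=0.0, c=SPEED_OF_SOUND):
    """f_observed = f_source * (c + v_listener) / (c - v_source).
    Positive v_source means the source is moving toward the listener."""
    return f_source * (c + v_listener) / (c - v_source)

print(doppler(440.0, v_source=30.0))    # approaching: ~482 Hz, pitch up
print(doppler(440.0, v_source=-30.0))   # receding: ~405 Hz, pitch down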
In working with sound intensity, Doppler effects, and reverberation, Chowning realized there was much more going on in the perception of the loudness of sounds in space than just the distance and decay rate of audio as it travels. Vibrato was another factor in acoustics that could change the way a sound was perceived. Vibrato provided the next key he needed to unlock audio FM synthesis.
The computer-generated waveforms Chowning created were not natural. In nature sounds are quasi-periodic, yet a computer is capable of making a perfectly periodic sound. Some critics of computer music have pointed out the unnatural quality of these machine-generated tones. To make the timbres sound more natural, variations have to be created in the waveform to make it quasi-periodic. Chowning did this by micro-modulating the frequency with vibrato.
This led to two discoveries. For one, when a sound is made of multiple partials, he realized that adding small but equal amounts of vibrato to each partial creates perceptual fusion. This fusion creates the illusion in the listener that the sound is one single tone. Perceptual fusion is also at work in film: the eye takes all the motion to be one continuous whole when it is in reality a series of projected frames. His second discovery was source aggregation, created when small, unequal amounts of vibrato are applied to groups of partials. The listener perceives these as separate tones and sounds. He made extensive use of this latter effect in his 1981 composition Phoné.
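A rough sketch of the two effects, with illustrative parameters of my own choosing, might look like this: the same partials fuse into one voice when every partial gets the same vibrato, and split into separate sound sources when each gets its own.

import numpy as np

def partials_with_vibrato(freqs, rates, depth=0.005, dur=1.0, sr=44100):
    """Sum sine partials, each frequency-wobbled by its own vibrato rate."""
    t = np.arange(int(dur * sr)) / sr
    out = np.zeros_like(t)
    for f, rate in zip(freqs, rates):
        vibrato = 1.0 + depth * np.sin(2 * np.pi * rate * t)
        phase = 2 * np.pi * np.cumsum(f * vibrato) / sr  # integrate frequency
        out += np.sin(phase)
    return out / len(freqs)

freqs = [220, 440, 660, 880]
fused = partials_with_vibrato(freqs, rates=[5, 5, 5, 5])          # one tone
split = partials_with_vibrato(freqs, rates=[4.6, 5.3, 5.9, 6.4])  # aggregates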
THE BIRTH OF FM SYNTHESIS
The same principle at work in radio FM, where a carrier signal is modulated by an input signal, is used in FM synthesis. Audio FM synthesis is achieved by using one signal, called the modulator, to change the pitch of another signal, called the carrier, within a similar audio range. This modulation adds new information to the carrier signal and changes its timbre. The use of multiple modulators on one carrier gives the synthesist further variables for shaping the final sound signal.
The stage had been set for this discovery as Chowning continued to explore the effects of vibrato. He noticed that when the rate of vibrato entered the audio range, around 20 Hz, partials started to form within the spectrum. He also noted how the relationship between the modulator and the carrier determined whether a sound was harmonic or inharmonic, as well as producing changes in the timbre.
As he continued to explore he learned that if the modulator frequency is a whole number multiple of the carrier frequency, then the partials will be harmonic. Next he discovered the modulation index, the ratio between the depth of modulation and the modulation frequency. He learned this could be used to change a signal's bandwidth over time: when an amplitude envelope is applied to the value of the modulation index, extra audible partials appear and disappear, changing the sound.
Similar effects had been achieved with additive synthesis, but those often require sixteen or more oscillators, whereas FM synthesis could achieve great results with just two, the modulator and the carrier, though more can also be used.
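A two-oscillator FM voice along the lines just described fits in a few lines of Python; the parameter values here are illustrative, not Chowning's. The modulator wobbles the carrier's phase, the index sets how far the sidebands spread, and letting the index follow an envelope makes the bandwidth evolve over the note.

import numpy as np

def fm_tone(fc, fm, index, dur=1.0, sr=44100, amp=0.5):
    """y(t) = amp * sin(2*pi*fc*t + index * sin(2*pi*fm*t))"""
    t = np.arange(int(dur * sr)) / sr
    return amp * np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

# Whole-number carrier-to-modulator ratio -> harmonic partials.
harmonic = fm_tone(fc=220.0, fm=220.0, index=5.0)
# Irrational ratio -> inharmonic, bell-like partials.
bell = fm_tone(fc=220.0, fm=220.0 * 1.4142, index=5.0)

# A decaying index envelope narrows the bandwidth over time,
# the way a struck bell loses its bright attack.
sr = 44100
t = np.arange(sr) / sr
index_env = np.linspace(8.0, 0.5, sr)
struck = 0.5 * np.sin(2 * np.pi * 220 * t + index_env * np.sin(2 * np.pi * 311 * t))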
In the summer of 1967 Chowning had visited Jean-Claude Risset and Max Mathews at the Bell Laboratories. A few months later in the fall he had made his discovery. In December he visited Bell Labs again. Risset took notes about what John Chowning had discovered with FM synthesis.
From his notes Risset did his own work and ended up creating the first composition using FM synthesis, Mutations, in 1969. Mutations was commissioned by the GRM and was composed on computer and two-track tape. Made at Bell Labs, it explores the idea of composing at the very level of sound itself, programming it and creating it all on the computer. Gradual changes or mutations occur over the course of the piece, “including the shift from a range of discontinuous heights to continuous frequency variations.”
The piece used the endless glissando, or barber pole of sound, that Risset had devised for a previous piece, Little Boy, in 1968. This musical barber pole was similar to the Shepard tones also created at Bell Labs using Max Mathews' MUSIC software.
Mutations received its premiere at the Moderna Museet in Stockholm in 1970.
Though Risset gets to claim the first composition to make use of FM tones, Chowning wasn’t far behind with his work Turenas in 1972. It makes use of FM synthesis, his surround sound setup, and his programming for Doppler shifts in Music IV. The title itself is an anagram of “natures,” and Chowning strove to create realistic timbral sounds with artificial means.
The first of the piece's three movements makes use of the mathematical formula for a Lissajous pattern, also called a Bowditch curve. This is the pattern produced when two sinusoidal motions are combined, their axes at right angles to each other. It was first studied in 1815 by the American mathematician Nathaniel Bowditch, and the curves were later studied by Jules Antoine Lissajous, a French physicist who used a compound pendulum that poured out narrow streams of sand to study the pattern.
The curve is well known in the world of electronics where it can be made visible using an oscilloscope. With the oscilloscope the shape of the curve shows characteristics of electronic signals. The curves are used to study the properties of any pair of simple harmonic motions at right angles to each other.
The Lissajous patterns came to be used in determining the frequencies of sounds or radio signals. A known signal frequency is put onto the horizontal axis of an oscilloscope, and the signal that needs to be measured is put on the vertical axis. The pattern that results is a function of the ratio of the two frequencies.
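The curve itself is just two sinusoids combined at right angles, easy to sketch in Python (the parameters are illustrative):

import numpy as np

def lissajous(a, b, delta, n=2000):
    """Return x, y points of x = sin(a*t + delta), y = sin(b*t)."""
    t = np.linspace(0, 2 * np.pi, n)
    return np.sin(a * t + delta), np.sin(b * t)

# With a 3:2 frequency ratio the figure closes into the classic
# pretzel-like shape; on an oscilloscope the ratio of the two input
# frequencies can be read off from the number of lobes on each axis.
x, y = lissajous(3, 2, np.pi / 2)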
When Chowning originally made a sketch of the proposed movement of sound in space for Turenas at Stanford, an engineer commented that it looked like a Lissajous pattern. Chowning decided to go ahead and use the Lissajous pattern proper. One of the properties of a Lissajous path is that its rate of change slows as it approaches its peak, the way a pendulum slows at the top of its swing.
Chowning used a double Lissajous to surround the listener in these mathematical patterns. The second movement is a tour de force of everything Chowning had learned. He uses reverberation, vibrato, modulation, and many timbral transformations to showcase the versatility of FM synthesis.
STRIA AND THE GOLDEN MEAN
The great astronomer and explorer of the harmony of the spheres, Johannes Kepler, said, “Geometry has two great treasures: one is the theorem of Pythagoras, the other the division of a line into mean and extreme ratios, that is Phi, the Golden Mean. The first way may be compared to a measure of gold, the second to a precious jewel.”
Chowning’s most famous work, Stria, from 1977, adheres with rigor to the use of the Golden Mean in the composition of all parameters and aspects of the work. It also makes strict use of FM synthesis. Goethe called architecture frozen music; Chowning took the sacred proportions of the Golden Mean and unfroze them so that they could be heard.
The Golden Mean is often also called the Golden Ratio or Golden Section, and has been studied since at least the time of Euclid. It is commonly symbolized by the Greek letter Phi, giving it another moniker, the Phi Ratio. The Golden Mean can be found when a line is divided in two so that the whole length divided by the long part is equal to the long part divided by the short part. In math, two numbers x and y are in the Golden Mean if the ratio of their sum to the larger number, (x + y)/x, is equal to the ratio of the larger number to the smaller, x/y. Phi is an irrational number approximately equal to 1.618, its decimal expansion continuing on forever.
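The property is easy to verify numerically; a quick sketch (the exact value is phi = (1 + sqrt(5)) / 2):

phi = (1 + 5 ** 0.5) / 2                   # 1.6180339887...
x, y = phi, 1.0                            # any pair with x/y == phi works
assert abs((x + y) / x - x / y) < 1e-12    # (x + y)/x equals x/y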
The Golden Mean can be found in the sacred art and architecture of many traditional civilizations, from Egypt to Islam, from China to the great cathedrals of the Gothic Middle Ages, and many points in between. It can be found in many natural forms, such as certain leaves and the shell of the Nautilus pompilius. Wherever it is found there exists a manifestation of this natural harmony.
In his FM research Chowning discovered that when he composed using powers of the Golden Mean, applying them to the carrier-to-modulator frequency ratio, the low order sideband components obtained were also powers of the Golden Mean.
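A small numerical sketch shows why the choice works. FM sidebands fall at fc + n*fm and fc - n*fm; if the carrier and modulator are set so that fc/fm = phi (frequencies in arbitrary units below), the first-order sidebands are themselves powers of phi, since phi + 1 equals phi squared and phi - 1 equals 1/phi:

phi = (1 + 5 ** 0.5) / 2
fc, fm = phi, 1.0                       # carrier-to-modulator ratio of phi
upper, lower = fc + fm, fc - fm         # first-order FM sidebands
assert abs(upper - phi ** 2) < 1e-12    # phi + 1 == phi ** 2
assert abs(lower - phi ** -1) < 1e-12   # phi - 1 == phi ** -1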
The macrostructure of Stria relates to the Golden Mean, and so does its microstructure. It all revolves around 1.618. The first frequency heard in the piece is 1618 Hz, and all the durations in the piece relate to the Golden Mean as well.
Stria was written using MUSIC 10 at SAIL, and travels from highs to lows as it traverses the mathematics of the Golden Mean in different ways. The precise use of computer controlled timbre and vibrato throughout gives Stria a sound that is artificial, yet also natural sounding because of the use of the Phi Ratio as its structural component. Listening to it is like receiving a geometric download from the platonic realm.
CCRMA AND IRCAM
Chowning officially founded the Center for Computer Research in Music and Acoustics (CCRMA) in 1974, though the basis for it had already begun inside SAIL. The other founding members were Leland Smith, John Grey, Andy Moorer, and Loren Rush. The first course in computer composition had already been given at Stanford in 1969, taught by Chowning, Max Mathews, Leland Smith and George Gucker. Having shared the space and valuable computer time with other researchers at SAIL, it was soon time for those interested in the specifics of composing with computers to have their own department at Stanford.
One of the technologies developed at CCRMA was called the Samson Box, or the Systems Concepts Digital Synthesizer, the brainchild of MIT graduate Peter Samson. This system was used until Apple came out with their Unix based system. Michael McNabb composed his piece Invisible Cities (based on the novel by Italo Calvino) on the Samson machine.
Just as at SAIL, use of the Samson Box had to be time-shared. “Although the Box was a computer highly optimised for digital signal processing, we didn’t control it in real time because we decided to make it accessible to everyone, and ran a time-sharing environment so that most of the time in composition was spent in preparing the command files for the device. Once those files were written, the music — four channels of audio with integrated reverberation — could be produced in real time and recorded to analogue tape. The Box then became available to the next user in the queue. Running it as an assignable device like a computer printer avoided the problems that would have occurred if we had run it in a studio in which one user could tie it up for hours on end.”
Meanwhile in Paris in 1970, President Georges Pompidou tasked Pierre Boulez with founding an institution for musical research. Boulez assembled his own team, which included the founders of CCRMA, to build this sound-house at the Pompidou Center. It became the world famous IRCAM, the Institute for Research and Coordination in Acoustics/Music. Chowning and his associates set up their French colleagues with the same computer system used at CCRMA. IRCAM later became famous for its development of Max by Miller Puckette. Other innovations and applications of research followed.
Chowning composed his piece Phoné at CCRMA. The piece later had its premiere at IRCAM. In Phoné Chowning expanded upon his previous compositions in FM synthesis to give the work the feeling and texture of the human voice.
For the community in Silicon Valley the CCRMA showcased their works in many outdoor concerts at the Frost Amphitheater, a venue used by the likes of the Grateful Dead, Jefferson Airplane and other stalwarts of hippie culture. The concerts of abstract avant-garde music made with computers became popular with the locals. “All these people who worked in the Valley then heard about these machines on which they're working also being used for a concert. And then we made it into a picnic thing at Frost. People would come early with their family and bring wine and get drunk and sit in the sun with the sunset. It became a happening, sort of. We always did really big sound systems and always quad. It was a big event and lots of fun.”
Their work continued on into the 1980s, adapting itself to new iterations of computers and programs, with new compositions by a variety of composers coming out of all the work. Chowning stayed on until 1996 when hearing problems, and the interpersonal fatigue caused by life in academia, caused him to step down into the role of professor emeritus.
If all of this sounds a touch esoteric, let it not be forgotten that John Chowning made a lasting imprint on the popular music of the 80s and onwards when Yamaha licensed the technology of FM synthesis to create the DX7 synthesizer. Work on its development began in 1974, but a commercial synth wasn’t available until 1983. With its ability to imitate the acoustic sounds of piano, brass, woodwind, and others, as well as create new timbres distinct from earlier analog synths, it quickly became a hit with musicians when it was released.
The DX7 was hard to program through its complex menus. Many who worked with it used its out-of-the-box presets, and these sounds became staples in 80s music. Brian Eno got one, and it became a crucial part of the setup and workflow in his home studio.
Eno notes, “I use the DX7 because I understand it. I was quite ill for a while, and I filled the time by learning it. I think it’s just as good as anything else. Sticking with this is choosing rapport over options. I know that there are theoretically better synths, but I don’t know how to use them. I know how to use this. I have a relationship with it.”
The DX7 is programmed with 32 sound-generating algorithms, each a different arrangement of its six sine wave operators. These give the DX7 its classic bright and glassy sound. The keyboard itself spans five octaves and has sixteen-note polyphony. New patches are created within its deep menu system rather than with cables, as had been done with analog synths, and these patches can be named and saved to its memory bank.
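What an "algorithm" amounts to can be sketched with the same sine-operator idea used earlier; the routing below is illustrative, not one of Yamaha's actual 32 tables. A serial stack makes one complex voice, while parallel carriers make layered ones.

import numpy as np

def op(freq, t, phase_mod=0.0, amp=1.0):
    """One sine 'operator': an oscillator with a phase-modulation input."""
    return amp * np.sin(2 * np.pi * freq * t + phase_mod)

sr = 44100
t = np.arange(sr) / sr
# Serial: operator 3 modulates operator 2, which modulates the carrier.
serial = op(200, t, phase_mod=op(400, t, phase_mod=op(800, t, amp=2.0), amp=2.0))
# Parallel: two carriers mixed, each with its own modulator.
parallel = 0.5 * (op(200, t, op(400, t, amp=3.0)) + op(300, t, op(900, t, amp=1.5)))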
After the success of the DX7, Yamaha released a plethora of lower cost FM synthesizers. A low-cost Yamaha FM sound chip, a descendant of the DX7's technology, also went into the Sega Genesis, making it the sound a generation of video game heads grew up jamming their thumbs to.
John Chowning thought the DX7 could also be used to teach about the properties of sound.
“Many basic acoustic phenomena can be demonstrated quite easily using the DX7. It could become an incredibly powerful tool for learning acoustics and psycho-acoustics at a very simple level.”
Since he stepped down from heading CCRMA Chowning has continued to hack audio, compose, and write. He has spent his life investigating the nature of sound and acoustics, he has programmed the music from his head to be output by computers, passing the vibrations from his mind to keyboard and mouse, until the airwaves of this world vibrate with vision.
.:. .:. .:.
Terman, Frederick E. Radio Engineering, pp. 483-489. New York: McGraw-Hill, 1947.
Lawlor, Robert. Sacred Geometry: Philosophy & Practice. Thames & Hudson, 1982.
At the same time that Reed Ghazala was discovering circuit bending, another Midwesterner was getting involved in the creation of the sound systems that would change the way live rock and roll music was performed around the country and around the world.
Bob Heil is an exemplar of the creative fusions that can happen when an ear turned on to the power of music also develops the knack for technical innovation. Born on October 5, 1940, he was an avid accordion player by age ten. At age thirteen his parents gave him a Hammond B3 organ. This gift gave him a life in music, and in turn radio, that kept him busy with creative fun and innovation for close to seven decades.
Heil quickly mastered the Hammond and at an early age got a job playing organ in a restaurant, where he made fifteen bucks every weekend. Two years later, with even more chops, he became the organist at the Fox Theater in St. Louis. Built in 1929 by William Fox, the movie palace was designed to be a showcase for the films of the Fox Film Corporation. Throughout the 1960s it was one of the leading movie theaters in St. Louis, and it has since been given another life as a performing arts venue.
The organ at the Fox Theater was massive, with over 4000 pipes, and Heil had to tune and voice them. This job gave Heil hands-on practice in concentrated listening: he had to go in, learn all the harmonics of the pipes, and be able to dissect what he was hearing.
Heil, K9EID, has left his mark in both music and amateur radio. The passion for radio also came to him young, when he got his ticket as an amateur radio operator at the age of fifteen in 1956. The hobby was quick to become an obsession. He plugged the earnings from his organist jobs into radio gear and began a lifetime of tinkering and working with audio and radio circuits. At the time there was excellent propagation on the amateur radio bands, and the six meter band, known to aficionados as “the magic band,” was hopping with contacts both close and far. Anyone who wanted to get on that portion of the spectrum to make contacts and hear distant stations was in luck. One night while Heil was tuning around the six meter band he heard something horrid and strange. It was an operator talking on single sideband, not at all common at that time in the six meter portion. On another evening Heil heard him again and they got to talking. Soon they started meeting up on the radio to talk every night. They became fast friends on the air, and one day this new friend, Larry Burrell, K0DGE, asked him to come see him in person.
Larry happened to be chief engineer at KMOX. Heil was blown away when Burrell showed him around the studios and control rooms of the mighty Midwestern AM station. Heil wanted to get on 6 meter single sideband just like his older friend and asked him if he would build him a unit. His friend told him no, he wouldn’t build one for him, but he would help Heil build one for himself. This proved to be a far greater gift than being given a radio. As Burrell “Elmered” Heil, ham slang for mentoring a newcomer, and helped him build his own rig to do single sideband on six meters, it sparked Heil’s love for building. After putting together a transverter for 6 meter single sideband, he built one for 2 meter single sideband.
Organs and Antennas
At school Heil wasn’t doing so hot. Music and radio were his passions, and he continued to fund his radio habit from the work he got as a musician. Yet somehow he managed to scrape by, and with his parents' encouragement, got into another beloved aspect of the hobby: setting up antennas. These antennas would prove to become important later in his career as a maker of high fidelity microphones and other audio equipment for musicians and radio operators.
One antenna he put up was a Telrex 6 meter spiral array. Another was a 75 meter dipole, a phased array also made by Telrex. Playing around with these antennas, Heil learned how to take them in and out of phase using coaxial cable.
Antenna phasing is used by hams and shortwave radio stations for beamforming, a technique that focuses a wireless signal in a specific direction, toward a particular receiving station, rather than letting the signal spread out in all directions as it typically does from conventional broadcast antennas. Phased arrays are especially desirable on the lower HF bands, where conventional beams are not feasible. In the VHF and UHF ranges of the radio spectrum most hams use Yagi type antennas for beamforming. A Yagi differs from a phased array in that only one element is driven by the transceiver; the rest of the elements are parasitic, re-radiating the driven element's signal at different phases. In a true phased array, all the elements are driven directly by the radio at different phases. Having a phased array allowed Heil to send and receive signals in specific directions so he could work different amateur radio stations in North America and around the globe going east, south, west or north.
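A minimal sketch of two-element phase steering (my own illustration, not Heil's actual setup) shows the principle: feed the same signal to two antennas with a phase offset, and the radiation reinforces in one direction while canceling in another.

import numpy as np

def array_factor(theta_deg, spacing_wl, phase_deg):
    """Relative field strength of a two-element array versus azimuth.
    spacing_wl: element spacing in wavelengths; phase_deg: feed offset."""
    theta = np.radians(theta_deg)
    psi = 2 * np.pi * spacing_wl * np.cos(theta) + np.radians(phase_deg)
    return np.abs(np.cos(psi / 2.0))

# Quarter-wave spacing fed 90 degrees apart gives the classic cardioid:
# full strength off one end of the array, a null off the other.
angles = np.arange(0, 360, 45)
pattern = array_factor(angles, spacing_wl=0.25, phase_deg=-90.0)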
One day Bob Heil got a call from Robert Drake, founder of the R. L. Drake radio company. Founded in 1943, Drake's company made high- and low-pass filters for government and amateur radio operators, and after WWII it expanded into equipment for hams. Robert Drake was interested in one of the radios Heil had built, a kilowatt transmitter for 2 meter SSB.
As Heil recalled Drake telling him, “We have a little meeting here at our club and I would love for you to come here and spend a day with us. It's actually a couple of days. We do it once a year in the Biltmore Hotel downtown. We cleared out all the furniture on one of the floors and we'll have Art Collins and the guys in one room. You have Carl Mosley and his antennas in another. We'll have Wes Schum from Central Electronics. We'll have Bill Halligan of [Hallicrafters],” and on and on. He names his list; I'm going, “Whoa. What do you want me to do, Sir?” “We want you to come and tell us how you built this station.”
This gathering was the Dayton Hamvention, and it quickly grew into one of the two largest annual gatherings of amateur radio operators and manufacturers in the world. Heil came and gave his presentation, and it was well received by the manufacturers and other hams in attendance. Part of the very purpose of the amateur radio service, as defined by the FCC, is to advance the state of the radio art. It is this experimental aspect of the hobby that has long been a beacon for some of its brightest stars.
After Heil’s presentation he got to talking with a British man who was there with his J. Beam Company. The man was looking for someone like Heil to run experiments with an antenna they had built, and asked him if he would like to carry out the work. He was more than willing, so they sent him what any ham would be happy to play with: a 128 element antenna array built for the 2 meter band. After they shipped the massive array to him, a contractor and fellow ham, K9EBA, helped him put up such a beast of an antenna, as did another friend who worked for Motorola. The fact that his parents let him put up a fifty foot wide antenna in the vacant lot behind their house was another blessing working in his favor.
This was the antenna Heil used to get started in 2 meter moonbounce using VHF SSB, but before he got into that he first got another job, this time at the Holiday Inn in St. Louis, where he built a pipe organ for their four star restaurant. It was extremely rare to have a pipe organ in a restaurant, and this helped the Midwest spot become a destination for travelers and organ fans from both sides of the continent.
In building the organ Heil again had the support of mentors, this time Martin Wick of the Wicks Pipe Organ Company, whom he’d met through one of his music teachers. He became close friends with Wick and would stop at his plant in Highland, Illinois on the way from his hometown of Marissa to play at the Holiday Inn in St. Louis. Wick had shown him one of the little theater organs he’d installed in a private home, and that gave Heil the idea of building a similar instrument for the restaurant at the hotel.
Once approval for the plan was in place he would go up to Highland every day to work on putting it together under the guidance of Wick and his employees. It took him about a year and a half to build the organ, with five ranks of pipes, a blower, reservoirs, relays and a large console. Ever curious, Heil wanted to learn how to voice and tune the organ himself just to see if he could do it, and with a bit of guidance from his mentors, he added this skill to his chest of valuable knowledge.
After he built the organ he got paid to play it six nights a week, and when he looked over the rack as he played he saw the sign for Mosley Electronics. Fate had conspired to place him just across the street from the Mosley antenna plant.
Mosley Electronics was the brainchild of Carl Mosley, W0FQY, later K0AXS, a ham who got his start in the world of radio back in 1918, when spark gap transmitters electrified the air with their crackling sound. In the late 1930’s and early 1940’s Mosley began making equipment, starting with the 3/4" tube socket that was standard equipment for most amateur radio operators at the time. He was working out of his basement when he started this operation, but soon he had so many orders he had to grow his business and hire employees to get additional help.
As his business grew, Mosley entered the market for television accessories as the TV era dawned in the 1950’s, building feed-through insulators, wall outlets and plugs. In 1951 he got into the antenna game with his famous “Vest Pocket” design for his fellow hams. The development of the design led from monoband to multi-band, and from there to the tri-band Vest Pocket utilizing one feedline. This innovation led to the antenna becoming a mainstay, to antennas in general becoming the centerpiece of his business, and to the building of the factory in St. Louis.
Mosley also made military and industrial antennas, and it was these innovations that led to the creation of the WWV antenna for transmitting time signals. In 1955 his company created the Trap-Master TA-33 amateur tri-band beam, setting the standard in the field.
From Marissa to the Moon
St. Louis was also the home of McDonnell Aircraft. In 1959 they were busy building the Mercury capsule for NASA. Once a month the agency's seven astronauts came to train at McDonnell, and they stayed at the Holiday Inn. They listened to Heil play the organ, and he got to be on friendly terms with the space cadets. One of them was a man named Alan Shepard, whose father had also been an organ player, and he was intrigued by the fact that the hotel had put such a custom built instrument inside the restaurant. As Heil and Shepard got to know each other, Heil told him about his ham radio hobby. He showed Shepard some pictures of the huge VHF array he had put up.
Heil recalls their conversation: "‘Wait a minute, you have this thing working?’ I said, ‘Yes.’ ‘Can we borrow it?’ I said, ‘Well, of course.’ ‘Ah,’ he said, ‘This would be great.’ I said, ‘Well, you need to take it down?’ ‘No, no, no,’ he said, ‘You have a phone patch?’ I said, ‘Yes Sir.’ He said, ‘Here's what we’re going to do. We're going to send you a signal from Houston in the telephone line. You patch it into your transmitter, into this 128 element. You point that sucker up to the moon and what we want to know is what kind of delay time [it has].’"
NASA had already calculated mathematically, without computers, what the delay time would be in bouncing a radio signal off the moon. With Heil’s array they would be able to test how accurate their calculations were. Heil was around 20 or 21 at the time, and his hobby had brought him into the big leagues just a few years into the space race.
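The back-of-the-envelope version of that delay calculation is simple enough to do today in a couple of lines (using the mean Earth-moon distance, which in reality varies over the month):

C = 299_792_458            # speed of light, m/s
MOON_DISTANCE = 384_400e3  # mean Earth-moon distance, m

delay = 2 * MOON_DISTANCE / C
print(f"round-trip delay ~ {delay:.2f} s")   # roughly 2.6 seconds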
“They would send little signals, just little shots, and they would listen for it. They had, of course, fantastic . . . I didn't know exactly what but probably 50 foot dishes, who knows, but it was NASA. That was just such a big deal for me,” Heil said of the time.
For four hours a night, six nights a week, he would play the organ for his job, and the rest of the time he spent building amateur radio gear, doing moonbounce experiments on VHF SSB with NASA, and making contacts on the radio. Around this time Joe Hall helped him get one of the transverters he had built onto the market, and it was the first one to be sold commercially.
All this, and Heil had never gone to college, having barely graduated high school. “Amateur radio was my college professor,” he is fond of saying.
Heil Sound System
In 1966 Heil was inspired to open his own Hammond organ and music store in his hometown of Marissa, Illinois. He dubbed it Ye Old Music Shoppe, and it was destined to become the rock and roll capital of the world. One day a high school kid came in with a guitar amplifier and asked Heil if he’d be able to fix it. Ever curious, he took a look inside and saw the tubes and other components were similar to the ham radio gear he tinkered on. With his trusty soldering iron he fixed it up for the guy. This happened to be a guitarist who was later a member of REO Speedwagon. He and other rock and rollers started patronizing Heil’s shop, and Heil began to develop a reputation with the rock music crowd, even though it was a genre he knew nothing about himself.
His shop started renting Hammond B3 organs to musicians and bands who were on tour in the area, often playing at the Kiel Auditorium. People like Janis Joplin, Jimi Hendrix and Ted Nugent would come in, and after they rented the organ they’d ask him about the PA system in the venue. Heil didn’t know much about the PAs in the concert halls; they hadn't interested him. He was interested in the sound systems for his organs. But he knew the little bitty columns of speakers where the bands played tended to sound horrible.
Fate intervened in his life once again at this juncture, when he went to visit his old friend George Bales, the stage manager at the Fox Theater, in 1968. When he got there he saw a bunch of boxes outside the stage door. George told him the theater was putting in a new set of speakers, and those were the old ones, being thrown out.
"‘Wait a minute. You're throwing those away? Can I have them?’" he asked his friend. His friend said “‘Sure””. Heil recalls, “The Ham Radio in me kicked in, I went and rented a truck.’”Ham’s have always been great scavengers of material and parts. Where one person might see old electronic junk a ham sees possibilities.
Heil took them to a vacant building he had in Marissa and started experimenting. The speakers were Altec 4s, and they were huge, about 10 feet wide, 8 feet deep and 8 feet tall; he had four of them. He put some radio horns in them, and got some JBL drivers and some McIntosh amplifiers. Next he needed a mixer, and got an Altec. From all of this gear he put together a great sounding PA. Unknown to him, nobody else in the music business was putting together sound systems in this manner.
A manager for one of the venues got wind of the PA and asked him if they could use it when they brought in different acts from Nashville, and Heil said yes. To Heil it was just a big hi-fi system, but the acts and the venue manager went bonkers over the sound it produced. Dolly Parton was among the first musicians to use the system.
At that point people around St. Louis started to talk about Heil’s achievement. Another manager came up to him at a show and asked him if he would take the PA on tour with the band the guy worked for. Heil explained he’d never been on a tour, but that he had a couple guys who liked rock music working for him, and that he’d get them and the gear rounded up and along to do these shows in Ohio. Two days into the gig he found out the lead guitarist for the band was a ham radio operator. His call sign was WB6ACU and his name was Joe Walsh, and the band was the James Gang. Walsh and Heil hit it off, and so began a lifelong friendship.
The next big jump in the progression of Heil Sound took place on February 2, 1970. The Grateful Dead were scheduled to play at the Fox Theater. A good friend of the Grateful Dead, the "Bear," Augustus Owsley Stanley III, was going to run their sound system. Owsley was himself an amateur radio operator, having secured a license during his stint as an electronics specialist for the United States Air Force. While in the service he also picked up his general radiotelephone operator license. His technical background served him well as an audio engineer and as a clandestine LSD chemist who supplied the Dead and their fans with copious amounts of the hallucinogenic drug. It is estimated that between 1965 and 1967 alone Owsley produced no less than 500 grams of LSD, amounting to a little more than five million doses. When he first got started making the stuff, acid wasn’t yet illegal, but it quickly became so, and it didn’t take long for the law to catch up with the man and his operation. With drug charges pending against him, Owsley had been ordered not to leave the state of California.
That pesky little detail didn’t stop him from going on the tour though. As Heil recalls, “They were going to do a short little Midwest, East Coast tour and their sound man was on probation out of the state of California. He wasn't supposed to be out of the state, but the drug agents and the FBI they found out that he was going to be on tour so they went to the first job. The first job and they sat and waited till they were finished playing. The group came on to St. Louis the second date. Now there were no cell phones. There was no communication in those days. The group shows up at the Fox at 4 o'clock in the afternoon. There's no PA. There's no Owsley. The group was the Grateful Dead. Well, they call back to their office found out that Owsley was in jail. The PA was confiscated; their group was not going to continue.”
George Bales from the Fox called up Heil with this situation on his hands, asking if he still had those speakers he had given him. The Grateful Dead were at the theater without a PA and they needed some help.
Bales put Heil on the phone with Jerry Garcia and the two talked about the equipment Heil had at his disposal. Garcia got amped; they would be able to pull off the concert in style. “We went up there and we did the show and it was marvelous.”
For the gig Heil also brought in a Langevin studio recording console he’d modified to use with the speaker system in a live music setting. He’d had help in rewiring the board from his friend Tomlinson Holman, who was at the time going to school at the University of Illinois. Holman later went on to have his own prestigious career in sound as the creator of the THX theater sound protocol. One of the things that made the mixing board innovative was an electronic crossover Heil had built into the console.
Heil had some help from some early Deadheads in getting the show together. "My two roadies, Peter Kimble and John Lloyd, knew all the Dead songs — they were big fans. So that night they moved the PA, set it up and mixed the show."
Heil had also devised a trick to deal with the pesky problem of feedback, every stage musician’s bane. "We would run the microphones out of phase from the monitors, something that nobody had been doing yet. Since they were out of phase with the microphones and the FOH system, anything that leaked in from the monitors would be canceled out. As a result, we could get these things incredibly loud before they would feed back. That's one of the things that Jerry Garcia really loved."
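To see why the trick works, here is a minimal sketch in Python, assuming an idealized case where the monitor leakage arriving at the microphone is an exact copy of the monitor signal: reversing the polarity of one path makes the sum cancel, leaving no loop energy to build into feedback.

```python
# Idealized sketch of the out-of-phase monitor trick: a tone leaking
# from the monitor into a polarity-reversed microphone path sums to
# silence, so there is nothing left to ring into feedback.
import numpy as np

t = np.linspace(0, 0.01, 480, endpoint=False)  # 10 ms at 48 kHz
leak = np.sin(2 * np.pi * 1000 * t)            # 1 kHz monitor leakage
mic_path = -leak                               # mic wired out of phase
print(np.max(np.abs(leak + mic_path)))         # 0.0: total cancellation
```

In practice the cancellation is only partial, since real leakage is delayed and filtered by the room, but it lowers the loop gain enough to buy headroom before the system rings.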
The show was a massive success, and the Grateful Dead asked Heil, his crew, and his sound system to join them on the road. On that night the live sound system for rock and roll was born.
“They took us right out of there that night on the rest of the tour. Jerry and I became very good friends. We could be here a long time talking about the things that we did together, the equipment, the technology, that's where I'm at with this. It wasn't so much of the group as it was Jerry and his love for gear and what we could do with different things and help them.”
From that point on Heil started receiving more and more requests to do the live sound for touring rock bands. He did the sound for Humble Pie, which is when he became friends with Peter Frampton, and he worked with ZZ Top among many others. Heil's setup had become an instant hit, and it soon became the template for the modern concert touring sound system.
He was on tour with Chaka Khan in Chicago when he got a call from The Who in Boston, where they needed his help. He wanted to help them, but didn’t want to leave Chaka Khan stranded and wasn’t sure how he’d even be able to make it to Boston with the truck of gear. Heil Sound stored and kept all their traveling equipment in a 40-foot semi, the first people to do so. The Who suggested he rent a plane from Tiger, an airfreight company. He got a friend with another PA system to cover Chaka Khan, and they drove their semi onto a 707 jet and flew to Boston the next day.
Heil’s sound system did what the Who needed it to do and set the standards for playing large arenas and coliseums. The Who used Heil’s system on the rest of the tour, and from this encounter Heil forged a lifelong friendship with Pete Townshend. Townshend later called him to London because he had an idea for Bob. He wanted to know if he could build a PA for quadraphonic sound. Once again up for the task, Heil Sound built the system used for the Quadrophenia tour in 1974.
As the 1970s progressed, Heil would have three of his custom PA systems on tour at any one time with acts like J. Geils, Jeff Beck, ZZ Top, and others, with a crew of 35 people working to make it all happen. Heil was also responsible for the first use of monitor speakers by musicians in concerts, so they could hear themselves playing in these huge venues, and was the first to build stage monitors that didn’t feed back. All his knowledge in building came from the expertise with electronics he’d developed as a ham radio operator.
The Talk Box
With his buddy Joe Walsh he also built a talk box for guitar that could withstand the rigors of the stage. The talk box is an effects unit that shapes the frequency content of a sound, usually of a guitar, by applying the voice to the sound of the instrument. The original talk box had been invented by musician, band leader, and amateur radio operator Alvino Rey, W6UK, back in 1939. Rey got the idea that he could wire a carbon throat microphone in such a way as to modulate his electric steel guitar. Carbon throat mics had originally been developed for military pilot communications, so pilots could be understood even in extremely windy and noisy cockpits. Rey put one on the throat of his wife Luise King, a singer in The King Sisters group. She would stand behind a curtain and mouth the words alongside the guitar to modify its sound. It was a move that added unique coloration and novelty to his performances.
Some producers at a studio in Nashville had shown the trick to Joe Walsh, having given him a little box with a big hose that he drove with his guitar amp. It was good enough for the studio, but the setup wasn’t powerful enough for the big live concerts Joe was playing at the time in his band Barnstorm. Heil and Walsh, along with the latter’s guitar tech “Krinkle,” combined a 250-watt JBL driver and a high-pass filter to make the first Heil Sound Talk Box. It was used on Walsh’s solo single, Rocky Mountain Way.
Later Heil’s Talk Box was used to great effect by Peter Frampton, who received one as a Christmas gift. His girlfriend hadn’t known what to get him for the holiday and called up Heil for advice. Heil had just the thing for him and sent her a hand-built Talk Box whose components were housed in fiberglass and which used a 100-watt high-powered driver. This was the tool that gave his Frampton Comes Alive! album and tour its signature sound, to the point where Peter Frampton and the talk box are almost synonymous.
A Dish for Hungry Satellite Hunters
As the 1970’s rolled on into the ‘80’s Heil got bit by the satellite bug. His friend Bob Cooper was a guy he had done some of his moonbounce experiments with back in 1962. When he heard about some of Cooper’s shenanigans building a satellite dish that used a coffee can as a low noise amplifier (LNA) to pick up the backhaul of HBO’s feed, he made a point to reconnect with his old friend. Once a month Cooper had an informal get-together in Oklahoma where he showed others how to build these satellite receiving systems, and Heil got into the game of TVRO, or television receive-only.
Communications freaks love to receive anything, and satellite transmissions are particularly exciting to some devotees. At the time a dedicated group of communications hobbyists were getting into receiving the uncut and unedited content of satellites as it was transmitted, unencrypted and “in the clear,” to different local stations, who would slap on their particular channel graphics and logos before presenting it as a packaged TV program. Sports broadcasts, for instance, would be transmitted as raw footage, later to be edited into the highlights section of a local news program.
After getting into the technical aspects of this for a while, Heil got to be one of the first ten on the test team for the commercial satellite operation DirecTV in 1991. His store was one of the first to sell DirecTV. It was around this time his company also worked on installing custom home theaters, but after his stint in this capacity, he got out of the satellite game, and his mind turned once again to the radio hobby.
Hi-Fidelity for High Frequency
One day Heil turned on his radio and didn’t like what he heard on the air. It wasn’t what his fellow hams were rag-chewing about that disconcerted him. It was how they sounded when they talked to each other. He wondered where all the great-sounding Art Collins radio gear had gone, and how it was that such good equipment had been replaced by gear that did not have the same audio quality. It was in seeking a solution to this problem that he started making microphones for hams and musicians.
Of the many mentors Heil had over the years, Paul Klipsch was another whose knowledge and friendship changed his life. Klipsch was an engineer and a pioneer of high fidelity audio. Among the many patents he held was one for seismic prospecting and recording seismic waves. Seismic prospecting is a method of geophysical exploration where vibrations are made in the earth by firing small explosive charges, and other means, into the ground. The resulting waves are measured and studied so as to reveal the underlying strata, or composition of layers of rock and soil. [Klipsch’s work in these fields possibly overlapped with the seismic work and interests of Gordon Mumma.]
Klipsch had been dissatisfied with the quality of phonographs and early speakers in the same way Heil had been dissatisfied with the sounds of hams on the air: both thought the sound was bad. Neither was content to let things stand in such a state. Klipsch used his technical abilities to create better sound systems and environments, which led to the development of the corner horn speaker, a vast improvement over previous iterations of the phonograph horn.
Klipsch had his lab in an old AT&T exchange building and Heil liked to visit him there. He directed Heil to study the work put out by the idea factory of Bell Labs, specifically the work of Dr. Fletcher and Dr. Munson. These two Bell Labs scientists gave Heil a secret weapon in his quest for audio excellence: the Fletcher-Munson curve.
Dr. Harvey Fletcher had been born in Utah in 1884, graduated from Brigham Young High School in 1904, and from Brigham Young University in 1907. Gifted in physics and mathematics, he decided to go to the University of Chicago for his doctorate. Nervous about going to the big city on his own, he persuaded his sweetheart to marry him, and they went together, even though he had not yet been admitted to the school. Robert A. Millikan, a Nobel prize winning physicist, became a mentor to Fletcher and helped him get started at the University, where he eventually earned the first summa cum laude ever awarded by the institution. During this time period Fletcher worked closely with Millikan, who figured out how to measure the charge of an electron, research that was fundamental to the growth of electronics and broadcasting technologies.
Fletcher eventually hitched his star to the Western Electric Company in New York, and from there went on to become the Director of Physical Research at Bell Laboratories. It was there under the auspices of pure research that his gifts fully blossomed. He published 51 papers, wrote two books, and had nineteen patents. In particular his two books, Speech and Hearing, and Speech and Hearing in Communication, set the precedent for further work on the clarity of audio.
One of the things Fletcher was interested in was how the sound of a typical talker was heard by a typical listener. He realized that small imperfections in speech could have drastic effects on a listener’s ability to perceive what was said. For the telephone system this meant they had to do everything they could to make sure their own technology did not interfere with its primary purpose of allowing distant voices to connect with each other. The instruments used to convert sound waves into electrical form and then back into sound waves needed to be able to do so without causing distortion.
Fletcher also conducted, with his colleague Wilden Munson, the first research on the frequency response of the human ear in 1933. By playing a series of tones they were able to determine how listeners perceived loudness at different frequencies, and from their results they learned that the frequency response of the human ear is non-linear. They also learned that frequency perception varies based on amplitude. They used the data from these experiments to create the Fletcher-Munson curve, which shows that the frequency range the human ear finds most sensitive is between 2 kHz and 5 kHz. It was all published in their paper, “Loudness, Its Definition, Measurement and Calculation,” in the Journal of the Acoustical Society of America.
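The equal-loudness contours Fletcher and Munson measured are the ancestors of the standard A-weighting curve still used in audio measurement today. As a rough illustration of the non-linear ear response described above, here is a short sketch evaluating the modern A-weighting formula (IEC 61672) at a few frequencies; note the deep roll-off in the bass and the slight boost in the most sensitive region.

```python
# Evaluate the IEC 61672 A-weighting curve, a descendant of the
# Fletcher-Munson equal-loudness work, at a handful of frequencies.
import math

def a_weight_db(f):
    """A-weighting in dB, normalized to 0 dB at 1 kHz."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (100, 500, 1000, 3000, 10000):
    print(f"{f:>6} Hz: {a_weight_db(f):+6.1f} dB")
# 100 Hz sits roughly 19 dB down, while 3 kHz is slightly boosted,
# mirroring the ear's sensitivity peak the text describes.
```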
AT&T used this research to equalize the phone lines and keep the maximum articulation of speech at the sweet spot between 2 and 3 kHz. Assiduous study of the Fletcher-Munson curve allowed Heil to make his next breakthrough and implement these findings in a line of equalizers and microphones.
Equalizers had already been made for the Hi-Fi stereo market, but for some reason hadn’t been put together for use by hams. Heil corrected that, and in 1982 he was the first to build one specifically for use on the ham radio bands, the EQ200. He made this available as a DIY kit after an article he wrote on it for QST Magazine set the ham community aflame. “Voice communication absolutely needs articulation,” he wrote. His equalizer helped to roll off all the frequencies below 100 Hz, which only muddied things up and were a waste of RF energy.
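The low-end roll-off Heil describes is, in digital terms, just a high-pass filter. The EQ200 itself was an analog design, but a minimal software sketch of the same idea might look like this (the cutoff and sample rate are illustrative values, not the EQ200's actual specifications):

```python
# A first-order high-pass filter: attenuates energy below the cutoff,
# the digital analogue of rolling off everything under 100 Hz.
import math

def one_pole_highpass(samples, cutoff_hz=100.0, sample_rate=48000.0):
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)          # closer to 1.0 = lower cutoff
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)   # standard RC high-pass recurrence
        out.append(y)
        prev_x, prev_y = x, y
    return out
```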
From Phased Array Antennas to Microphones
After he had the equalizer Heil realized there was still a problem with microphones used by hams. “They're bassy, they're tubby, they have no rear rejection,” as he put it. So Heil got into the microphone business. He worked with Icom and Yaesu on the microphones for their radios, and then went on to make his own microphones for ham radio, first the HC series, and later the Gold Line.
Heil’s friend Joe Walsh was a big fan of Heil’s microphones for ham radio, so much so that he thought they should be reworked for the stage with the professional musician in mind. In 2006 Walsh asked him to adapt his Gold Line ham microphone for him. Working closely with Walsh, he came up with the Gold Line Pro for his fellow musicians. Because he had learned how to take it out of phase, it is the only microphone to have 40 dB of rear rejection.
The success of his microphone came on top of all his previous experience and knowledge in radio and music. For the microphones he drew an insight from the phased array antenna systems he used as a ham. Antenna phasing is used in ham radio for beamforming, or pointing a signal in the specific direction a person wants to transmit toward. In shortwave broadcasting, for instance, it is used to aim a signal at certain parts of the globe. Hams use it for making contacts in countries and states they want to work. Generally a phased array is a set of different antennas combined to work as one.
To beamform on the shortwave and HF ham frequencies, different lengths of coaxial cable are attached to the antennas, creating different radiation patterns depending on the selection. Another way is to hook them up to an RF matching network that provides -90° and +90° delays, with relays for the configuration of each element. This enables a station to listen to stations on the same frequency in different locations.
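As a rough illustration of what those ±90° delays buy, here is a sketch of the textbook two-element case: two antennas a quarter wavelength apart, fed 90 degrees out of phase, reinforce off one end of the array and cancel off the other. The spacing and phasing here are the classic textbook values, not a description of Heil's particular arrays.

```python
# Relative field strength of a two-element phased array. With
# quarter-wave spacing and a -90 degree feed delay, the signals add
# in-phase off the front and exactly out of phase off the back.
import cmath, math

def array_factor(theta_deg, spacing_wavelengths=0.25, phase_deg=-90):
    theta = math.radians(theta_deg)
    psi = (2 * math.pi * spacing_wavelengths * math.cos(theta)
           + math.radians(phase_deg))
    return abs(1 + cmath.exp(1j * psi))

print(array_factor(0))     # ~2.0: full reinforcement off the front
print(array_factor(180))   # ~0.0: a deep null off the back
```

The same arithmetic explains the microphone trick that follows: flip the relative phase and the peak and the null trade places.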
Heil took this knowledge of switching antennas in and out of phase to pick up particular stations, and used it in the microphone, which he realized could also be taken out of phase to give it a huge amount of rejection at the rear of the mic, something uncommon. His design proved to be as popular with musicians as it was with hams.
At the time of this writing Heil is eighty years old, and continues to get on the air every day with his various ham rigs and talk on his phased array antenna system. He was honored by the Rock and Roll Hall of Fame with a display on Heil Sound, the only display at the museum to feature an equipment producer. Heil remains a passionate organ player, and it is fitting that he can still be heard playing live every week on shortwave radio. International station WTWW out of Lebanon, Tennessee blasts his organ playing at 100,000 watts on 5085 kHz every Saturday at 8 PM Central Time.
Heil’s sound systems have rocked the world and they never would have been possible if he hadn’t been swept up into the hobby of ham radio.
Notes from a presentation to the OhKyIn Amateur Radio Society, a talk called “The Science of Audio” Bob Heil gave over Zoom on January 5, 2021.
Archived on YouTube: https://www.youtube.com/watch?v=RJiO_vFa2Tc
For more on Bob Cooper, this interview from Mother Earth News: https://www.motherearthnews.com/nature-and-environment/satellite-television-zmaz80mjzraw
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
The first time Chris Brown heard the League of Automatic Music Composers was on KPFA as he was driving to a piano-tuning appointment in 1981. The music was wild, unified as an organism, yet with divergent tentacles or strands wiggling off in multiple directions like a psychedelic octopus. It was Chris’ first exposure to networked computer music, and the wriggling tentacles had put their first hooks into his brain.
Five years later Chris was working with a group who had dubbed themselves Ubu, Incorporated, named after the 1896 play Ubu Roi by Alfred Jarry. This group had members from the LAMC and was now at work organizing experimental music concerts at galleries and community music spaces. One of the concerts the group decided to organize was called THE NETWORK MUSE – Automatic Music Band Festival. Held in an old church, it brought together four different groups working with homebrewed computer music and presented performances over a few days. One of these groups was the duo of The Hub, then consisting of just Tim Perkis and John Bischoff.
At the concert Bischoff and Perkis were using a KIM-1 as a mailbox to post data used in controlling their individual music systems. This information then became available to the other player to use however and whenever they chose as they performed their combined system. The Hub had been their solution to the often messy tangle of wires and electronics that had been common during the LAMC years. Their interface was an elegant solution and a variety of computers and their users could plug into the system.
In 1987 composers Phill Niblock and Nicolas Collins instigated the formation of an expanded ensemble when some members of The Hub were invited to New York to give a performance at two separate locations linked together by a modem. This required additional players, and they were readily pooled from the other groups who had participated in the Network Muse. The two locations to be linked were both performance spaces, Experimental Intermedia (XI), run by Niblock, and the Clocktower (now MoMA PS1). The idea was to have a trio play at each location, so that when connected via the modem they became a sextet.
Bischoff and Perkis had already started playing as a trio with Mark Trayle in a group called Zero Chat Chat in the aftermath of the Automatic Music Band Festival, so it was a simple matter to recruit Chris Brown, Phil Stone, and Scott Gresham-Lancaster, who had all played in different groups at the festival to form a second trio. This expanded sextet became the Hub. They designed three pieces to play for the network, using the modem that divided the acoustics of the sextet into two trios that were still joined via the wires of information. These pieces were “Borrowing and Stealing”, “Simple Degradation” and “Vague Notions”. They also played three other pieces that were improvised independently, local to each group.
As Kyle Gann wrote in a review of the piece for the Village Voice at the time, “Equally peculiar (for those who attended a different space each night) was the oblique correspondence of identical pieces between the Clocktower and EIF, for the two audiences did not hear the same sounds. Each group fed information into the others' performance, but basic materials differed, making each piece a kind of sonic conceptual butterfly: same body, wildly different wings.”
To many people, having a group playing in two different physical locations was just a neat technological stunt. While interesting to promoters, it wasn’t the main interest of the band, though the performance did help congeal the Hub, and the six composers continued to work together under the rubric. The idea of the modem concert continued to haunt them, and it was a spectacle they were asked to repeat in different forms. Their interest, however, wasn’t in the distances that separated them, but in the interactivity of the network itself, and in the sounds of each musician’s iconoclastic music programming as it was influenced by the musical programs of the others.
The Hub also kept up with the new computers that continued to hit the market. The next iteration of the Hub device was based on the SYM-1 single-board computer made by Synertek. The processor ran at 1 MHz, and the board had 8K of RAM and a hexadecimal keypad for programming in machine language, like the KIM. What made this an upgrade for the computer music chamber ensemble was that they built an expansion board onto the SYM that had four 6850 ACIAs (asynchronous communication interface adapters). These had connections to the 8-bit data bus, seven address lines, the system clock, and the read and write controls. This bit of hacked-together gear gave them options for connecting, interacting, and musically communicating.
The homebrewed circuits were housed inside a box of clear plastic underneath the SYM, with connectors on the outside. Three of the connectors were used to network three players with 1200 baud RS232 serial connections. The fourth connector went to an identical SYM-HUB they had built to host the other trio, the other half of the six-piece band. These two Hubs could communicate with each other quite speedily at 9600 baud, even though most modems in that era couldn’t send information that fast.
Phil Stone and Tim Perkis wrote a program in assembly language to receive and transmit messages between the players, each with their own serial port, and the Hub. The program also constantly copied stored data to the second Hub so that both memory areas had data from all the members of the group.
Stone and Perkis wrote in the comments on the program, “Devices connected to each channel make requests to write to the HUB processor table memory, and to read it. Each makes its request by sending command bytes of which the high four bits form a command field (CF) and the low four a data field (DF). In the HUB processor there are three variables kept for each channel: a current WRITE.ADDRESS (12 bits); the current READ.ADDRESS, (12 bits) and the current WRITE.DATA (8 bits). These variables for each channel can be set only by commands from that channel. All channel commands are dedicated to setting these variables, or initiating a read or write to the HUB table memory.”
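Those comments describe enough of the protocol to sketch it in modern code. The split of each command byte into a 4-bit command field and a 4-bit data field, and the three per-channel variables, come straight from the quote; the specific command codes below are hypothetical, since the comments don't list the actual opcode assignments.

```python
# A sketch of the Hub's shared-table protocol. The CF/DF byte split
# and the per-channel WRITE.ADDRESS / READ.ADDRESS / WRITE.DATA
# variables are from the quoted comments; the opcodes are made up.

TABLE_SIZE = 4096                    # 12-bit addresses span a 4K table
table = bytearray(TABLE_SIZE)        # the shared Hub table memory

class Channel:
    def __init__(self):
        self.write_address = 0       # 12 bits
        self.read_address = 0        # 12 bits
        self.write_data = 0          # 8 bits

def handle_command(ch, byte):
    cf, df = byte >> 4, byte & 0x0F  # high nibble: command, low: data
    if cf == 0x1:                    # shift DF into WRITE.ADDRESS
        ch.write_address = ((ch.write_address << 4) | df) & 0xFFF
    elif cf == 0x2:                  # shift DF into READ.ADDRESS
        ch.read_address = ((ch.read_address << 4) | df) & 0xFFF
    elif cf == 0x3:                  # shift DF into WRITE.DATA
        ch.write_data = ((ch.write_data << 4) | df) & 0xFF
    elif cf == 0x4:                  # commit WRITE.DATA to the table
        table[ch.write_address] = ch.write_data
    elif cf == 0x5:                  # read back from the table
        return table[ch.read_address]
    return None

# Example: deposit 0xA5 at table address 0x022
ch = Channel()
for b in (0x3A, 0x35,                # WRITE.DATA <- 0xA5, nibble by nibble
          0x10, 0x12, 0x12,          # WRITE.ADDRESS <- 0x022
          0x40):                     # commit
    handle_command(ch, b)
print(hex(table[0x022]))             # -> 0xa5
```

Because only the owning channel can set its own pointers, players could never clobber each other's read and write positions, only the shared data itself, which is what made the table a safe common mailbox.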
The music of the Hub is in its way just as cerebral as the means used to make it. Having assembled their gear and membership, they set about playing the endless game of composition, programming, and recombination. The group were musicians first, with technology a close secondary interest. Where most musicians work from a score, the Hub works from a spec. Individual notes are not preordained; instead, the specifications for how a piece is to be constructed are all put in the spec. The spec can be read closely along with the schematics of the Hub. Like the blueprint for a house, the spec gives an outline or structure to the game of networked music. Even though the spec is often designed by one composer, the details of how it is realized are left up to the individual programmers.
Being based in the Bay Area, having a history with the CCM and Mills College, and being part of the experimental music and arts scene meant there was a great deal of overlap between people, and a lot of potential for fruitful collaborations. Several members of the Hub knew Ramón Sender. During the Hub years Sender had gotten interested in the collaborative aspects of writing made possible by computer networks. A fruitful collaboration was cooked up between the Hub, Ramón Sender, and the poetry group on the WELL, the Whole Earth ‘Lectronic Link, one of the oldest virtual communities and a regular online hangout spot for members of the counterculture.
The first version of HubRenga was performed over the air on KPFA’s Music Special radio show hosted by Charles Amirkhanian on September 7, 1989. In this transmission the Hub was joined by novelist and musician Ramón Sender and poets from another network, the poetry conference of The WELL (Whole Earth ‘Lectronic Link), a pioneering electronic community that operated in the Bay Area to facilitate communication between people interested in arts and alternative lifestyles. The poetry conference was a forum about poetry which subscribers to The WELL could join to exchange ideas and work collaboratively. Sender was one of the hosts of the forum for a number of years.
For the HubRenga piece, the computer network of the Hub was connected to the network of the WELL. For this performance the Japanese poetry game called renga was used as a format for the textual aspect of the work. Renga is a genre of collaborative Japanese poetry in which alternating stanzas are linked in succession by multiple poets. Renga is typically composed live, when a group of poets are gathered together. For HubRenga, Ramón acted as moderator inside the KPFA studio, browsing the poetic submissions as they came into the poetry conference forum on the WELL and reading them aloud as part of the music, accompanied by an unnamed female reader.
The WELL poetry group had been working with the Hub, through Sender, for a few months before the big date at KPFA. In keeping with traditional renga practices, the poets worked around a theme. In departing from those practices, they used a non-traditional theme. Usually the themes are based on the season when the poem is performed: summer, spring, autumn, winter. In this case the poets chose Earth as the theme. The poets came up with a common list of set words to use throughout the performance, and this was given to the composer-programmers. They wrote programs that used these words as triggers. When a Hub member received a text from the WELL on his computer, his program filtered it for specific keywords, determined in advance from the list, to trigger specific musical responses.
The keywords chosen by the Hub as triggers were:
embrace echo twist rumble keystone whisper charm magic worth Kaiser schlep habit mirth swap split join plus minus grace change grope skip virtuoso root bind zing wow earth intimidate outside phrase honor silt dust scan coffee vertigo online transfer hold message quote shimmer swell ricochet pour ripple rebound duck dink scintillate old retreat non-conformist flower sky cage synthesis silence crump trump immediate smack blink
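The filtering step is simple to picture in code. Here is a minimal sketch, not the Hub's actual software, that scans incoming text for a few of the trigger words from the list above and fires a stand-in musical response for each hit:

```python
# A toy version of the HubRenga keyword filter: scan each incoming
# WELL posting for trigger words and fire a musical response per hit.

TRIGGERS = {"embrace", "echo", "twist", "rumble", "earth", "shimmer"}

def play_response(word):
    # stand-in for whatever musical event a player's program mapped
    print(f"trigger fired: {word}")

def filter_posting(text):
    for raw in text.lower().split():
        token = raw.strip(".,;:!?\"'()")   # drop surrounding punctuation
        if token in TRIGGERS:
            play_response(token)

filter_posting("The earth began to twist, and a shimmer of echo followed")
```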
This was the kind of interactive system the Hub thrived on, and HubRenga was performed again in Los Angeles, along with Bonnie Barnett, an original member of Pauline Oliveros’ Women’s Ensemble, who declaimed the power words. In this iteration Ramón Sender and members of the WELL Poetry Conference participated via modem from the Bay Area.
The Hub Goes MIDI
In 1990 the Hub brought their wrecking ball to the world of MIDI music, a technical standard and communications protocol that was then only nine years old. Scott Gresham-Lancaster had been tasked with exploring its possibilities for the group. MIDI, which stands for Musical Instrument Digital Interface, allows for a plethora of electronic instruments, synthesizers, computers and other audio devices to be connected together to play, record and edit music.
A single MIDI link on a single cable can carry up to sixteen discrete channels of information, and these can all be sent to different instruments or devices, say a synth, drum machine, or computer. The information carried on one of those channels includes musical instructions for pitch, velocity or attack, notation, vibrato, panning within the stereo field, and clock signals that allow one device to control the tempo of the other devices in the MIDI network. As a musician plays something using MIDI, it all gets converted to information that is commonly used to control other sound-producing modules. For instance, a person playing a synthesizer can trigger an external drum machine, sequencer, or other digital sound module. MIDI is also used for recording and writing music. A player can hook a MIDI-capable instrument up to a computer, which then records the data. This information can be assigned different voices in a digital audio workstation, modified, and edited.
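Concretely, what travels down the cable is a stream of small binary messages. A note-on message, for example, is a status byte carrying the channel number in its low four bits, followed by two 7-bit data bytes for pitch and velocity; this sketch just builds those three bytes:

```python
# Build the three bytes of a standard MIDI note-on message.

def note_on(channel, pitch, velocity):
    assert 0 <= channel < 16          # sixteen channels per link
    return bytes([0x90 | channel,     # status: note-on + channel number
                  pitch & 0x7F,       # data bytes are 7-bit values
                  velocity & 0x7F])

# Middle C at moderate velocity on channel 1 (channels are
# 0-indexed on the wire):
print(note_on(0, 60, 100).hex())      # -> '903c64'
```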
This typical way of using MIDI, one musician controlling an array of other instruments from one station, held no interest or appeal for members of the Hub. They wanted to break MIDI and use it for their own purposes. Scott beta-tested the then-new Opcode Studio 5 MIDI interface. It was a single-box unit that functioned as a computer interface and MIDI patchbay with 15 inputs and outputs, processor, and synchronizer. Scott played around with the hardware and learned how to program it so it could work as a MIDI version of their namesake Hub. The new protocol would give them a faster messaging system that was also more flexible than their homebrewed system. Another advantage was that by using a standardized platform they would be able to share their working methods with other musicians in a way that was more accessible and closer to open source.
Yet the switch to MIDI meant a drastic change from the system they had been using. In the world of electronic music a new system means a new sound, and they would either have to alter their existing pieces to fit with MIDI or start writing brand new pieces. It also changed the operational mode they had become accustomed to. Instead of the common memory shared between members, where data in any customized format could be deposited, the MIDI-HUB worked as a switchboard. Each player’s musical data was now tagged in a way that identified them.
“No longer was it up to each musician to specifically look at information from other players, but instead information would arrive in each player's MIDI input queue unrequested. Information about current states had to be requested from players, rather than being held on a machine that always contained the latest information. This networking system was more private, enabling person-to-person messaging, but making broadcasting more problematic. To send messages to everyone, a player would need to send the same message out individually addressed to each player. If a player failed to handle the message sent, its information was gone forever. And messages were sent more quickly under the MIDI-HUB, leading to an intensity of data traffic that was new in the music. The MIDI-HUB pieces reflected the nature of this new aspect of the band's network instrumentation.”
Waxlips was the first piece written for the MIDI-HUB. It was designed by Tim Perkis as a simple way of exploring the architecture of the network, and it ended up becoming a “tune up” piece for the ensemble in their performances and tours, a way to test the system and get it up to speed before tackling other pieces from their repertoire. It was written to be simple and with minimal musical structure. Each player sends and receives requests to play one note. Once a request comes in and is received, the note message gets transformed in a fixed way and is sent on to someone else. The message can be modified by any musical rule. The only limiting factor was that in the various sections of the piece, specified with signals from a lead player, the same rule must be followed, so a new-message-in gets followed by the same new-message-out. The lead player “jump-starts the process by spraying the network with a burst of requests.”
Tim Perkis writes in the liner notes to the Wreckin’ Ball CD that contains recordings of Waxlips, “The network action had an unexpected living and liquid behavior: the number of possible interactions is astronomical in scale, and the evolution of the network is always different, sometimes terminating in complex (chaotic) states, including near repetitions, sometimes ending in simple loops, repeated notes, or just dying out altogether. In initially trying to get the piece going, the main problem was one of plugging leaks: if one player missed some note requests and didn't send anything when he should, the notes would all trickle out. Different rule sets seem to have different degrees of ‘leakiness’, due to imperfect behavior of the network, and as a lead player I would occasionally double up -- sending out two requests for every one received -- to revitalize a tired net."
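Perkis's description maps neatly onto a message-passing loop. Here is a toy simulation, under assumed details (the transformation rule, miss probability, and note ranges are invented for illustration), showing how note requests circulate and how a "leaky" net can die out:

```python
# A toy simulation of the Waxlips idea: note requests circulate through
# a network of players; each player applies a fixed transformation and
# forwards the note to a peer, occasionally "missing" a request.
import random

NUM_PLAYERS = 6

def transform(note):
    # one hypothetical fixed rule: up a fifth, wrapped into a 4-octave range
    return 36 + (note + 7 - 36) % 48

def simulate(steps=20):
    # the lead player "jump-starts the process" with a burst of requests
    queue = [(0, random.randint(36, 84)) for _ in range(4)]
    for _ in range(steps):
        if not queue:
            print("net died out")        # the "leakiness" Perkis describes
            return
        player, note = queue.pop(0)
        print(f"player {player} plays note {note}")
        if random.random() < 0.1:        # a missed request leaks a note
            continue
        queue.append((random.randrange(NUM_PLAYERS), transform(note)))

simulate()
```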
One of the ways the MIDI-Hub enabled the ensemble to collaborate was by receiving the output data from another musician’s setup. For Alvin Curran’s Electric Rags III composition, Curran improvised on his Yamaha Disklavier electric piano. The MIDI output of his improvisation was sent through the Hub system, and the ensemble players used it in whatever ways they wished.
They used a similar setup again for Scott Gresham-Lancaster's Vex, a take on Erik Satie's proto-minimalist and extremely long piano piece Vexations. For this version they took Satie’s score and fed it into the HUB for a synchronized performance of the piece by Alvin Curran and the Rova Saxophone Quartet. As each note arrived in their system, the Hub used it to create an electronic embellishment for the acoustic players they were working with.
Curran was a frequent collaborator, and they worked with him on a studio version of his Erat Verbum (1993 iteration). This was a six-part radio composition made for the Studio Akustischer Kunst of the WDR, and they worked with him on the Delta section. The piece utilizes recordings of John Cage’s famous Norton Lectures, also known as I-VI, that were fed into the HUB. The members of the group perused these and retranslated them instantly into Morse code. Curran then live-mixed the dots and dashes into a stunning fantasia.
The stamp John Cage left across various musical subcultures and musicians was also evident in the work of The Hub. His spirit hovered in the background as they went about their work.
“One of the strands in the musical philosophy of The Hub was the interest in defining musical processes that generated, rather than absolutely controlled, the details of a musical composition. An acknowledged influence on this interest was the work of John Cage, and it seemed a natural extension to us to try to automate the indeterminate processes used in his work. Many of these processes are extremely time-consuming and tedious; and given that Cage was himself involved for a long time in live electronic performance, we felt a real-time realization of these processes during the progress of a performance was not only feasible, but aesthetically implied.”
In 1995 they got the opportunity to do a live realization of Cage’s Variations II at Mills College for a happening put together by David Bernstein called “Here Comes Everybody: A Conference on the Music, Writing, and Art of John Cage.” As part of the activities, one evening of concerts was devoted to Cage’s electronic music, and The Hub performed their version of his iconic composition.
Ever since the Hub had played their XI/Clocktower premiere in NYC, in two separate locations connected by modem over the telephone wires, there had been pressure on the group from the many techies interested in their music to switch from their serial communications network to ethernet. There had also been pressure on them to do further concerts where the musicians played in different locations but connected via a network. In a way they had done this with the HubRenga concerts, where the poets connected to the Hub via the WELL. Yet they hadn’t played together as a spatially separated group since the first concert.
In a way this was something expected of them, even if they really preferred to be in each other’s company while playing. The public fascination with the idea of musicians playing together though separated by vast distances in physical space remained constant, even though they had never repeated the experiment or incorporated it as a regular part of their practice as a network ensemble.
They preferred the local area network of being in each other’s company as they played. They sought a balance between the spontaneous interactions of the electronic systems they set up and the reciprocal feedback between themselves as humans making music together, an inherently social activity.
Chris Brown writes, “Since that event we have continued to receive requests for concerts to be performed remotely, that is, without all of us being physically in the same space, but have always declined, in part because we really prefer to be in the space where we can hear each other's sound directly and to see each other and communicate live. The Hub is a band of composers who use computers in their live electronic music, and our practice has been to create pieces that involve sharing data in specific ways that shape the sound and structure of each piece. We are all programmers, and instrument builders in the sense that we take the hardware and software tools available to us and reshape them to realize unconventional musical ideas.”
Eventually, however, The Hub succumbed to the pressure to produce another concert with the members separated in different locations. “Points of Presence” was produced in 1997 by the Institute for Studies in the Arts (ISA) at Arizona State University (ASU), and linked members of The Hub at Mills College, the California Institute of the Arts, and ASU over the internet. The piece nearly spelled the end of the Hub after a decade of cooperative engagement in network music composition.
“Now in 1997 new tools have become available that allow us to reapproach the remote music idea - telharmonium, points-of-presence - in a new way. Personal computers are now fast enough to produce high-quality electronic sound in real-time, allowing instrument-builders like Mike Berry to choose a purely software environment to produce home-made musical instruments. His Grainwave software, a shareware application for MacOS PowerPCs, was adopted by the group for this piece because it allows each of us to design our own sounds, and these sounds/instruments can be installed at any physical location that has a PC on which they can play - we can be independent of the hardware that produces our music, our instruments have become data which can be replicated easily in any place.
At the same time we, along with the rest of our culture, have been spending more and more time in our lives and our work communicating and collaborating on the internet. Why should we not extend our musical practice into this domain? Can we retain here the ability to define our own musical worlds, avoiding the commercial, prefab, and controlling musical aesthetics of the technological culture?”
Yet the performance itself was plagued by technical failures. They ran into many issues with the software and couldn’t debug them easily on the fly with a room full of people expecting to hear a concert. Because they weren’t in the same place, they had to rely on internet chat and telephone calls to try and fix the issues. And with the different parts unable to work together as a network, the music was never able to lift off the ground. They were only able to play for ten minutes as a full network, and they had to supply those who came to hear them with clumsy explanations of what they were trying to do.
“The technology had defeated the music. And after the concert, one by one, the Hub members turned in their resignations from the band.”
It wasn’t to be the very end of the band. Having been built as an ad hoc network, they eventually found themselves reassembled again, ready for action, and all of the members of the Hub have lively musical activities they are involved with outside of the network, bringing new information and new ideas to their working methods.
The League of Automatic Music Composers: 1978-1983, New World Records No. 80671, released 2007. Collection compiled by Jon Leidecker (Wobbly).
The Hub: Boundary Layer. Tzadik. 8050-3. Three CD set with extensive liner notes and CD-Rom text files.
At a Distance: Precursors to Art and Activism on the Internet. Edited by Annmarie Chandler and Norie Neumark. MIT Press. Cambridge, Massachusetts. 2005.
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
As the musical computers at Bell Labs in New Jersey were winding down in the late 70s, people in the California homebrew microcomputer scene were just starting to get wound up. DIY computers had arrived, and a group of electronic music experimentalists in the San Francisco Bay Area were writing programs, networking them together, and seeing how they sounded in various configurations. The group was known as the League of Automatic Music Composers (LAMC), active from 1977 to 1983 before being reassembled into another musical configuration known as The Hub. The LAMC can rightly be considered the first computer music group, and the first network music group.
The League had its beginnings in the CCM during the time when Robert Ashley was the director. It was also the time when the first fruits of Silicon Valley were beginning to ripen and could be plucked off the shelf by hackers and hobbyists. At the CCM these hackers and hobbyists were also experimental musicians. Because the CCM allowed open access to its studio, it drew a large crowd of people from outside strictly academic art music through its doors, where they were all able to freely mix and mingle. Rock musicians met hackers, and hackers met free improvisers and jazz heads, who all met those studying the radical end of western classical music as it had evolved in the 20th century.
One of the mottos of the CCM was “if you’re not weird, get out!” It became a home for an assortment of musically inclined misfits, a place where they could fit in. Part of this already strange and heady brew was the homebrew tradition, which was very active at the Center due in part to its proximity to new integrated circuits being produced in Silicon Valley, in part due to its history as the place where the Buchla Box had been invented, and its association with the original composers who had formed SFTMC. Many of those luminaries, such as David Tudor, came to lecture and give concerts at CCM. The students had taken to the idea that building and designing circuits was part and parcel of the compositional process. The schematic diagram was seen as directly related to the graphic scores that had been innovated by the likes of John Cage, Morton Feldman and Karlheinz Stockhausen.
David Tudor and Gordon Mumma had already paved the way in their creation of electronic musical systems that once designed and built could be turned on to produce the music. These cybernetic systems were often autonomous and required little intervention from the composer as player after the system had been set up. Tudor had spent time at CCM as a composer in residence and his influence permeated the atmosphere there, particularly his idea that the job of the composer was to listen rather than to dogmatically determine every last note of a piece of music. This emphasis on listening is a theme that runs through contemporary musical practice and can be traced to this rich heritage left to us by Cage, Oliveros, and Tudor.
In Tudor’s case he emphasized the setting up of autonomous, or automatic networks of electronics; systems that were made up of phase shifters, attenuators, amplifiers, and filters such as in his Untitled piece from 1972. The aesthetic beauty of such a piece lies in the enjoyment of listening deeply to the complex interactions of the system. This system music presents a mirror to other types of systems: human social systems, the diverse ecological systems of the natural world, complex electronic communication systems, and the way the human body is a system of organs, cells, tissues, nerves, and parts all moving together, sometimes in harmony, sometimes creating dissonant tones and clashing with noise.
By the mid-seventies the first commercial microcomputers had been made available to the average consumer. They were called micro at this time to differentiate them from their mainframe predecessors that took up entire rooms in the halls of industry and the academy. This availability meant that anyone willing to fork over the $250 one of these machines cost could have their own computer. Free from the oversight of the folks in charge of the institutional mainframes, enthusiasts were able to dabble. These microcomputers were integrated into the circuit of California’s music scene.
Jim Horton was an early adopter, and he was quick to get his hands on one of these computers. It was 1976 and the contraption was the KIM-1. This was a single board device and its name stood for how it worked: Keyboard Input Monitor. Jim’s love of KIM soon spread out like a virus around the community and many other people started saving up their dollars to get these machines.
The KIM-1 itself consisted of just a single printed circuit board. All the components were on one side and it had a whopping memory of 1K of RAM. The unit had a hexadecimal keypad used for programming. The programs themselves were saved to audiocassette. An add-on keyboard could be attached and up to 4000 characters displayed on a television or monitor. As more people bought the machines, they started to share the programs they had written for them and helped each other troubleshoot the persnickety machine, and so a community of devotees grew around the devices.
The KIM-1 wasn’t Horton’s first experience working with new technology. As a musician he was trained as a flutist, but had also gotten in on the game of analog synthesis. He had gained a reputation for building very large modular patches that had the ability to self-modify. He would get his friends to bring along their synths and he would connect his synth to theirs, building networks of synthesizers. After building a huge and complex patch he would let the system play itself in long eight-hour concerts that lasted all night. These concerts were similar to the all-night concerts Terry Riley gave, and a precursor to the sleep concerts later given by electronic musician Robert Rich.
Jim Horton was the quintessential starving artist and he did his work for the glory not the gold. He had saved his meager welfare checks, and instead of buying food, literally starved himself for a synthesizer. He sacrificed to acquire the equipment necessary for realizing his soundworld. Forgoing creature comforts for greater achievement, he was known for plugging straight in to whatever work was at hand, and just getting on with things. One of his bandmates, Tim Perkis, recalls that meeting Jim was a liberating experience. He said, “Horton would show up at a gig with his tangle of loose wires and electronic components in a dresser drawer he would temporarily press into service. With my head full of hesitations born of half-digested conventional wisdom about audio circuitry, it was mind-blowing to see someone just go directly to the heart of the matter, twisting bare wires together, connecting anything to anything, and doing the deeply conceptual musical work which drove him without waiting for the right equipment to appear. He lived in a poverty that never seemed like a limitation to him, and worked with whatever means he had at hand.”
In 1977 it was Jim Horton who first proposed the idea of making a microcomputer network band. It happened in an organic way. There was already a group getting together on a regular basis to share the music they were making on their KIM computers. Some of this music was also made with analog circuits and other instruments. At one of these gatherings Horton shared his idea of banding together to create a “silicon orchestra”. He had already demonstrated that synthesizers could be networked together into self-generative, ever shifting systems of musical patches. It was a natural next step to network the computers and other circuits they were building into their own system and listen to the experimental results.
Later in the year at Mills College Horton worked with Rich Gold, one of the founding members behind the LAMC. The pair put on a concert where the two of them linked their KIMs together. For the performance Horton ran an algorithmic music program based on the harmonic theories of eighteenth-century mathematician Leonhard Euler. Rich Gold had written an artificial language program, and these two programs interacted with each other for the show. Jim was also working with other future band member John Bischoff at the time, and one of the things they had figured out was a piece where tones from John’s KIM would make Jim’s KIM transpose its melodic activity according to a set key note. Then in 1978 John, Jim, and Rich all joined together as a trio to give a performance at an artist space in Berkeley.
Next they were joined by composer David Behrman, who had come to California to co-direct the CCM with Robert Ashley, his friend and fellow member of the Sonic Arts Union. Rich Gold and Jim Horton were studying with Behrman at the CCM. It was around this time that Behrman recorded his landmark album On the Other Ocean. This album is equally at home in the related but differing milieus of New Music, Ambient, and Minimalism, displaying sustained harmonies between electronic and acoustic sounds that slowly dance and revolve around each other until the difference between them blurs. The two pieces on the album feature the KIM-1 microcomputer, with flute and bassoon on the title piece, and cello and the KIM-1 on the flip side, Figure in a Clearing. In these pieces the KIM-1 “listens” to the live performers, and accompanies or marks points when particular pitches are played. When Behrman joined the LAMC this principle became a recurring theme in their music.
Behrman talks of his time at Mills College, “Some of the students began bringing computers to the Mills Center for Contemporary Music; on the advice of a wise Bay Area artist, Jim Horton, Paul DeMarinis and I bought KIM-1 microcomputers. KIM-1 weighed about 10 ounces and cost around 200 dollars. Around that time I'd been building switching circuits that were placed between primitive pitch-sensors and homemade synthesizers consisting mostly of triangle-wave generators. The switching circuits took a long time to solder together and could only do one thing. It seemed that this new device called the microcomputer could simulate one of these switching networks for a while and then change, whenever you wanted, to some other one. It was fun connecting its port lines to homemade synthesizers, and also to sensors, and writing very simple software to link sensor activity with synthesizer sounds. There was something fascinating about the design of software, even though on the KIM-1 it had to be done in machine language, by pressing keys on a little hexadecimal pad. This was the dawn of 'interactivity' in California, the moment when Jobs and Wozniak were introducing the Apple computer. There was a Bay Area composers group of that era, the Microcomputer Network Band, which liked to do concerts in which the participants would wire together a group of computers on a table, turn them all on, and stand back and watch to see what would happen.”
In November of 1978, now a quartet, the League of Automatic Music Composers gave its first performance using the name. Two years later Rich Gold and David Behrman left the group to work on other projects. That’s when Tim Perkis swooped in to fill the spots.
Tim was interested in music made with alternate tuning systems from various parts of the globe, even playing in a local gamelan group. He was also a Just Intonation fanatic who happened to be skilled with electronics, having a graduate degree in video from the California College of Arts and Crafts. If building your own homebrewed electronic instruments is a new kind of folk craft, then Perkis excelled at this craft work, programming his circuits to play in the various tuning systems he collected in his research.
Now in trio form, with a cadre of Bay Area musicians and improvisers joining the festivities on occasion at various performances, they played together for four more years in this configuration. They had a habit of getting together on alternate Sundays to play at the Finnish Hall in Berkeley, and people were welcome to come in and take in the scene.
Perkis writes, “Audience members could come and go as they wished, ask questions, or just sit and listen. This was a community event of sorts as other composers would show up and play or share electronic circuits they had designed and built. An interest in electronic instrument building of all kinds seemed to be ‘in the air.’ The Finnish Hall events made for quite a Berkeley scene as computer-generated sonic landscapes mixed with the sounds of folk dancing troupes rehearsing upstairs and the occasional Communist Party meeting in the back room of the venerable old building.”
During their time the LAMC distilled the spirit of the Bay Area and infused its essence into their playful work practice and the music that came out of their curious explorations. Part band and part collective, they blended the communal zeitgeist of the day with the fermenting intellectual and cultural atmosphere at work in such staples as the Whole Earth Catalog, which promoted the use of personal computers alongside solar cells and sprout-growing kits as part of the wave of interest in self-sufficiency and appropriate technology prevalent during a decade when the realities of hard limits were entering people’s consciousness. The members of the League had taken mega doses of the do-it-yourself ethos with regard to technical innovations. Everything they used was homebrewed or built from kits and modular components. All of it was on the table and subject to being taken apart, tinkered with, and put to use in experiments. Then they would put it all back together again to see how it worked in a variety of combinations.
The League created networks of microcomputers and circuits with an ear towards making one large interactive musical instrument out of the member’s individual computers and components. One came from many. The members of the collective were all interested in computers and programming them to make music. They learned that when they networked their machines together and sent instructions to each other, the amassed circuits of silicon and solder were capable of eliciting what they called new “musical artificial intelligences.”
The sound of the League’s music is like a noisy arcade that has been rewired and rerouted in an ad hoc fashion. Amidst the distortion, the randomly generated tones, and the disorienting arpeggios produced by the circuits and programs, something beautiful occasionally emerges, but the sounds are always interesting and stimulating to the intellect. It’s often messy and unpredictable, but what comes out of the apparent chaos has the feel of sentience and is full of life.
Without the same kind of tools being used by Max Mathews and Laurie Spiegel and others at the big institutions, it should come as no surprise that the sounds the League conjured up had more in common with 8-bit gaming soundtracks, albeit highly dosed and on a recombinant and aleatory West Coast trip, than with the kind of sounds the bigger mainframe computers were making. It was done by a group of individuals dedicated to the notion that computers and people could create their own independent networks, built at home from the circuit board up. Their music has as much in common with the lo-fi aesthetics of garage rock as it does with the pristine waveforms built from code at Bell Labs. The limitations in computer memory, the limits of space on the circuit board, and the haphazard way it all got connected to other components gave their music the flavor of strong home-brewed hooch. The sounds get the job done, and in their miasmic chaos, what comes out of the mess of wires is sublime.
The LAMC embraced their role as musical bricoleurs. According to Perkis, “We felt our work was more akin to that of our mentors and friends building gamelans (Lou Harrison and Bill Colvig), mechanical or electro-mechanical musical instruments (Tom Nunn, Chris Brown), or incorporating hacked versions of electrical and new electronic musical toys into their work (Paul DeMarinis, Laetitia Sonami), than to the contemporary institutional computer music. There was always the sense that the music arose out of the material situation, out of idiosyncratic individual players and the anarchic, ad-hoc arrangements they made.” Theirs was a mechanical musical conversation that ranged from noisy arguments to anarchic harmonies.
Their music was also steeped in the traditions of free improvisation that had developed on the West Coast. When they set up their systems, at the Finnish Hall or in the living room of a bandmate, they didn’t set about practicing a certain song or pre-composed piece of music; theirs was rather the ever-evolving continual music of the patch in progress, the program in process, the new circuit being added to the mix, or the old circuit being mixed in a new way. Each member had a station of their own equipment, running their own programs, making their own sounds and contributing them to the spontaneous mix. The stations were set up in such a way that the microcomputers could send and receive information from each other, hence the description of them as a network band. The novel interactions of each new setup became the piece. It was composed, but it was spontaneous. With each new system setup the result was automatic.
So, as with David Tudor and Pauline Oliveros, the main activity of the musician was listening. Making adjustments, tinkering with the system, then listening to what happened; listening again and making new adjustments; tinkering some more and listening again in an endless cycle of discovery and surprise. When they noticed a setup that elicited sounds of beauty, or a sublime alien strangeness, they took notes so they could try to realize that same musical state again. It was true experimental music, made in a laboratory they put together themselves.
In 1983 all the tinkering and hauling of gear was beginning to take a toll on Jim Horton. He had been suffering from rheumatoid arthritis for some time and, in his way, endured the pain with stoic fortitude, pushing it to one side to continue living his Spartan artistic lifestyle. But it became too much. Eventually the human power supply running the operation had to be unplugged. The LAMC slowed down and then decided to disband.
Yet the end of the LAMC wasn’t the end of what Jim and the others had started, but rather a new beginning. Tim Perkis and John Bischoff went on to try to bring a touch of order to the chaotic mess of wires, gadgets, and connections that had become their musical practice. They envisioned a standard interface through which they could more easily network their computers together. This they achieved, and it became the seed for Perkis and Bischoff’s next project, The Hub.
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
Mumma’s early encounters with John Cage and David Tudor, and his work with them in the ONCE Festival and other situations, primed him for his eventual work with the Merce Cunningham Dance Company.
Merce Cunningham was one of the great American dance artists of the 20th century. Cunningham was born in Centralia, Washington in 1919. He started off learning tap dancing from a local teacher, where his ear for rhythm and sense of timing were honed from an early age. He later attended the Cornish School in Seattle from 1937 to study acting and mime, but didn’t take to it. He loved the way dance could be ambiguous while also allowing for full expression of movement. Martha Graham saw him dance during this period and invited him to join her company. It was through Graham that Cunningham’s life intersected with Cage’s in something of a chance operation. Graham had needed a musical accompanist for her dancers. One of her pupils, Bonnie Bird, recommended composer Lou Harrison, who declined but suggested in his place the young Cage. Cunningham and Cage met in 1938 and later became romantically involved, remaining life partners until Cage’s death in 1992.
Cunningham sometimes played in Cage’s percussion group at the time, and they had become quick friends. Over the subsequent years Merce loved to talk to John about ideas. As each of their personal situations evolved in art and life, Merce finally took the step of establishing his own dance company in 1953, and Cage came along for the ride as the company’s music director. Cunningham’s company had many opportunities as it grew over the years. Cage’s own career kept him busier and busier in the late 60s and throughout the 70s, and as each pursued his vision, other musicians needed to step into the role of music director when Cage wasn’t available. Mumma and Behrman, among others, were natural choices, due to their friendship and affinity. Mumma states it was never very clear how he ended up working with the Cunningham Dance Company; it was something he just drifted into through these associations.
In the 60s and 70s Cunningham’s troupe made increasing use of electronics, and this was an area where Mumma’s expertise could shine. He was a perfect fit: primed by his dedicated work as a creative composer, a cunning electronic technician, and someone for whom the collaborative mode was second nature. In his work with Cunningham’s troupe he got a chance to use all of these aspects of his character and put them to work on tours that tested the endurance and dedication of everyone involved. The programs often involved collaborative music making and separate choreography, the latter determined by chance operations. The musicians were free to draw from their personal repertoires and combine them with original material.
The first major piece Mumma wrote for Cunningham’s company was Mesa in 1966, for the dance Place. He was already working on something with David Tudor, who worked regularly with the company, when the commission came about. Instead of starting over he decided to alter the work in progress to accommodate it. Tudor had gotten into playing the Bandoneon, a relative of the accordion and squeezebox that had become popular in Argentina. It was the perfect instrument for Mesa because of its wide frequency and dynamic range. The Bandoneon can also produce long sustained drones and sounds, just what Mumma needed for the monolith that was taking shape.
Like the geological feature after which it is named, Mesa is a tectonic slab of music sustained at one level of thrust with occasional interruptions. Mumma had thought of using tape for the piece, but the dynamic range he wanted couldn’t be contained on tape. That was one concept for the piece. The other was his desire to use “an inharmonic frequency spectrum with extremes of sound density.” In the performance space the placement of different portions of the sound in different loudspeakers creates a spatial diffusion. The final mixing of the sound happens in the ears of the listener.
To further extend the dynamic range of Tudor’s instrument and create the timbres he imagined, Mumma needed to design a circuit. The piece represented a creative problem and a technical challenge. His electronics needed to translate frequencies, equalize the signal, and use logic circuitry in tricky configurations to control musical continuity. It’s another composition where the circuit diagram and instructions, more than notated music, constitute the score.
Mumma developed Voltage Controlled Attenuators (VCAs) in collaboration with Dr. William Ribbens in Ann Arbor. These extended the range while also including envelope controls. Ribbens is a Professor Emeritus of Electrical Engineering and Computer Science, and of Aerospace Engineering, at the University of Michigan.
In performance six microphones are attached to the Bandoneon, three on each side, each one sensitive to a different frequency band. As a way of “thickening the plot,” and for other reasons, Mumma fed one mic from each side into the other side of the circuit. Six channels of sound from one instrument source are processed to create this massive place.
Using a logic circuit Mumma was able to route control signals and program signals to different channels during performance. He used a frequency shifter with equalization that processed parts of the sound, determined by internal control signals or by Tudor’s playing of the Bandoneon. The logic circuit itself determined the source and nature of the control signal.
Mumma used a multiplier to take portions of the spectrum and transform them by whole integers, further equalizing the sounds. Phase and amplitude modulators also work on portions of the sound, gating parts of the spectrum transfer with the output from the multiplier. Further gates, formant modulators, pass-band filters and other baroque electrical wizardry were also built into the circuit score of Mesa. In creating the piece he was setting up a cybersonic system.
The VCA also included delays that further shaped the envelope of the program signal. Mumma wanted to use very specific delays that were not possible either through electronic manipulation or from a mechanical source such as a tape delay. Mumma writes, “the solution to this problem is inherent in the concept of MESA itself, since at this point in the system it is the envelope of the otherwise sustained sounds which is to be shaped. This is achieved by subjecting the VCA control signals to frequency-sensitive thermal-delay circuitry. The wide dynamic range of the VCA is due to special bias procedures.”
Every control signal for sound modification first comes from the Bandoneon. “Because the control signals are automatically derived from the sound materials themselves, I call the process, and the music, ‘cybersonic.’”
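Mumma’s circuitry was analog and far more intricate than anything shown here, but the core cybersonic idea, a control signal derived from the very sound it then shapes, can be sketched digitally. The Python fragment below is an illustration under assumptions, not a reconstruction of Mesa’s circuit: an envelope follower stands in for the control-derivation stage, and the attack and release times are invented.

```python
import numpy as np

def envelope_follower(x, sr, attack_ms=5.0, release_ms=120.0):
    """Track a signal's amplitude envelope with a one-pole rectifier-smoother."""
    a = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    r = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(x)):
        coeff = a if s > level else r   # fast to rise, slow to fall
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

sr = 8000
t = np.arange(sr) / sr
# Stand-in for the Bandoneon: a sustained tone that swells in.
source = np.sin(2 * np.pi * 220 * t) * np.minimum(4 * t, 1.0)

# The control voltage is derived from the sound itself...
control = envelope_follower(source, sr)
# ...and applied back to a transformed copy of that sound: a VCA in miniature.
processed = np.sin(2 * np.pi * 660 * t)
output = processed * control
```

The point of the sketch is the feedback of material on itself: nothing external tells the attenuator what to do; the instrument’s own envelope is the instruction.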
What Mumma created in Mesa is a situation where the Bandoneonist can play a duet with a piece of electronic circuitry. In performance a third person, most often Mumma himself, tweaks the circuit live to override parts of the internal logic with an artist’s intuition.
One of the pieces by Mumma used by Cunningham in a variety of settings, including TV Rerun, was his Telepos (1972). For this he made belts to be worn by the dancers that contained small accelerometers, devices that measure vibrations and accelerations in motion. The belts were also equipped with voltage-controlled oscillators and a miniature UHF transmitter. Inspired by telemetry, the transmission of device data read remotely at a different point of reception, the dancers made music by their movements “in a process similar to that encountered in space travel, undersea, or biomedical research.”
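How exactly the belt circuits mapped motion to pitch isn’t specified in the sources, but the telemetry idea is simple to sketch: an accelerometer reading sets the frequency of an oscillator. In this Python illustration the base frequency, span, and sensor range are all hypothetical.

```python
import math

def accel_to_freq(ax, ay, az, base_hz=110.0, span_hz=440.0, max_g=4.0):
    """Map an accelerometer reading (in g) to an oscillator frequency.

    A dancer standing still reads about 1 g (gravity alone); vigorous
    movement pushes the magnitude higher and the oscillator tracks it.
    All of the ranges here are invented for illustration.
    """
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    norm = min(magnitude / max_g, 1.0)   # clamp to the sensor's range
    return base_hz + norm * span_hz

print(accel_to_freq(0.0, 0.0, 1.0))  # at rest: a low tone
print(accel_to_freq(2.0, 1.5, 2.5))  # mid-leap: a higher tone
```

A still stage hums; a leaping one climbs in pitch. The dance itself becomes the score.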
Mumma worked with the group for seven full seasons and also collaborated on works with individuals from its circle. He continued to work with Cage as well. One such instance was the creation of a soundtrack to an electronic game of chess.
Reunion was a big piece conceived by John Cage as a chess game to be played between himself and Marcel Duchamp, with a second match against Teeny Duchamp. It had a collaborative musical element performed by Gordon Mumma, David Behrman, and David Tudor, routed through an electronic chessboard designed by Lowell Cross. The chessboard controlled certain aspects of the live electronic music.
Cage had first met Marcel in the early 1940’s when they were both in New York, but the meeting had been awkward due to a blowup between Cage and Peggy Guggenheim, who had first introduced them. At that time Cage and his then-wife Xenia were being put up by Peggy after they had moved from Chicago. Cage took a gig at the Museum of Modern Art while he also had one at Peggy’s new art gallery, and she felt snubbed, believing the museum show stole the spotlight from her own presentation of his music in the city. At the time he was so in awe of Duchamp that he didn’t want to disturb him, but simply enjoyed being in his presence.
In the winter of 1965-66 Cage’s circle and Duchamp’s overlapped again and they found themselves at the same parties. Cage had long been an admirer of Duchamp and they shared a number of sensibilities, one appreciating readymade objects, the other the readymade music of sounds occurring everywhere in life. He wanted to reacquaint himself with Duchamp but wasn’t sure how to go about it, until he asked Teeny if she thought Duchamp would tutor him in chess. She said to ask the man himself, and when Cage worked up the gumption to do so, Duchamp said yes.
He started to meet with Duchamp once a week to learn the game, and other social visits followed, including vacationing with the couple in Spain. Though he had used chess as a ruse to get to know the artist he admired, Cage was fascinated with the game and became a serious player. More often than not he lost to his teacher, who had played chess for decades.
In 1968 the idea for Reunion was hatched. According to Mumma it “descended upon us at the same time,” and its exact source was obscured amongst the collaborators. At the time Cage was very interested in expanding the circle of people with whom he collaborated beyond the musicians and electronic pioneers who had clustered around him and Cunningham.
Lowell Cross was one of the people Cage was interested in working with. At the time Cross was writing a thesis exploring the history of electronic music and electronic music studios between 1948 and 1953, and Cage played a large role in it. Cross was studying media and society under Marshall McLuhan at the University of Toronto, as well as ethnomusicology with Mieczysław Koliński and electronic music with Gustav Ciamaga and Myron Schaeffer.
Cage had been interested in Lowell’s work as an instrument builder and knew about his device called the Stirrer, a panning system for moving up to four sounds in space, which Cross had created between 1963 and 1965. Cage called him in February of 1968 and asked if he could build an electronic chessboard capable of selecting and diffusing sounds around an audience in a concert hall as a game unfolded. Cross at first politely declined, because he was swamped with his work at school.
Cage then made his move and said, “Perhaps you will change your mind if I tell you who my chess partner will be.” When Duchamp’s name was dropped it was enough to persuade the assiduous student to get even busier and build what would become the 16-input, 8-output chessboard used in the subsequent performance.
The chessboard had sensors that triggered the electronic music being produced by the musicians according to the way the pieces were moved. The musical outcome was beyond the control of the performers, who each had their own systems and setups feeding into the mix. The board was also equipped with contact microphones that picked up the movement of the pieces.
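The sources here say only that the moving pieces selected and diffused the musicians’ sounds, so the square-to-channel mapping below is pure invention; still, it shows the shape of a 16-input, 8-output routing matrix gated by board positions, in the spirit of Cross’s design.

```python
def route(occupied_squares):
    """Map a board position to a routing of 16 inputs across 8 outputs.

    occupied_squares: a set of (file, rank) pairs, each 0-7.
    Returns {output_channel: [input_channels]}.
    The mapping rule is hypothetical, chosen only for illustration.
    """
    routing = {out: [] for out in range(8)}
    for (file, rank) in occupied_squares:
        source = (file + 8 * (rank % 2)) % 16  # one of the 16 sound inputs
        dest = rank % 8                        # one of the 8 loudspeakers
        routing[dest].append(source)
    return routing

# The white pieces in their opening position, as a quick demonstration:
white = {(f, r) for f in range(8) for r in (0, 1)}
for out, ins in sorted(route(white).items()):
    print(f"speaker {out}: inputs {sorted(set(ins))}")
```

Every move changes the routing, so the players reshape the sound diffusion without ever choosing the sounds themselves.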
At the performance on March 5th, which kicked off the “Sightsoundsystems” performance series organized by composer Udo Kasemets, the chess players sat and smoked cigarettes and drank wine while the musicians made electronic sounds. The performance lasted for four hours and was a celebration of everyday life as a form of art. Marshall McLuhan was noted to have been in the audience.
It was these kinds of collaborative group situations that Mumma found himself drawn to, and a part of, over and over again. His talent as a composer, player, electronics specialist, and creative thinker made him an invaluable asset to all the groups and milieus he circulated within and between.
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.