Things have been quiet around here, but I have been very busy inside my secret workshop since this past fall, when my book, The Radio Phonics Laboratory, was accepted for publication by the wonderful atelier of electronica, Velocity Press. Now my book is finally ready to begin its escape from the lab and is available for pre-order. Some of you may have read the original articles that make up this book in earlier forms here on Sothis Medias, or in the Q-Fiver, the newsletter of the Oh-Ky-In Amateur Radio Society, where they had their first genesis, but this book has the additional benefit of several rewrites, the hand of a skilled editor, and much additional material not included in my original articles.

It also has the bonus that if you pre-order by March 14 you will get your name printed in the book, and those of us in North America who pre-order will receive it first, in May. The Radio Phonics Laboratory is due for release on June 14, but because the publisher is in England it won't be readily available in the US until late summer or early fall. They have to get the copies in from the printers and ship them to their distributor in California. Pre-ordering is the best option for supporting my work and the efforts of Velocity Press, and for getting a copy in your hands ahead of time for summer reading. The price is £11.99 for the paperback (about $15.16 US) plus shipping. Shipping to the US is a bit more expensive than domestic shipping, but you'll get your name printed in the book as a supporter if you order before March 14, and you'll have my gratitude.

This book is the culmination of many seeds, some planted long ago when I first started checking out weird music from the library as a teenager, stumbled across the CD compilation Imaginary Landscapes: New Electronic Music, and tuned in to radio shows like Art Damage. It is also the culmination of many, many hours of research, listening, reading and writing over a number of years. Full details about the book are below. Thanks to all of you for supporting my writing and radio activity and other creative efforts over the years. I would be grateful for any help you can give in spreading the word about The Radio Phonics Laboratory to any of your friends and family who share the love of electronic music, the avant-garde and the history of our telecommunications systems.

https://velocitypress.uk/product/radio-phonics-laboratory-book/

The Radio Phonics Laboratory explores the intersection of technology and creativity that shaped the sonic landscape of the 20th century. This fascinating story unravels the intricate threads of telecommunications, from the invention of the telephone to the advent of global communication networks.
At the heart of the narrative is the evolution of speech synthesis, a groundbreaking innovation that not only revolutionised telecommunications but also birthed a new era in electronic music. Tracing the origins of synthetic speech and its applications in various fields, the book unveils the pivotal role it played in shaping the artistic vision of musicians and sound pioneers.

The Radio Phonics Laboratory by Justin Patrick Moore is the story of how electronic music came to be, told through the lens of the telecommunications scientists and composers who helped give birth to the bleeps and blips that have captured the imagination of musicians and dedicated listeners around the world. Featuring the likes of Leon Theremin, Hedy Lamarr, Max Mathews, HAL 9000, Robert Moog, Wendy Carlos, Claude Shannon, Halim El-Dabh, Pierre Schaeffer, Pierre Henry, François Bayle, Karlheinz Stockhausen, Vladimir Ussachevsky, Milton Babbitt, Daphne Oram, Delia Derbyshire, Edgard Varèse & Laurie Spiegel.

Quotes

"From telegraphy to the airwaves, by way of Hedy Lamarr and Doctor Who, listening to Hal 9000 sing to us whilst a Clockwork Orange unravels the past and present, Moore spirits us on an expansive trip across the twentieth century of sonic discovery. The joys of electrical discovery are unravelled page by page." Robin Rimbaud aka Scanner

"Embark on an odyssey through the harmonious realms of Justin Patrick Moore's Radio Phonics Laboratory echoing the resonances of innovation and discovery. Witness the mesmerising fusion of telecommunications and musical evolution as it weaves a sonic tapestry, a testament to the boundless creativity within the electronic realm. A compelling pilgrimage for those attuned to the avant-garde rhythms of technological alchemy." Nigel Ayers

"In this captivating exploration of electronic music, Justin Patrick Moore unveils its evolution as guided by telecommunication technology, spotlighting the enigmatic laboratories of early experimenters who shaped the sound of 20th century music. A must-read for electronic musicians & sound artists alike—this book will undoubtedly find a prominent place on their bookshelves." Kim Cascone
Pierre Boulez was of the opinion that music is like a labyrinth, a network of possibilities that can be traversed by many different paths. Music need not have a clearly defined beginning, middle and end. Like the music he wrote, the life of Boulez did not follow a single track, but shifted according to the choices available. Not all of life is predetermined, even if the path of fate has already been cast. Choices remain open. Boulez held that music is an exploration of these choices. In an avant-garde composition a piece might be tied together by rhythms, tone rows, and timbre. A life might be tied together by relationships, jobs and careers, works made and things done. The choices Boulez made took him through his own labyrinth of life. As Boulez wrote, "A composition is no longer a consciously directed construction moving from a 'beginning' to an 'end' and passing from one to another. Frontiers have been deliberately 'anaesthetized', listening time is no longer directional but time-bubbles, as it were… A work thought of as a circuit, neither closed nor resolved, needs a corresponding non-homogenous time that can expand or condense".

Boulez was born in Montbrison, France, on March 26, 1925, to an engineer father. As a child he took piano lessons, played chamber music with local amateurs, and sang in the school choir. Boulez was gifted at mathematics and his father hoped he would follow him into engineering by way of an education at the École Polytechnique, but opera music intervened. He saw Boris Godunov and Die Meistersinger von Nürnberg and had his world rocked. Then he met the celebrity soprano Ninon Vallin; the two hit it off and she asked him to play for her. She saw his inherent talent and helped persuade his father to let him apply to the Conservatoire de Lyon. He didn't make the cut, but this only furthered his resolve to pursue a life path in music. His older sister Jeanne, with whom he remained close the rest of his life, supported his aspirations and helped him receive private instruction on the piano and lessons in harmony from Lionel de Pachmann. His father remained opposed to these endeavors, but with his sister as his champion he held strong. In October of 1943 he again auditioned for the Conservatoire and was turned down. Yet a door opened when he was admitted to the preparatory harmony class of Georges Dandelot. Following this, his ascension in the world of music was swift.

Two of the choices Boulez made were to have a long-lasting impact on his career. The first was his choice of teacher, Olivier Messiaen, whom he approached in June of 1944. Messiaen taught harmony outside the bounds of traditional notions, and embraced the new music of Schoenberg, Webern, Bartók, Debussy and Stravinsky. In February of 1945 Boulez got to attend a private performance of Schoenberg's Wind Quintet. The event left him breathless, and led him to his second influential teacher. The piece was conducted by René Leibowitz, and Boulez organized a group of students to take lessons from him for a time. Leibowitz had studied with Schoenberg and Anton Webern and was a friend of Jean-Paul Sartre. His performances of music from the Second Viennese School made him something of a rock star in avant-garde circles of the time. Under the tutelage of Leibowitz, Boulez was able to drink from the font of twelve-tone theory and practice. Boulez later told Opera News that this music "was a revelation — a music for our time, a language with unlimited possibilities. No other language was possible.
It was the most radical revolution since Monteverdi. Suddenly, all our familiar notions were abolished. Music moved out of the world of Newton and into the world of Einstein." The work with Leibowitz helped the young composer to make his initial contributions to integral serialism, the total artistic control of all parameters of sound, including duration, pitch, and dynamics, according to serial procedures. Messiaen's ideas about modal rhythms also contributed to his development in this area and his future work. Milton Babbitt had been the first to develop his own system of integral serialism, independently of his French counterpart, having written his study of set theory and music in 1946. At this point the two were not aware of each other's work. Babbitt's first works to use integral serialism, Three Compositions for Piano (1947) and Composition for Four Instruments (1948), date from this same period.

While studying under Messiaen, Boulez was introduced to non-Western music. He found it very inspiring and spent a period of time hanging out in the museums where he studied Japanese and Balinese musical traditions, and African drumming. Boulez later commented that, "I almost chose the career of an ethnomusicologist because I was so fascinated by that music. It gives a different feeling of time." In 1946 the first public performances of Boulez's compositions were given by pianist Yvette Grimaud. He kept himself busy living the art life, tutoring the son of his landlord in math to help make ends meet. He made further money playing the ondes Martenot, an early French electronic instrument designed by Maurice Martenot, who had been inspired by the accidental sound of overlapping oscillators he had heard while working with military radios. Martenot wanted his instrument to mimic a cello, and Messiaen used it in his famous Turangalîla-Symphonie, written between 1946 and 1948. Boulez got a chance to improvise on the ondes Martenot as an accompanist to radio dramas. He also would organize the musicians in the orchestra pit at the Folies Bergère cabaret music hall. His experience as a conductor was furthered when actor Jean-Louis Barrault asked him to play the ondes for the production of Hamlet he was making with his wife, Madeleine Renaud, for their new company at the Théâtre Marigny. A strong working relationship was formed and he became the music director for their Compagnie Renaud-Barrault. A lot of the music he had to play for their productions was not to his taste, but it put some francs in his wallet and gave him the opportunity to compose in the evening. He got to write some of his own incidental music for the productions and to tour South America and North America several times each, in addition to dates with the company around Europe. These experiences stood him in good stead when he embarked on the path of conductor as part of his musical life.

In 1949 Boulez met John Cage when he came to Paris, and helped arrange a private concert of the American's Sonatas and Interludes for Prepared Piano. Afterwards the two began an intense correspondence that lasted for six years. In 1951 Pierre Schaeffer hosted the first musique concrète workshop. Boulez, Jean Barraqué, Yvette Grimaud, André Hodeir and Monique Rollin all attended. Olivier Messiaen was assisted by Pierre Henry in creating a rhythmical work, Timbres-durées, made from a collection of percussive sounds and short snippets.
At the end of 1951, while on tour with the Renaud-Barrault company, he visited New York for the first time, staying in Cage's apartment. He was introduced to Igor Stravinsky and Edgard Varèse. Cage was becoming more and more committed to chance operations in his work, and this was something Boulez could never get behind. Instead of adopting a "compose and let compose" attitude, Boulez withdrew from Cage, and later broke off their friendship completely. In 1952 Boulez met Stockhausen, who had come to study with Messiaen, and the pair hit it off, even though neither spoke the other's language. Their friendship continued as both worked on pieces of musique concrète at the GRM, with Boulez's contribution being his Deux Études. In turn, Boulez came to Germany in July of that year for the summer courses at Darmstadt. Here he met Luciano Berio, Luigi Nono, and Henri Pousseur among others, and found himself moving into a role as an acerbic ambassador for the avant-garde.

Sound, Word, Synthesis

As Boulez got his bearings as a young composer, the connections between music and poetry came to capture his attention, as they had Schoenberg's. Poetry became integral to Boulez's orientation towards music, and his teacher Messiaen would say that the work of his student was best understood as that of a poet. Sprechgesang, or speech song, a kind of vocal technique halfway between speaking and singing, was first used in formal music by Engelbert Humperdinck in his 1897 melodrama Königskinder. In some ways sprechgesang is a German synonym for the already established practice of the recitative in operas, as found in Wagner's compositions. Arnold Schoenberg used the related term Sprechstimme for a technique in his song cycle Pierrot lunaire (1912), where he employed a special notation to indicate the parts that should be sung-spoken. Schoenberg's disciple Alban Berg used the technique in his opera Wozzeck (1924). Schoenberg employed it again in his opera Moses und Aron (1932). In Boulez's explorations of the relationship between poetry and music he questioned "whether it is actually possible to speak according to a notation devised for singing. This was the real problem at the root of all the controversies. Schoenberg's own remarks on the subject are not in fact clear."

Pierre Boulez wrote three settings of René Char's poetry: Le Soleil des eaux, Le Visage nuptial, and Le Marteau sans maître. Char had been involved with the Surrealist movement, was active in the French Resistance, and mixed freely with other Parisian artists and intellectuals. Le Visage nuptial (The Nuptial Face) from 1946 was an early attempt at reuniting poetry and music across the gap that had opened between them so long ago. He took five of Char's erotic texts and wrote the piece for two voices, two ondes Martenot, piano and percussion. In the score there are instructions for "Modifications de l'intonation vocale." His next attempt in this vein was Le Marteau sans maître (The Hammer without a Master, 1953-57), and it remains one of Boulez's most highly regarded works, a personal artistic breakthrough. He brought his studies of Asian and African music to bear on the serialist vortex that had sucked him in, and he spat out one of the stars of his own universe. The work is made up of interwoven cycles: four movements with vocals, based on settings of three poems by Char taken from his collection of the same name, and five movements of purely instrumental music. The wordless sections act as commentaries to the parts employing Sprechstimme.
First written in 1953 and 1954, the piece was revised in 1955, when Boulez reordered the movements and infused it with newly composed parts. This version was premiered that year at the Festival of the International Society for Contemporary Music in Baden-Baden. Boulez had a hard time letting his compositions, once finished, just be, and he tinkered with it some more, creating another version in 1957. Le Marteau sans maître is often compared with Schoenberg's Pierrot lunaire. By using Sprechstimme as one of the components of the piece, Boulez was able to emulate his idol Schoenberg while distinguishing his own music from that of the originator of the twelve-tone system. As with much music of the era written by his friends Cage and Stockhausen, the work is challenging to the players, and here most of the challenges are directed at the vocalist. Humming, glissandi and jumps over wide ranges of notes are common in this piece. The work takes up Char's idea of a "verbal archipelago," where the images conjured by the words are like islands that float in an ocean of relation, but with spaces between them. The islands share similarities and are connected to one another, but each is also distinct and of itself. Boulez took this concept and created a work where the poetic sections act as islands within the musical ocean.

A few years later he worked with material written by the symbolist and hermetic poet Stéphane Mallarmé, when he wrote Pli selon pli (1962). Mallarmé's poem A Throw of the Dice was a particular influence. In that poem the words are placed in various configurations across the page, with changes of size and instances of italics or all capital letters. Boulez took these and made them correspond to changes in the pitch and volume of the poetic text. The title comes from a different work by Mallarmé, and is translated as "fold according to fold." In his poem Remémoration d'amis belges, he describes how a mist gradually lifts, fold by fold, to reveal the city of Bruges. Subtitled A Portrait of Mallarmé, the work uses five of his poems in chronological order, starting with "Don du poème" from 1865 for the first movement and finishing with "Tombeau" from 1897 for the last. Some consider the last word of the piece, mort (death), to be the only intelligible word in the work. The voice is used more for its timbral qualities, weaving into the course of the music, than as something to be focused on alone. Later still, Boulez took poems by e. e. cummings and used them as inspiration for his 1970 work Cummings ist der Dichter. Boulez worked hard to relate poetry and music in his work. It is no surprise, then, that the institute he founded would go far in giving machines the ability to sing, and would foster the work of other artists who were interested in the relationships between speech and song.

Ambassador of the Avant-Garde
At the end of the 1950s Boulez left Paris for Baden-Baden, where he had scored a gig as composer in residence with the South-West German Radio Orchestra. Part of his work consisted of conducting smaller concerts. He also had access to an electronic studio, where he set to work on a new piece, Poésie pour pouvoir, for tape and three orchestras. Baden-Baden would become his home, and he eventually bought a villa there, a place of refuge to return to after the various engagements that took him around the world and on extended stays in London and New York. His experience conducting for the Théâtre Marigny had sharpened his skills in this area and made it all possible. Boulez had gained some of his first experience as a conductor in his early days as a pit boss at the Folies Bergère, and further experience when he conducted the Venezuela Symphony Orchestra while on tour with his friend Jean-Louis Barrault. In 1959 he was able to get further out of the mold of conducting incidental music for theater and get down to the business he was about: the promotion of avant-garde music. The break came when the conductor Hans Rosbaud fell ill and a replacement was needed on short notice for a program of contemporary music at the Aix-en-Provence and Donaueschingen festivals. Four years later he had the opportunity to conduct the Orchestre National de France in a fiftieth-anniversary performance of Stravinsky's The Rite of Spring at the Théâtre des Champs-Élysées in Paris, where the piece had first been premiered to the shock of its audience. Conducting suited Boulez as an outlet for his energies, and he went on to lead performances of Alban Berg's opera Wozzeck. This was followed by Wagner's Parsifal and Tristan und Isolde.

In the 1970s Boulez had a triple coup in his career. The first part of his tripartite attack for avant-garde domination involved becoming conductor and musical director of the BBC Symphony Orchestra. The second part came after Leonard Bernstein's tenure as conductor of the New York Philharmonic was over, and Boulez was offered the opportunity to replace him. He felt that through innovative programming he would be able to remold the minds of concert-goers in both London and New York. Boulez was also fond of getting people out of stuffy concert halls to experience classical and contemporary music in unusual places. In London he gave a concert at the Roundhouse, a former railway turntable shed, and in Greenwich Village he gave more informal performances during a series called "Prospective Encounters." When getting out of the hall wasn't possible he did what he could to transform the experience inside the established venue. At Avery Fisher Hall in New York he started a series of "Rug Concerts" where the seats were removed and the audience was allowed to sprawl out on the floor. Boulez wanted "to create a feeling that we are all, audience, players and myself, taking part in an act of exploration". The third prong came when the President of France asked him to return to his home country and set up a musical research center.

Read the rest of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis and the Birth of Electronic Music.

Selected Re/sources:

Benjamin, George. "George Benjamin on Pierre Boulez: 'He was simply a poet.'" <https://www.theguardian.com/music/2015/mar/20/george-benjamin-in-praise-of-pierre-boulez-at-90>
Boulez, Pierre. Orientations: Collected Writings. Cambridge, MA: Harvard University Press, 1986.
Glock, William. Notes in Advance: An Autobiography in Music. Oxford, UK: Oxford University Press, 1991.
Greer, John Michael. "The Reign of Quantity." <https://www.ecosophia.net/the-reign-of-quantity/>
Griffiths, Paul. "Pierre Boulez, Composer and Conductor Who Pushed Modernism's Boundaries, Dies at 90." <https://www.nytimes.com/2016/01/07/arts/music/pierre-boulez-french-composer-dies-90.html>
Jameux, Dominique. Pierre Boulez. London, UK: Faber & Faber, 1991.
Peyser, Joan. To Boulez and Beyond: Music in Europe Since the Rite of Spring. Lanham, MD: Scarecrow Press, 2008.
Ross, Alex. "The Godfather." <https://www.newyorker.com/magazine/2000/04/10/the-godfather>
Sitsky, Larry, ed. Music of the 20th Century Avant-Garde: A Biocritical Sourcebook. Westport, CT: Greenwood Press, 2002.

[Read Part I]

Milton Babbitt: The Musical Mathematician

Though Milton Babbitt was late to join the party started by Luening and Ussachevsky, his influence was deep. Born in 1916 in Philadelphia to a father who was a mathematician, he became one of the leading proponents of total serialism. He had started playing music as a young child, first violin and then piano, and later clarinet and saxophone. As a teen he was devoted to jazz and other popular forms of music, which he had started to write before he was even a teenager. One summer, on a trip to Philadelphia with his mother to visit her family, he met his uncle, a pianist studying music at Curtis. His uncle played him one of Schoenberg's piano compositions and the young man's mind was blown. Babbitt continued to live and breathe music, but by the time he graduated high school he felt discouraged from pursuing it as his calling, thinking there would be no way to make a living as a musician or composer. He also felt torn between his love of writing popular song and the desire to write serious music that came to him from his initial encounter with Schoenberg. He did not think the two pursuits could co-exist. Unable or unwilling to decide, he went into college specializing in math. After two years of this his father helped convince him to do what he loved and go to school for music. At New York University he became further enamored with the work of Schoenberg, who became his absolute hero, and with the Second Viennese School in general. In this time period he also got to know Edgard Varèse, who lived in a nearby apartment building. Following his degree at NYU, at the age of nineteen, he started studying privately with composer Roger Sessions at Princeton University. Sessions had started off as a neoclassicist, but through his friendship with Schoenberg he did explore twelve-tone techniques, though just as another tool he could use and modify to suit his own ends. From Sessions, Babbitt learned the technique of Schenkerian analysis, a method which uses harmony, counterpoint and tonality to find a broader sense and a deeper understanding of a piece of music. One of the other methods Sessions used to teach his students was to have them choose a piece, and then write a piece that was in a different style but used all the same structural building blocks. Sessions was hired by Princeton University to form a graduate program in music, and it was through his teacher that Babbitt eventually got his master's from the institution, joining the faculty in 1938.
During the war years he was pressed into service as a mathematician doing classified work, dividing his time between Washington, D.C., and Princeton, where he taught math to those who would need it for work such as operating radar. During this time he took a break from composing, but music never left his mind, and he turned to musical thought experiments, with a focus on aspects of rhythm. It was during this period of thinking deeply about music that he thoroughly internalized Schoenberg's system. After the war was over he went back to his hometown of Jackson and wrote a systematic study of the Schoenberg system, "The Function of Set Structure in the Twelve-Tone System." He submitted the completed work to Princeton as his doctoral thesis. Princeton didn't give out doctorates in music, only in musicology, and his complex thesis wasn't accepted until 1992, eight years after his retirement from the school. His thesis and his other extensive writings on music theory expanded upon Schoenberg's methods and formalized the twelve-tone, or "dodecaphonic," system. The basic serialist approach was to take the twelve notes of the Western scale and put them into an order called a series, hence the name of the style; such an ordering was also called a tone row. Babbitt saw that the series could be used to order not only pitch, but dynamics, timbre, duration and other elements. This led him to pioneer "total serialism," which was later taken up in Europe by composers such as Pierre Boulez and Olivier Messiaen, among others (a small code sketch of the idea appears below).

Babbitt treated music as a field for specialist research and wasn't very concerned with what the average listener thought of his compositions. This had its pluses and minuses. On the plus side it allowed him to explore his mathematical and musical creativity in an open-ended way and see where it took him, without worrying about having to please an audience. On the minus side, not keeping his listeners in mind, along with his ivory tower mindset, kept him from reaching people beyond the most serious devotees of abstract art music. This tendency was an interesting counterpoint to his years as a teenager, when he was an avid writer of pop songs and played in every jazz ensemble he could. Babbitt had thought of Schoenberg's work as being "hermetically sealed music by a hermetically sealed man." He followed suit in his own career. In this respect Babbitt can be considered a true Castalian intellectual and Glass Bead Game player. Within the Second Viennese School there was an idea, a thread taken from 19th century romanticism and adapted from the philosophy of Arthur Schopenhauer, that music provides access to spiritual truth. Influenced by this milieu, Babbitt's own music can be read and heard as connecting players and listeners to a Platonic realm of pure number. Modernist art had already moved into areas that many people did not care about. And while Babbitt was under no illusion that his work would ever be widely celebrated or popular, as an employee of the university he had to make the case that music was in itself a scientific discipline. Music could be explored with the rigors of science, and it could be made using formal mathematical structures. Performances of this kind of new music were aimed at other researchers in the field, not at a public who would not understand what they were listening to without education.
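To make the tone-row idea concrete, here is a minimal sketch in Python of a twelve-note series reused to order duration and dynamics as well as pitch, in the spirit of the total serialism described above. It is a toy illustration only, not Babbitt's or Boulez's actual working method; the duration and dynamics tables are arbitrary choices invented for the example.

```python
# A toy illustration of total serialism: one twelve-element series ("tone row")
# orders pitch, and the same index order is reused for duration and dynamics.
# The tables of values below are invented for the example, not historical.
import random

PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
DURATIONS = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0]  # beats
DYNAMICS = ["pppp", "ppp", "pp", "p", "mp", "mf", "f", "ff", "fff", "ffff", "sfz", "fp"]

def make_row(seed=0):
    """Return one ordering (a series) of the twelve pitch classes 0-11."""
    row = list(range(12))
    random.Random(seed).shuffle(row)
    return row

def retrograde(row):
    """The row played backwards."""
    return row[::-1]

def inversion(row):
    """The row with every interval mirrored around its first note."""
    first = row[0]
    return [(2 * first - p) % 12 for p in row]

row = make_row(seed=42)
# Serialize pitch, duration and dynamics from the same series.
for p in row:
    print(f"{PITCH_NAMES[p]:>2}  {DURATIONS[p]:>4} beats  {DYNAMICS[p]}")
print("retrograde:", [PITCH_NAMES[p] for p in retrograde(row)])
print("inversion: ", [PITCH_NAMES[p] for p in inversion(row)])
```

Classical serial practice also works with transpositions and the retrograde inversion; the sketch stops at the two basic transformations for brevity.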
Babbitt's approach rejected a single common practice in favor of what would become the new common practice: many different ways of investigating, playing, working with and composing music that go off in different directions. During WWII Babbitt had met John von Neumann at the Institute for Advanced Study. His association with von Neumann caused Babbitt to realize that the time wasn't far off when humans would be using computers to assist them with their compositional work. Unlike some of the other composers who became interested in electronic music, Babbitt wasn't interested in new timbres. He thought their novelty was quick to wear off. He was interested in how electronic technology might enhance human capability with regard to rhythm.

Victor

In 1957 Luening and Ussachevsky wrote up a long report for the Rockefeller Foundation covering all that they had learned and gathered so far as pioneers in the field. They included in the report another idea: the creation of the Columbia-Princeton Electronic Music Center. There was no place like it within the United States. In a spirit of synergy the Mark I was given a new home at the CPEMC by RCA. This made it easier for Babbitt, Luening, Ussachevsky and the others to work with the machine. It would, however, soon have a younger, more capable brother nicknamed Victor, the RCA Mark II, built to additional specifications requested by Ussachevsky and Babbitt. There were a number of improvements that came with Victor. The number of oscillators had been doubled, for starters. Since tape was the main medium of the new music, it also made sense that Victor should be able to output to tape instead of to lathe-cut discs. Babbitt was able to convince the engineers to fit it out with multi-track tape recording on four tracks. Victor also received a second tape punch input, a new bank of vacuum tube oscillators, noise-generating capabilities, additional effect processes, and a range of other controls.

Conlon Nancarrow, who was also interested in rhythm as an aspect of his composition, bypassed the issue of getting players up to speed with complex and fast rhythms by writing works for player piano, punching the compositions literally onto the roll. Nancarrow had also studied under Roger Sessions, and he and Babbitt knew each other in the 1930s. Though Nancarrow worked mostly in isolation in Mexico City during the 1940s and 1950s, only gaining critical recognition from the 1970s onwards, it is almost certain that Babbitt would have been at least tangentially aware of his work composing on punched player piano rolls. Nancarrow did use player pianos that he had altered slightly to increase their dynamic range, but they still had all the acoustic limitations of the instrument. Babbitt, on the other hand, found himself with a unique instrument capable of realizing his vision for a complex, maximalist twelve-tone music, made available to him through the punched-paper input of the RCA Mark II and its ability to do multitrack recording. This gave him the complete compositional control he had long sought. For Babbitt, it wasn't the new timbres that could be created with the synth that interested him so much as being able to execute a score exactly in all parameters. His Composition for Synthesizer (1961-1963) became a showcase piece, not only for Babbitt, but for Victor as well.
His masterpiece Philomel (1963-1964) saw the material realized on the synth accompanied by the soprano Bethany Beardslee, and it subsequently became his most famous work. In 1964 he also completed Ensembles for Synthesizer. All of these are unique in the respect that none of them featured the added effects that many of the other composers using the CPEMC availed themselves of; these were outside the ambit of his vision. Phonemena (1975), for voice and synthesizer, is a work whose text is made up entirely of phonemes. Here he explores a central preoccupation of electronic music, the nature of speech. It features twenty-four consonants and twelve vowel sounds. As ever with Babbitt, these are sung in a number of different combinations, with musical explorations focusing on pitch and dynamics.

A teletype keyboard was attached directly to the long wall of electronics that made up the synth. It was here the composer programmed his or her inventions by punching them onto a roll of perforated paper that was fed into Victor and made into music. The code for Victor was binary and controlled settings for frequency, octave, envelope, volume and timbre in the two channels. A worksheet had been devised that transposed musical notation to code. In a sense, creating this kind of music was akin to working in encryption, or playing a glass bead game where one kind of knowledge or form of art was connected to another via punches in a matrix grid.

Wired for Wireless

Babbitt's works were just a few of the many distilled from the CPEMC. Not all of the center's composers were as obsessed with complete compositional control as Babbitt; many utilized the full suite of processes available at the studio, including the effects units, to create their works, and their works were plentiful. The CPEMC released more recorded electronic music out into the world than anywhere else in North America. During the first few years of its operation, from 1959 to 1961, the capabilities of the studio were explored by Egyptian-American composer and ethnomusicologist Halim El-Dabh, who had been the first to remix recorded sounds using the effects then available to him at Middle East Radio in Cairo. He had come to the United States with his family on a Fulbright fellowship in 1948 and proceeded to study music under such composers as Ernst Krenek and Aaron Copland, among a number of others. In time he settled in Demarest, New Jersey. El-Dabh quickly became a fixture in the new music scene in New York, running in the same circles as Henry Cowell, John Cage, and Edgard Varèse. By 1955 El-Dabh had gotten acquainted with Luening and Ussachevsky. At this point his first composition for wire recorder was eleven years behind him, and he had kept up his experimentation in the meantime. Though he had been assimilated into the American new music milieu, he came from outside the established scenes of both his adopted land and the European avant-garde. As he had with the Elements of Zaar, El-Dabh brought his love of folk music into the fold. His work at the CPEMC showcased his unique combinations involving extensive use of percussion and string sounds, singing and spoken word, alongside the electronics. He also availed himself of Victor and made extensive use of the synthesizer. In 1959 alone he produced eight works at the CPEMC. These included his realization of Leiyla and the Poet, an electronic drama. El-Dabh said of his process that it "comes from interacting with the material. When you are open to ideas and thoughts the music will come to you."
His less abstract, non-mathematical creations remain an enjoyable counterpoint to the cerebral enervations of his colleagues. A few of the other pieces he composed while working in the studio include Meditation in White Sound, Alcibiadis' Monologue to Socrates, Electronics and the World, and Venice. El-Dabh influenced such musical luminaries as Frank Zappa and the West Coast Pop Art Experimental Band, his fellow CPEMC composer Alice Shields, and the West Coast sound-text poet and KPFA broadcaster and music director Charles Amirkhanian.

In 1960 Ussachevsky received a commission from a group of amateur radio enthusiasts, the De Forest Pioneers, to create a piece in tribute to their namesake. In the studio Vladimir composed something evocative of the early days of radio and titled it "Wireless Fantasy". He recorded Morse code signals tapped out by early radio guru Ed G. Raser on an old spark generator in the W2ZL Historical Wireless Museum in Trenton, New Jersey. Among the signals used were: QST; DF, the station ID of Manhattan Beach Radio, a well-known early broadcaster with a range from Nova Scotia to the Caribbean; WA NY, for the Waldorf-Astoria station that started transmitting in 1910; and DOC DF, De Forest's own code nickname. The piece ends suitably with AR, for end of message, and GN, for good night. Woven into the various wireless sounds used in this piece are strains of Wagner's Parsifal, treated with the studio equipment to sound as if it were a shortwave transmission. In his first musical broadcast Lee De Forest had played a recording of Parsifal, then heard for the first time outside of Germany.

From 1960 to 1961 Edgard Varèse utilized the studio to create a new realization of the tape parts for his masterpiece Déserts. He was assisted in this task by Max Mathews from the nearby Bell Laboratories, and by the Turkish-born Bülent Arel, who came to the United States on a grant from the Rockefeller Foundation to work at the CPEMC. Arel composed his Stereo Electronic Music No. 1 and 2 with the aid of the CPEMC facilities. Daria Semegen was a student of Arel's who composed her work Electronic Composition No. 1 at the studio. There were numerous other composers, some visiting, others there as part of their formal education, who came and went through the halls and walls of the CPEMC. Luciano Berio worked there, as did Mario Davidovsky, Charles Dodge, and Wendy Carlos, just to name a few.

Modulation in the Key of Bode
Engineer and instrument inventor Harald Bode made contributions to the CPEMC just as he had at WDR. He had come to the United States in 1954, setting up camp in Brattleboro, Vermont, where he worked on the lead development team at the Estey Organ Corporation, eventually climbing to the position of vice president. In 1958 he set up his own company, the Bode Electronics Corporation, as a side project in addition to his work at Estey. Meanwhile Peter Mauzey had become the first director of engineering at the CPEMC. Mauzey was able to customize a lot of the equipment and set up the operations so that it became a comfortable place for composers. When he wasn't busy tweaking the systems in the studio, Mauzey taught as an adjunct professor at Columbia University, all while also working as an engineer at Bell Labs in New Jersey. Robert Moog happened to be one of Mauzey's students at Columbia, and under him Moog continued to develop his considerable electrical chops, even though he never set foot in the studio his teacher had helped build.

Bode left to join the Wurlitzer Organ Co. in Buffalo, New York, when Estey hit rough waters and ran aground around 1960. It was while working for Wurlitzer that Bode realized the power the new transistors represented for making music. Bode got the idea that a modular instrument could be built, whose different components would then be connected together as needed. The instrument born from his idea was the Audio System Synthesiser. Using it, he could connect a number of different devices, or modules, in different ways to create or modify sounds. These included the basic electronic music components then in production: ring modulators, filters, reverb generators and other effects. All of this could then be recorded to tape for further processing. Bode gave a demonstration of his instrument at the Audio Engineering Society in New York in 1960. Robert Moog was there to take in the knowledge and the scene. He was inspired by Bode's ideas, and this led to his own work in creating the Moog. In 1962 Bode started to collaborate with Vladimir Ussachevsky at the CPEMC. Working with Ussachevsky he developed the 'Bode Ring Modulator' and the 'Bode Frequency Shifter'. These became staples at the CPEMC and were both produced under the Bode Sound Co. name and licensed to Moog for inclusion in his modular systems. All of these effects became widely used in electronic music studios, and in the popular music of those experimenting with the Moog in the 1960s. In 1974 Bode retired, but kept on tinkering on his own. In 1977 he created the Bode Vocoder, which he also licensed to Moog, and in 1981 he invented his last instrument, the Bode Barberpole Phaser.

.:. .:. .:.

Read part I. Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.

RE/SOURCES:

Holmes, Thom. Electronic and Experimental Music. Sixth Edition.
Music of the 20th Century Avant-Garde: A Biocritical Sourcebook
https://ubu.com/sound/ussachevsky.html
Columbia-Princeton Electronic Music Center 10th Anniversary, New World Records, Liner Notes, NWCRL268, Original release date: 1971-01-01
https://120years.net/wordpress/the-rca-synthesiser-i-iiharry-olsen-hebert-belarusa1952/
https://cmc.music.columbia.edu/about
https://betweentheledgerlines.wordpress.com/2013/06/08/milton-babbitt-synthesized-music-pioneer/
http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/olson-harry.pdf
http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/seashore-carl.pdf
https://snaccooperative.org/ark:/99166/w6737t86
https://happymag.tv/grateful-dead-wall-of-sound/
https://ubu.com/sound/babbitt.html
https://www.youtube.com/watch?v=c9WvSCrOLY4
https://www.youtube.com/watch?v=6BfQtAAatq4
Babbitt, Milton. Words About Music. University of Wisconsin Press, 1987.
https://en.wikipedia.org/wiki/Combinatoriality
http://musicweb-international.com/classRev/2002/Mar02/Hauer.htm
http://www.bruceduffie.com/babbitt.html
http://cec.sonus.ca/econtact/13_4/palov_bode_biography.html
http://cec.sonus.ca/econtact/13_4/bode_synthesizer.html
http://esteyorganmuseum.org/

Otto Luening and Vladimir Ussachevsky

In America the laboratories for electronic sound took a different path of development, first emerging out of the universities and the private research facility of Bell Labs. It was a group of composers at Columbia and Princeton who had banded together to build the Columbia-Princeton Electronic Music Center (CPEMC), the oldest dedicated place for making electronic music in the United States. Otto Luening, Vladimir Ussachevsky, Milton Babbitt and Roger Sessions all had their fingers on the switches in creating the studio.

Otto Luening was born in 1900 in Milwaukee, Wisconsin, to parents who had emigrated from Germany. His father was a conductor and composer and his mother a singer, though not in a professional capacity. His family moved back to Europe when he was twelve, and he ended up studying music in Munich. At age seventeen he went to Switzerland, and it was at the Zurich Conservatory that he came into contact with the futurist composer Ferruccio Busoni. Busoni was himself a devotee of Bernard Ziehn and his "enharmonic law," which stated that "every chord tone may become the fundamental." Luening picked this up and put it under his belt. Luening eventually went back to America, worked at a slew of different colleges, and began to advocate on behalf of the American avant-garde. This led to him assisting Henry Cowell with the publication of the quarterly New Music. He also took over New Music Quarterly Recordings from Cowell, which put out seminal recordings from those inside the new music scene. It was in 1949 that he went to Columbia for a position on the staff of the philosophy department, and it was there he met Vladimir Ussachevsky.

Ussachevsky had been born in Manchuria in 1911 to Russian parents. In his early years he was exposed to the music of the Russian Orthodox Church and a variety of piano music, as well as the sounds of the land where he was born. He gravitated to the piano and gained experience as a player in restaurants and as an improviser providing the live soundtrack to silent films.
In 1930 he emigrated to the United States, went to various schools, served in the army during WWII, and eventually came under the wing of Otto Luening as a postdoctoral student at Columbia University, where he in turn became a professor. In 1951 Ussachevsky convinced the music department to buy a professional Ampex tape recorder. When it arrived it sat in its box for a time, and he was apprehensive about opening it up and putting it to use. "A tape-recorder was, after all, a device to reproduce music, and not to assist in creating it," he later said in recollection of the experience. When he finally did start to play with the tape recorder, the experiments began as he figured out what it was capable of doing, first using it to transpose piano pitches. Peter Mauzey was an electrical engineering student who worked at the university radio station WKCR, and he and Ussachevsky got to talking one day. Mauzey was able to give him some technical pointers for using the tape recorder. In particular he showed him how to create feedback by making a tape loop that ran over two playback heads, and helped him get it set up (a rough digital sketch of this idea appears below). The possibilities inherent in tape opened up a door for Ussachevsky, and he became enamored of the medium, well before he'd ever heard of what Pierre Schaeffer and his crew were doing in France, or what Stockhausen and company were doing in Germany.

Some of these first pieces that Ussachevsky created were presented at a Composers Forum concert in the McMillin Theatre on May 9, 1952. The following summer Ussachevsky presented some of his tape music at another composers conference in Bennington, Vermont. He was joined by Luening in these efforts. Luening was a flute player, and they used tape to transpose his playing into pitches impossible for an unaided human, adding further effects such as echo and reverb. After these demonstrations Luening got busy working with the tape machine himself and started composing a series of new works at Henry Cowell's cottage in Woodstock, New York, where he had brought up the tape recorders, microphones, and a couple of Mauzey's devices. The new works included his Fantasy in Space, Low Speed, and Invention in Twelve Tones. Luening also recorded parts for Ussachevsky to use in his tape composition, Sonic Contours. In November of 1952 Leopold Stokowski premiered these pieces, along with ones by Ussachevsky, in a concert at the Museum of Modern Art, placing them squarely in the experimental tradition and helping the tape techniques to be seen as a new medium for music composition.

Thereafter, the rudimentary equipment that was the seed material from which the CPEMC would grow moved around from place to place. Sometimes it was in New York City, at other times in Bennington or at the MacDowell Colony in New Hampshire. There was no dedicated space and home for the equipment. The Louisville Orchestra wanted to get in on the new music game and commissioned Luening to write a piece for them to play. He agreed and brought Ussachevsky along to collaborate with him on the work, which became the first composition for tape recorder and orchestra. To fully realize it they needed additional equipment: two more tape recorders and a filter, none of which was cheap in the 1950s, so they secured funding through the Rockefeller Foundation. After their work was done in Louisville, all of the gear they had so far acquired was assembled in Ussachevsky's apartment, where it remained for three years.
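The tape-loop feedback trick Mauzey showed Ussachevsky, described above, behaves in digital terms like a delay line whose output is attenuated and fed back into its input. The sketch below is only an analogy under that assumption; the delay time and feedback amount are made-up parameters, not measurements of the original setup.

```python
# Rough digital analogue of a tape loop feeding its playback back into the record
# input: a circular delay buffer with a feedback gain. Parameter values are
# illustrative only.

def tape_echo(dry, sample_rate=44100, delay_seconds=0.3, feedback=0.5):
    delay_samples = int(sample_rate * delay_seconds)
    buffer = [0.0] * delay_samples        # the stretch of "tape" between the heads
    out = []
    pos = 0
    for x in dry:
        delayed = buffer[pos]             # what the playback head picks up now
        out.append(x + delayed)           # dry signal mixed with the echo
        buffer[pos] = x + feedback * delayed  # re-record onto the loop, quieter each pass
        pos = (pos + 1) % delay_samples
    return out

# A single click followed by silence comes back as a decaying train of echoes.
click = [1.0] + [0.0] * (44100 * 2)
echoes = tape_echo(click)
```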
It was at this time, in 1955, that they went looking for a permanent home for the studio and enlisted the help of Grayson Kirk, president of Columbia, to secure a dedicated space at the university. He was able to help, and put them in a small two-story house that had once been part of the Bloomingdale Asylum for the Insane and was slated for demolition. Here they produced works for an Orson Welles production of King Lear, and the compositions Metamorphoses and Piece for Tape Recorder. These efforts paid off when they garnered the enthusiasm of the historian and professor Jacques Barzun, who championed their work and gained them further support. With additional aid from Kirk, Luening and Ussachevsky were eventually given a stable home for their studio inside the McMillin Theatre. Having heard about what was going on in the studios of Paris and Germany, the pair wanted to check them out in person, to see what they could learn and possibly put to use in their own fledgling studio. They were able to do this on the Rockefeller Foundation's dime. When they came back, they would soon be introduced to a machine that, in its second iteration, would go by the name of Victor.

The Microphonics of Harry F. Olson

One of Victor's fathers was a man named Harry Olson (1901-1982), a native of Iowa who had the knack. He became interested in electronics and all things technical at an early age. He was encouraged by his parents, who provided the materials necessary to build a small shop and lab. For a young boy he made remarkable progress exploring where his inclinations led him. In grade school he built and flew model airplanes at a time when aviation itself was still getting off the ground. When he got into high school he built a steam engine and a wood-fired boiler whose power he used to drive a DC generator he had repurposed from automobile parts. His next adventure was to tackle ham radio. He constructed his own station, demonstrated his skill in Morse code and station operation, and obtained his amateur license. All of this curiosity, hands-on experience, and diligence served him well when he went on to pick up a bachelor's degree in electrical engineering. He next picked up a master's with a thesis on acoustic wave filters, and topped it all off with a Ph.D. in physics, all from the University of Iowa in his home state. While working on his degrees Olson had come under the tutelage of Dean Carl E. Seashore, a psychologist who specialized in the fields of speech and stuttering, audiology, music, and aesthetics. Seashore was interested in how different people perceived the various dimensions of music and how ability differed between students. In 1919 he developed the Seashore Test of Music Ability, which set out to measure how well a person could discriminate between timbre, rhythm, tempo, loudness and pitch. A related interest was in how people judged visual artwork, and this led him to work with Dr. Norman Charles Meier to develop another test, on art judgment. All of this work led Seashore to eventually receive financial backing from Bell Laboratories. Another one of Olson's mentors was the head of the physics department, G. W. Stewart, under whom he did his work on acoustic wave filters. Between Seashore's and Stewart's influence, Olson developed a keen interest in the areas of acoustics, sound reproduction, and music. With his advanced degrees and long history of experimentation in tow, Olson headed to the Radio Corporation of America (RCA), where he became part of the research department in 1928.
After putting in some years in various capacities, he was put in charge of the Acoustical Research Laboratory in 1934. Eight years later, in 1942, the lab was moved from Camden to Princeton, New Jersey. The facilities at the lab included an anechoic chamber that was, at the time, the largest in the world. A reverberation chamber and an ideal listening room were also available to him. It was in these settings that Olson went on to develop a number of different types and styles of microphone. He developed microphones for radio broadcast, microphones for motion picture use, directional microphones, and noise-cancelling microphones. Alongside the mics, he created new designs for loudspeakers. During WWII Olson was put to work on a number of military projects. He specialized in the area of underwater sound and antisubmarine warfare, but after the war he got back to his main focus of sound reproduction. Taking a cue from Seashore, he set out to determine what a listener's preferred bandwidth actually was when sound had been recorded and reproduced. To figure this out he designed an experiment in which he put an orchestra behind a screen fitted with a low-pass acoustic filter that cut off the high-frequency range above 5000 Hz. The filter could be opened or closed, the bandwidth full or restricted. Audiences, who did not know when the concealed filter was open or closed, leaned strongly towards the open, full-bandwidth listening experience. They did not like the sound when the filter was engaged. For the next phase of his experiment Olson replaced the orchestra, which the audience couldn't see anyway, with a sound-reproduction system whose loudspeakers were located in the position of the orchestra. Listeners still preferred the full-bandwidth sound, but only when it was free of distortion. When small amounts of non-linear distortion were introduced, they preferred the restricted bandwidth. These efforts showed the extreme care that needed to go into developing high-fidelity audio systems. In the 1950s Olson stayed extremely busy working on many projects for RCA. One included the development of magnetic tape capable of recording and transmitting color television for broadcast and playback. This led to a collaboration between RCA and the 3M company, which reached success in its aim in 1956.

The RCA Mark I Synthesizer

Claude Shannon's 1948 paper "A Mathematical Theory of Communication" was putting the idea of information theory into the heads of everyone involved in the business of telephone and radio. RCA had put large sums of money into their recorded and broadcast music, and the company was quick to grasp the importance and implications of Shannon's work. In his own work at the company, Olson was a frequent collaborator with fellow senior engineer Herbert E. Belar (1901-1997). They worked together on theoretical papers and on practical projects. On May 11, 1950, they issued their first internal research report on information theory, "Preliminary Investigation of Modern Communication Theories Applied to Records and Music." Their idea was to consider music as math. This in itself was not new, and can indeed be traced back to the Pythagorean tradition of music. To this ancient pedigree they added a contemporary twist, treating music mathematically as information. They realized that, with the right tools, they would be able to generate music from math itself, instead of from traditional instruments.
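The notion of generating music from math itself, rather than from a traditional instrument, is easy to demonstrate with modern tools: an oscillator is a formula evaluated over time, and vibrato and an envelope are further formulas shaping it. The sketch below is a present-day toy along those lines and does not model the RCA machines or their parameters in any way; every value in it is arbitrary.

```python
# A toy "music from math" example: a sine oscillator with vibrato and a decaying
# envelope, rendered to a WAV file. Purely illustrative; not a model of the RCA Mark I.
import math
import struct
import wave

SAMPLE_RATE = 44100

def synth_note(freq=440.0, seconds=2.0, vibrato_hz=6.0, vibrato_depth=3.0):
    samples = []
    phase = 0.0
    for n in range(int(SAMPLE_RATE * seconds)):
        t = n / SAMPLE_RATE
        # Instantaneous frequency wobbles around the base pitch (vibrato).
        f = freq + vibrato_depth * math.sin(2 * math.pi * vibrato_hz * t)
        phase += 2 * math.pi * f / SAMPLE_RATE   # accumulate phase from frequency
        env = math.exp(-3.0 * t)                 # simple exponential decay envelope
        samples.append(env * math.sin(phase))
    return samples

with wave.open("note.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)              # 16-bit samples
    w.setframerate(SAMPLE_RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in synth_note()))
```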
On February 26, 1952, they demonstrated their first experiment towards this goal to David Sarnoff, head of RCA, and others in the upper echelons of the company. They had the machine they had built perform the songs "Home Sweet Home" and "Blue Skies". The officials gave them the green light, and this led to further work and the development of the RCA Mark I Synthesizer. The RCA Mark I was in part a computer, as it had simple programmable controls, yet the part of it that generated sound was completely analog. The Mark I had a large array of twelve oscillator circuits, one for each of the basic twelve tones of the musical scale. These could be modified by the synth's other circuits to create an astonishing variety of timbre and sound. The RCA Mark I was not a machine that could make automatic music. It had to be completely programmed by a composer. The flexibility of the machine and the range of possibilities gave composers a new kind of freedom and a new kind of autocracy: total compositional control. This had long been the dream of those who were bent towards serialism. The programming aspect of the RCA Mark I hearkened back to the player pianos that had first appeared in the 19th century, and used a roll of punched tape to instruct the machine what to do. Olson and Belar had been meticulous about all of the aspects that could be programmed on their creation. These included pitch, timbre, amplitude, envelope, vibrato, and portamento. It even included controls for frequency filtering and reverb. All of this could be output to two channels and played on loudspeakers, or sent to a disc lathe where the resulting music could be cut straight to wax. It was introduced to the public by Sarnoff on January 31, 1955.

The timing was great as far as Ussachevsky and Luening were concerned, as they first heard about it after they had returned from a trip to Europe where they had visited the GRM, WDR, and some other emerging electronic music studios. The trip had left them eager to establish their own studio and to work on electronic music in their own way. When they met Schaeffer, he had been eager to impose his own aesthetic values on the pair, and when they met Stockhausen, he remained secretive about his working methods and aloof about their presence. Despite this, they were excited about getting to work on their own, even if exhausted from the rigors of travel. They made an appointment with the folks at RCA for a demonstration of the Mark I Synthesizer. The RCA Mark I far surpassed what Luening and Ussachevsky had witnessed in France, Germany and the other countries they visited. With its twelve separate audio frequency sources the synth was a complete and complex unit, and while programming it could be laborious, it was a different kind of labor than the heavy tape manipulation they had been doing in their studio and the accustomed ways of working at the other studios they had seen in operation. The pair soon found another ally in Milton Babbitt, who was then at Princeton University. He too had a keen interest in the synth, and the three of them began to collaborate and share time on the machine, which they had to request from RCA. For three years the trio made frequent trips to Sarnoff Laboratories in Princeton, where they worked on new music.

.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.

RE/SOURCES:
Holmes, Thom. Electronic and Experimental Music. Sixth Edition.
Music of the 20th Century Avant-Garde: A Biocritical Sourcebook
https://ubu.com/sound/ussachevsky.html
Columbia-Princeton Electronic Music Center 10th Anniversary, New World Records, Liner Notes, NWCRL268, Original release date: 1971-01-01
https://120years.net/wordpress/the-rca-synthesiser-i-iiharry-olsen-hebert-belarusa1952/
https://cmc.music.columbia.edu/about
https://betweentheledgerlines.wordpress.com/2013/06/08/milton-babbitt-synthesized-music-pioneer/
http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/olson-harry.pdf
http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/seashore-carl.pdf
https://snaccooperative.org/ark:/99166/w6737t86
https://happymag.tv/grateful-dead-wall-of-sound/
https://ubu.com/sound/babbitt.html
https://www.youtube.com/watch?v=c9WvSCrOLY4
https://www.youtube.com/watch?v=6BfQtAAatq4
Babbitt, Milton. Words About Music. University of Wisconsin Press. 1987
https://en.wikipedia.org/wiki/Combinatoriality
http://musicweb-international.com/classRev/2002/Mar02/Hauer.htm
http://www.bruceduffie.com/babbitt.html
http://cec.sonus.ca/econtact/13_4/palov_bode_biography.html
http://cec.sonus.ca/econtact/13_4/bode_synthesizer.html
http://esteyorganmuseum.org/

Sferics is one of Lucier's most elegant and simple works. It is just a recording. Other versions of Sferics could be produced, and many science and radio hobbyists make similar recordings without ever having heard of Alvin Lucier. The phenomena at the heart of Sferics existed long before they could be detected and recorded. Listening to this form of natural radio requires going down to the Very Low Frequency (VLF) portion of the radio spectrum.
The title of Lucier's work refers to broadband electromagnetic impulses that occur as a result of natural atmospheric lightning discharges and can be picked up as natural radio-frequency emissions. Listening to these atmospherics dates all the way back to Thomas Watson, assistant of Alexander Graham Bell, as mentioned at the beginning of this book. He picked them up on the long telegraph lines, which acted as VLF antennas. Since his time, telegraph operators, radio hobbyists, and technicians have heard these sounds coming in over their equipment. For some, chasing after sferics has become a hobby in itself. The VLF band ranges from about 3 kHz to 30 kHz, and the wavelengths at these frequencies are huge. Most commercial ham radio transceivers tend to only go as low as 160 meters, which translates to between 1.8 and 2 MHz in frequency. A VLF wave at 3 kHz, by comparison, has a wavelength of about 100 kilometers. The VLF range also overlaps the upper portion of the range of human hearing, which runs from 20 Hz to 20 kHz. Yet since sferics are electromagnetic waves rather than sound waves, a person needs radio ears to listen to them: i.e., an antenna and receiver. On average lightning strikes about forty-four times a second worldwide, adding up to around 1.4 billion flashes a year. It's a good thing the weather acts as a variable distribution system for these strikes, though some places get hit more than others. The discharge of all this electricity means there are a lot of electromagnetic emissions from these strikes going straight into the VLF band, where they can be listened to with the right equipment. Because these wavelengths are so long, you could be in California listening to a thunderstorm in Italy or India, or in Maine listening to sferics caused by storms in Australia. The sound of sferics is kind of soothing and reminds me of the crackle of old vinyl that has been unearthed from a dusty vault in a thrift store's basement. There are lots of pops and lots of hiss. As these are natural sounds picked up with the new extensions to our nervous system made available by telecommunications, listening to sferics has the same kind of soothing effect as listening to a field recording of an ocean, or a stream meandering through lonely woods. But for a long time, listeners, hobbyists and scientists didn't really know what caused these emissions. During the scientific research activities surrounding the International Geophysical Year (IGY) of 1957-58, their presence and source were verified. The IGY was a yearlong international scientific project that managed to receive backing from sixty-seven countries in the East and West despite the ongoing tensions of the Cold War. The focus of the projects was on earth science. Scientists looked into phenomena surrounding the aurora borealis, geomagnetism, ionospheric physics, meteorology, oceanography, seismology, and solar activity. This was an auspicious area of study for the scientists, as the timing of the IGY coincided with the peak of solar cycle 19. When a solar cycle is at its peak, the ionosphere is highly charged by the sun, making radio communications easier and producing more occurrences of aurora, among other natural wonders. One of the researchers was a man by the name of Millett G. Morgan, and his recordings would go on to have a direct influence on Alvin Lucier.
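A quick numerical aside on the figures above, as a minimal sketch in Python using the free-space relation wavelength = speed of light / frequency. The 44-strikes-per-second rate is the one quoted above; the rest are standard constants:

```python
# Back-of-the-envelope checks for the VLF figures quoted above.

C = 299_792_458  # speed of light in metres per second

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength in metres for a given frequency in hertz."""
    return C / freq_hz

# The VLF band runs from roughly 3 kHz to 30 kHz.
print(f"3 kHz  -> {wavelength_m(3e3) / 1000:.0f} km wavelength")   # ~100 km
print(f"30 kHz -> {wavelength_m(30e3) / 1000:.0f} km wavelength")  # ~10 km

# For comparison, the bottom of most commercial ham transceivers, 160 metres:
print(f"1.8 MHz -> {wavelength_m(1.8e6):.0f} m wavelength")        # ~167 m

# Lightning: roughly 44 strikes per second over the whole globe.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(f"Flashes per year: {44 * SECONDS_PER_YEAR:.2e}")            # ~1.4 billion
```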
Morgan was an astrophysicist who had established one of the first programs to use the fresh discoveries occurring in the VLF band as a way to investigate the properties of space plasma around the earth, in the region now known as the upper ionosphere and magnetosphere. His inquiries into this area allowed for deep gains in knowledge in a new area of study before spacecraft began making direct observations of the region. Morgan was also a ham radio operator with the call sign W1HDA. He had been interested in radio since he was a teenager, and throughout his career found ways to use this inclination and knack in his research on propagation. Throughout the 1940s and early 50s Morgan and his colleagues conducted radar experiments near his home in Hanover, New Hampshire. The purpose of these studies was to observe two modes of propagation that magnetoionic theories had predicted would occur when radio waves entered the atmosphere. During the IGY he chaired the US National Committee's Panel on Ionospheric Research of the National Research Council. In this capacity he oversaw the radio studies being conducted all around the earth. As part of that work he joined the re-supply mission to the US Antarctic station on the Weddell Sea in early 1958 as the senior scientific representative. For his own specific research he maintained a series of far-flung stations spread across the Americas. It was from these that he made a number of recordings of natural radio signals. Lucier later heard these at Brandeis. The composer writes, "My interest in sferics goes back to 1967, when I discovered in the Brandeis University Library a disc recording of ionospheric sounds by astrophysicist Millett Morgan of Dartmouth College. I experimented with this material, processing it in various ways -- filtering, narrow band amplifying and phase-shifting -- but I was unhappy with the idea of altering natural sounds and uneasy about using someone else's material for my own purposes." Morgan's recordings were made at a network of receiving stations, and he interpreted the audio data he collected to obtain some of the earliest measurements of free electron density in the thousands of kilometers above earth. A colorful vocabulary was built up to describe the sounds heard in the VLF portion of the spectrum. Sferics that traveled over 2,000 kilometers often shifted their tone and came to be called tweeks; the signal would become dispersed as it traveled, cutting off part of the sound and giving it a higher, more treble-heavy ring. Whistlers were another phenomenon heard on the air. They occurred when the energy of a lightning strike propagated out of the ionosphere and into the magnetosphere, along geomagnetic lines of force. The sound of a whistler is one of a descending tone, like a whistle fading into the background, hence its name. It is similar to the tweek, but elongated, because the signal stretches out as it travels away from the surface along the Earth's magnetic field lines before returning. Dawn chorus is another atmospheric effect some lucky eavesdroppers in the VLF range may be able to pick up from time to time, locally around dawn. The cause is thought to be energetic electrons injected into the inner magnetosphere, something that occurs more frequently during magnetic storms. These electrons interact with the normal ambient background noise heard in the VLF band to create a sound similar to that of birdsong in the morning.
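For readers who want a rough sense of a whistler's falling glide without a VLF receiver, here is a minimal sketch (Python, standard library only) that synthesizes a crude imitation of one as a descending exponential chirp and writes it to a WAV file. The sweep range and duration are arbitrary choices for illustration, not measurements of real whistlers, which are dispersed natural electromagnetic signals rather than pure tones:

```python
# A crude imitation of a whistler's descending glide (illustrative only).
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION = 2.0                    # seconds
F_START, F_END = 6000.0, 300.0    # arbitrary high-to-low sweep, in Hz

frames = bytearray()
phase = 0.0
for i in range(int(SAMPLE_RATE * DURATION)):
    t = i / SAMPLE_RATE
    # Exponential glide from F_START down to F_END over the duration.
    freq = F_START * (F_END / F_START) ** (t / DURATION)
    phase += 2 * math.pi * freq / SAMPLE_RATE
    sample = 0.4 * math.sin(phase)
    frames += struct.pack("<h", int(sample * 32767))  # 16-bit PCM

with wave.open("whistler_sketch.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(SAMPLE_RATE)
    w.writeframes(bytes(frames))
```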
The dawn chorus is most likely to be heard when aurorae are active, when it is dubbed the auroral chorus. Morgan's experimental work in recording these phenomena created a foundation to study such things as how the earth and its magnetic field interact with the solar wind. Listening to Morgan's recordings wasn't enough for Lucier. "I wanted to have the experience of listening to these sounds in real time and collecting them for myself. When Pauline Oliveros invited me to visit the music department at the University of California at San Diego a year later, I proposed a whistler recording project. Despite two weeks of extending antenna wire across most of the La Jolla landscape and wrestling with homemade battery-operated radio receivers, Pauline and I had nothing to show for our efforts. . . ." The idea was shelved for over a decade. In 1981 Lucier tried again. He got hold of some better equipment and went out to a location in Church Park, Colorado, on August 27th, 1981. For the Colorado recording he collected material continuously from midnight to dawn with a pair of homemade antennas and a stereo cassette tape recorder. He repositioned the antennas at regular intervals to explore the directivity of the propagated signals and to shift the stereo field. In the early 80s Morgan was continuing his own radio investigations. He built a network of radar observing stations to study gravity waves that propagate to lower latitudes of Earth from the arctic region. These gravity waves appear as propagating undulations in the lower layers of the ionosphere. Lucier wasn't the only musician to be interested in this phenomenon. Electronic music producer Jack Dangers explored these sounds under his moniker Meat Beat Manifesto on the song The Tweek from the album Actual Sounds & Voices. Pink Floyd used dawn chorus on the opening track of their 1994 album The Division Bell. VLF enthusiast Stephen P. McGreevy has been tracking these sounds for some time, collecting a lot of recordings and releasing them on CD and on the internet via archive.org. At the time of this writing he has made eight albums of such recordings. On the communications side of things, the VLF band's interesting properties have been exploited for use in submarine communication. VLF waves can penetrate sea water to some degree, whereas most other radio waves are reflected off the water. This has allowed for low-bitrate communications across the VLF band by the world's militaries. Some hams have also taken up experimenting with communication across VLF, learning more about its unique propagation in doing so. Just as the Hub was getting off the ground and into circulation as a performing ensemble, one of its members, Scott Gresham-Lancaster, was working with Pauline Oliveros on a new project she had initiated to create the ultimate delay system: bouncing her music off the surface of the moon and back to earth with the help of an amateur radio operator. Since Pauline had first started working with tape she had always been interested in delay systems. Later she started exploring the natural delays and reverberations found in places such as caves, silos and the fourteen-foot cistern at the abandoned Fort Worden in Washington state. The resonant space at Fort Worden in particular had been important in the evolution of Pauline's sound.
It was there she descended the ladder with fellow musicians Panaiotis, a vocalist, and trombonist Stuart Dempster to record what would become her Deep Listening album. The cistern, supported by reinforced concrete pillars, had a reverberation time of 45 seconds, creating a natural acoustic effect of great warmth and beauty. This space continued to be used by musicians, including Stuart Dempster, and they dubbed the place the Cistern Chapel. Pauline had another deep listening experience in a cistern in Cologne when visiting Germany. Between these experiences, the creation of the album, and the workshops she was starting to teach, she came up with a whole suite of practices and teachings that came to be called Deep Listening. The term itself had started as a pun as they emerged from the ladder that had taken them into the cistern. Pauline describes Deep Listening as, "an aesthetic based upon principles of improvisation, electronic music, ritual, teaching and meditation. This aesthetic is designed to inspire both trained and untrained performers to practice the art of listening and responding to environmental conditions in solo and ensemble situations." Since her passing, Deep Listening continues to be taught at the Rensselaer Polytechnic Institute under the directorship of Stephanie Loveless. The idea of bouncing a signal off the moon, which amateur radio operators had learned to do as a highly specialized communications technique, was another way of exploring echoes and delays, in combination with technology, in a poetic manner. Pauline first had the idea for the piece when watching the lunar landing in 1969. "I thought that it would be interesting and poetic for people to experience an installation where they could send the sound of their voices to the moon and hear the echo come back to earth. They would be vocal astronauts. My first experience of Echoes From the Moon was in New Lebanon, Maine with Ham Radio Operator Dave Olean. He was one of the first HROs to participate in the Moon Bounce project in the 1970s. He sent Morse Code to the moon and got it back. This project allowed operators to increase the range of their broadcast. I traveled to Maine to work with Dave. He had an array of twenty four Yagi antennae which could be aimed at the moon. The moon is in constant motion and has to be tracked by the moving antenna. The antenna has to be large enough to receive the returning signal from the moon. Conditions are constantly changing - sometimes the signal is lost as the moon moves out of range and has to be found again. Sometimes the signal going to the moon gets lost in galactic noise. I sent my first 'hello' to the moon from Dave's studio in 1987. I stepped on a foot switch to change the antenna from sending to receiving mode and in 2 and 1/2 seconds heard the return 'hello' from the moon." Though the moon is far more distant than the walls of the Worden cistern, the delay between the radio signal going there and coming back is much shorter. In a vacuum radio waves travel at the speed of light. Earth-Moon-Earth communication, or EME as it is known in ham radio circles, was first proposed in 1940 by W. J. Bray, a communications engineer who worked for Britain's General Post Office. At the time, it was thought that using the moon as a passive communications satellite could be accomplished through the use of radios in the microwave range of the spectrum.
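The two-and-a-half-second echo Oliveros describes follows straight from the geometry. A minimal back-of-the-envelope sketch in Python, assuming the average Earth-Moon distance of about 384,400 km (the real delay varies a little as the moon moves closer and farther along its orbit):

```python
# Round-trip delay for an Earth-Moon-Earth (EME) signal,
# assuming the average Earth-Moon distance.
C_KM_S = 299_792.458        # speed of light, km/s
MOON_DISTANCE_KM = 384_400  # average Earth-Moon distance, km

round_trip_km = 2 * MOON_DISTANCE_KM
delay_s = round_trip_km / C_KM_S

print(f"Round trip: {round_trip_km:,} km")
print(f"Delay: {delay_s:.2f} s")  # ~2.56 s, close to the 2.5 s Oliveros heard
```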
During the forties the Germans were experimenting with different equipment and techniques and realized radar signals could be bounced off the moon. The Germans developed a system known as the Wurzmann and carried out successful moon bounce experiments in 1943. Working in parallel were the American military and a group of researchers led by Hungarian physicist Zoltan Bay. At Fort Monmouth in New Jersey in January of 1946, John H. DeWitt, working with Project Diana, carried out the second successful transmission of radar signals bounced off the moon. Project Diana also marked the birth of radar astronomy, a technique that was used to map the surfaces of the planet Venus and other nearby celestial objects. A month later Zoltan Bay's team also achieved a successful moon bounce. These successful efforts led to the establishment of the Communication Moon Relay Project, also known as Operation Moon Bounce, by the United States Navy. At the time there were no artificial communication satellites. The Navy was able to use the moon as a link for the practical purpose of sending radio teletype between the base at Pearl Harbor in Hawaii and the headquarters in Washington, D.C. This offered a vast improvement over HF communications, which depended on the cooperation of the ionospheric conditions affecting propagation. When artificial communication satellites started being launched into orbit, using the moon to communicate between distant points was no longer necessary. Dedicated military satellites had an extra layer of security on the channels they operated on. Yet for amateur radio operators the allure of the moon was just beginning, and hams started using it in the 1960s to talk to each other. It became one of Bob Heil's favorite activities. In the early days of EME hams used slow-speed CW (Morse Code) and large arrays of antennas, with their transmitters amplified to powers of 1 kilowatt or more. Moonbounce is typically done in the VHF, UHF and GHz ranges of the radio spectrum. These have proven to be more practical and efficient than the shortwave portions of the spectrum. New modulation methods have also given hams a continuing advantage in using EME to make contacts with each other. It is now possible using digital modes to bounce a signal off the moon with a setup that is much less expensive than the large dishes and amounts of power required when this aspect of the hobby was just getting started.
"For instance, an 80W 70 cm (432 MHz) setup using about a 12-15 dBi Yagi works well for EME Moonbounce communication using digital modes like the JT65," writes Basu Bhattacharya, VU2NSB, a ham and moonbouncer located in New Delhi, India. On the way to the moon and back, the radio path totals roughly 480,000 miles, and the signals are affected by a number of different factors. The Doppler shift caused by the motion of the moon in relation to us surface dwellers is an important factor for making EME contacts. It is also something that affected the sound of Pauline's music when it got bounced off the lunar surface. "The sound shifted slightly downward in pitch… like the whistle of a train as it rushes past," said Pauline of her performance. "I played a duo with the moon using a tin whistle, accordion and conch shell. I am indebted to Scott Gresham-Lancaster who located Dave Olean for me in 1986 and helped to determine the technology necessary to perform Echoes From the Moon. Ten years later Scott located all the Ham Radio Operators for the performance in Hayward, California which took place during the lunar eclipse September 23, 1996. Following is the description of that performance: The lunar eclipse from the Hayward Amphitheater was gorgeous. The night was clear and she rose above the trees an orange mistiness. As she climbed the sky the bright sliver emerged slowly from the black shadow - crystal clear. The moon was performing well for all to see. Now we were ready to sound the moon. "The set up for Echoes From the Moon involved Mark Gummer - a Ham Radio Operator in Syracuse New York. Mark was standing by with a 48 foot dish in his back yard. I sent sounds from my microphone via telephone line in Hayward California to Mark and he keyed them to the moon with his Ham Radio rig and dish and then he returned the echo from the moon. The return came in 2 & 1/2 seconds. Scott Gresham-Lancaster was the engineer and organized all. When the echo of each sound I made returned to the audience in the Hayward University Amphitheater they cheered. Later in the evening Scott set up the installation so that people could queue up to talk to the moon using a telephone. There was a long line of people of all ages from the audience who participated. People seemed to get a big kick out of hearing their voices return - processed by the moon. There is a slight Doppler shift on the echo because of the motion of both earth and moon. This performance marked the premiere of the installation - Echoes From the Moon as I originally intended. The set up for the installation involved Don Roberts - Ham Radio Operator near Seattle and Mike Cousins at Stanford Research Institute in Palo Alto California. The dish at SRI is 150 feet in diameter and was used to receive the echoes after Don keyed them to the moon. With these set ups it was only possible to send short phrases of 3-4 seconds. The goal for the next installations would be to have continuous feeds for sending and receiving so that it would be possible to play with the moon as a delay line." It's a setup that could work for other musicians who want to realize Oliveros's lunar delay system again. Or it could be modified to create new works. The thrill of hearing a sound or signal come back from the moon remains, and if creative individuals get together to explore what can be done with music and technology, new vistas of exploration will open up. .:. .:. .:. Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
RE/Sources:
http://www.kunstradio.at/VR_TON/texte/4.html
https://muse.jhu.edu/article/810823

Furniture Music

Over the course of the 20th century a music concerned with various aspects of space and spatialization began to take shape. It was a music with its roots in both the aether and the living room, this latter because of the influence of Erik Satie. Satie was to have many influences on musical developments that came after him. One stream was the noisy yet minimalist vein that came from the influence of his piece Vexations. The other was as the spiritual godfather of ambient, descending from his conceptions of Furniture Music. This latter is what concerns us here. In French the term is musique d'ameublement, a phrase he coined in 1917 that is generally taken to mean background music. Its literal translation is furnishing music, though in English it has been standard to call it furniture music. It was a breakthrough idea in western music, as the music itself was to be a part of the room, a sonic background to furnish the space and not intended as something that needed to be directly focused on. Many of Satie's pieces can be experienced as furniture music, but he only gave the name to five short pieces. The names are often indicative of how the music relates to a specific space. Satie had a notion of music that could "mingle with the sound of the knives and forks at dinner." His first set of furniture pieces gave that notion a form. The first set of furniture music he wrote has names like "Tapisserie en fer forgé – pour l'arrivée des invités (grande réception) – À jouer dans un vestibule – Mouvement: Très riche (Tapestry in forged iron – for the arrival of the guests (grand reception) – to be played in a vestibule – Movement: Very rich)" and "Carrelage phonique – Peut se jouer à un lunch ou à un contrat de mariage – Mouvement: Ordinaire (Phonic tiling – Can be played during a lunch or civil marriage – Movement: Ordinary)" The second set was composed as intermission music for a comedy by Max Jacob that has since been lost. As intermission music, the idea of a background ambience to fill the space is again asserted. Not much else was done with the furniture music and it remained largely unknown to the public except for being mentioned in a few biographies of the composer. In the 1960s some facsimiles of his scores appeared in the then new biographies coming out on Satie, with publication of the scores following in the 70s. In America Satie's ideas and music found a champion in John Cage. Cage was stimulated by the idea of furniture music and it inspired his own experiments and theories for a minimalist background music. Furniture music became a nucleus around which the minimalist and avant-garde composers rallied, with its emphasis on being played not as the centerpiece, but as something to create a space which people lived and moved inside of. Atmosphere, timbre, texture, long durations, repetition, and drone were all part of the milieu. These tendencies towards texture and drone were picked up by Brian Eno, who built upon the idea of furniture music on his album Discreet Music (discussed in terms of its relation to cybernetics and information theory in Chapter 3). Eno thought of Discreet Music as just what one of the definitions of the word discreet means: unobtrusive and unnoticeable. "The ambient records are similar to paintings," Eno says. "You don't gaze at a painting for hours each day.
But you're aware of its presence, and occasionally you choose to go into it deeply - at a time when you're receptive and want it to affect your mood." The minimalist and ambient aspects of furniture music built on by Cage and Eno became major strands of what was to become Space Music. Another major strand came again from that great force of nature, Karlheinz Stockhausen, and the German electronic musicians who followed his lead starting in the 1960s and 70s.

The Spatialization of Space

At the WDR Stockhausen became a colleague of Robert Beyer in 1953 (see Chapter 5). In a 1928 paper Beyer wrote about "Raummusik" or spatial music. It wasn't about music from the stars, or music to create an atmosphere in a specific space as Satie had done with his furniture pieces, but was focused on the possibilities of having different sound elements localized at specific points within a concert hall or listening space. With the advent of electroacoustic music the spatialization of sound also became about placing certain sounds in specific loudspeakers and moving sounds from one loudspeaker to another within a system. Stockhausen took the idea of spatial music, and the term, and ran with it, weaving composed spatial elements throughout many of his works. And while this spatial element was very dear to Stockhausen, he was also interested in creating music inspired by outer space and the greater cosmos. Following a performance of Hymnen in 1967 he said, "Many listeners have projected that strange new music which they experienced—especially in the realm of electronic music—into extraterrestrial space. Even though they are not familiar with it through human experience, they identify it with the fantastic dream world. Several have commented that my electronic music sounds 'like on a different star,' or 'like in outer space.' Many have said that when hearing this music, they have sensations as if flying at an infinitely high speed, and then again, as if immobile in an immense space. Thus, extreme words are employed to describe such experience, which are not 'objectively' communicable in the sense of an object description, but rather which exist in the subjective fantasy and which are projected into the extraterrestrial space." Many of Stockhausen's pieces of music are concerned with outer space, the constellations, and stars. It was a recurrent theme throughout the compositions he wrote in the 1970s, and he spiraled back to space and the stars again and again throughout his creative life. As such, a few of the relevant pieces will be explored here and others will be examined in more depth in their own sections of this chapter. Sternklang is a piece of music that pulls together Stockhausen's interest in combinatorial systems (Glass Bead Games), spatial music, and intuitive music, among other things. He described it as "park music", to be performed outdoors at night by 21 singers and/or instrumentalists divided into five groups, at widely separated locations. In a park at night the sky is open to all who want to receive the light and blessing of the stars, of those things coming into being. In the score Stockhausen says simply that the music is sacred and that it is best performed in the warmth of summer when the moon is full. Stockhausen says of the piece, "STERNKLANG is music for concentrated listening in meditation, for the sinking of the individual into the cosmic whole".
The music itself bears many similarities to Stimmung, in that overtone singing is done by the vocalists based on various combinations of vowel phonetics. The instrumentalists are also required to create overtones and to use synthesizers, sometimes processing their sound through the synth to create the required overtones. The groups are spaced approximately 60 meters apart from each other, creating spatial effects for listeners who are wandering around the park, stopping here and there to listen to the different ways the music sounds in separate but overlapping spaces. Loudspeakers amplify the different groups, and each group is supposed to be situated so that it can hear at least one or two other groups. These separate groups of players perform independently of one another, but they also synchronize together at ten different times during the performance. The synchronization is done through the work of the torch-bearers and sound-runners. They run from one group to another, the torch-bearer lighting the way, the sound-runner giving a musical "model" to the other groups. In the center of the park a percussionist synchronizes the musicians to a common tempo. This complex work has an equally complex score, made up of a text illustrating the concept, a Formscheme, five pages each with six of the Models to be played in a variety of combinations, ten pages with ten Special Models, and a page of Constellations. All this material is given to the different groups of musicians, who use parts of it for the structure according to the instructions. From this material many completely different performances of Sternklang could be given, due to the combinatorial aspect. Yet they would all sound consistent as Sternklang. The score is a vessel into which the musical energies are poured, and though the contents may differ between performances, the vessel itself lends its form. The Special Models are the only times when the five groups are synchronized via the tam-tam, yet even within these there are part-patterns that may differ. Mixed in at different points of the music are the Constellations. These points are based on actual constellation shapes interpreted as relative pitch and loudness. Meanwhile, the thirty different Models give instruction for how to sing the pitch material using the phonetic vowels from the constellation names so as to accentuate the overtones. Just as in Stimmung, the names are considered to be ones full of magical power. In all the overtones played there is a unique oscillation, created by the mouths of the vocalists, while the synthesizer players use timbre filters and the trombone players use mutes. The five different groups can be conceived as their own constellations, at times vibrating with their own rhythms, songs, tones. At other times they come into synchronized harmony. Drifting about these constellations are the human listeners, being exposed at different points to the intense and pure musical light of the star sounds. He followed up Sternklang with Ylem, Tierkreis and Sirius. When Licht took over his compositional life starting in 1977 he managed to continue working with themes of space, and wove dizzying amounts of spatialization and sound projection techniques into the various pieces that make up his magnum opus. Of these the pieces Weltraum (Outer Space, 1991–92/1994), Komet (Comet, 1994/1999), and Lichter—Wasser (Lights—Waters, 1998–99) are especially significant.
Michaelion (1997) is likewise discussed later, in the section on shortwave radio. In the Klang cycle, his final series of works, he continued to be inspired by the stars. The electronic chamber piece Cosmic Pulses sees him completely leave the orbit of previous Earth musics in his spatialized exploration of outer space. Stockhausen's influence fed more or less directly into the Kosmische genre of music in Germany starting in the late 1960s.

Other Planes of There

If you've ever listened to the music of Sun Ra you know that space is the place. To say that Sun Ra was interested in space music from a cosmic perspective is an understatement. The man from Saturn himself said "When I say space music, I'm dealing with the void, because that is of space too... So I leave the word space open, like space is supposed to be." In the 1930s when Herman Blount was taking a training course to become a teacher in Huntsville, Alabama, he received some visitors who established his true calling. He was to be a teacher, but not a school teacher. These visitors, Blount said, were aliens, who had antennas that grew above their eyes and on their ears, perhaps attuned to the wavelengths of cosmic music. They transported Sonny Blount to the planet Saturn, and this transportation caused him to metamorphose into Sun Ra. There he was given a set of metaphysical equations that surpassed the trivial knowledge of Earth. At the proper time, these beings told him, when life on Earth was filled with despair, he could set out to teach humanity. The vehicle for his teaching was music, and his message was one of discipline. This experience informed Ra's work for the rest of his life. It changed him on a fundamental level, and from it he continued his quest into music and metaphysics. Sun Ra steeped himself in mystic lore. His birth name came from Black Herman, the stage name of a stage magician, hoodoo practitioner, and seller of patent medicines. His act mixed the illusions of being "buried alive" and other escapes with that of a traveling medicine show catering to African-Americans. Black Herman was the author of Secrets of Magic, Mystery, and Legerdemain, which contained a mythologized biography and a selection of material on sleight of hand, hoodoo folk magic, astrology, lucky numbers, dreams and more. The name Herman itself calls to mind that trickster and communicator Hermes, though its etymology is actually Germanic, from the words harja- "army" and mann- "man". Though Herman Blount changed his name, in many ways he followed in the footsteps of his namesake, and lived a life of magic and mystery. Like Black Herman he created a mythology around his life that became part of his teaching vehicle, just as his music became a vehicle for space travel. Ra's band was not a band. They were a group of "tone scientists". They weren't an orchestra, they were an arkestra, and their music was a way to travel the outerspace ways, and to bring the sounds of the cosmos down here onto Earth. The way Ra's compositions swing showed that they weren't tied to the gravity well of our planet, but orbited around vast interplanetary spheres. For all the free-wheeling moments in parts of Sun Ra's oeuvre, it all came from his total discipline. His music sounds wild, out there, but it came from his total devotion to music. He abstained from alcohol, and encouraged his band members to do the same. He abstained from sex, drugs, and even sleep. The rock and roll ethos was his antithesis.
For him there was a sanctity to his calling as a musician, tied up as it was with also being a messenger from another world. His band practiced for hours and hours: in the middle of the night when Ra couldn't sleep, late in the afternoon when he was jolted out of a brief catnap, in the morning when they no longer remembered what day it was. They were always playing music. It was always in their minds and they were ready to swing. Sun Ra and his Arkestra were so prolific it is beyond the scope of this section to go into the vast penumbra that is his legacy and work. The theme of space reverberates throughout his records. So do the sounds of the space age. Sun Ra was one of the first jazz musicians, if not the first, to get into the synthesizer game, bringing the sound of the Minimoog into his already swirling cosmic palette. Sun Ra believed it was important for black musicians to get into the world of electronic music, to start exploring the experimental sounds of the space age made possible by technology. For the makers of synthesizers, jazz was a genre where they had yet to establish a presence. All that changed between 1969 and 1970 when Sun Ra was invited to visit the Moog workshop in Trumansburg, NY. As one of the great jazz pianists Sun Ra had already availed himself of the electric sounds that became available in the 50s and 60s. These included electric piano, electric celeste, Hammond organ, and the Clavioline. The Clavioline was memorably used on Joe Meek's production of Telstar by the Tornados. It was a vacuum tube based monophonic keyboard that gave an otherworldly vibe to many songs. Sun Ra loved the expanded timbre palette these keyboard instruments gave his voracious appetite for sound, and he was always looking for what else might come down the line; the Moog was his ticket into the seventies. Sun Ra had met Robert Moog when a journalist at the jazz rag Downbeat arranged for Sun Ra to visit the Moog factory. Sun Ra got a chance to get his expert hands on the Minimoog, which was still in pre-production. The great synthesizer maker even gave the great Ra a prototype to take back with him. At the time the portable synthesizer was just an idea. Synths were messy affairs taking up whole rooms and patched with huge amounts of cables. While the results of these instruments switched on many to their well-tempered sounds, as a touring instrument the Moog was untested, and its little brother the Minimoog was still in infancy. Sun Ra not only tested its possibilities but took it out into the greater solar system on a scouting mission that brought space sounds into Sun Ra's live and recorded sessions. His track Space Probe, for example, was an extended solo with the Minimoog. As new keyboards came onto the market they would often find their way to Sun Ra, who continued to include such stalwarts as the Yamaha DX7 in his interplanetary musical concepts.

From Kosmische to Hearts of Space

Kosmische can be considered a synonym for Krautrock. The term was in use in Germany before the Krautrock label got thrown onto bands like Can (whose members Holger Czukay and Irmin Schmidt were students of Stockhausen), Ash Ra Tempel, Faust and Guru Guru by the music press in England. Krautrock itself can be seen as a highly psychedelic vision of rock music with a heavy emphasis on synthesizers and propulsive motorik rhythms, dressed with jazz improvisations and avant-garde tape editing techniques.
It owed less to blues music than rock's American and English counterparts did, yet it was indebted to the scenes of free improvisation happening in art music and jazz circles. A lot of it can be cosmic and spacey, but the extended synthesizer escapades of Popol Vuh, Amon Düül II, and especially Tangerine Dream and Klaus Schulze all went on to put their stamp on the emergent genre of ambient space music that would be epitomized in the set lists of the radio show Hearts of Space. On Tangerine Dream's 1971 album Alpha Centauri the music was described in the liner notes as "kosmische Musik". Julian Cope noted that the album was like Pink Floyd's A Saucerful of Secrets, but minus the rock. The term spread further when their record label, Ohr, put out a compilation with the name as its title. These Germans had found inspiration in the range of sounds now available to them with the Moog Modular and with the EMS VCS3. They were also eager to separate their sound from their troubled nation's past, and focusing on outer space, at the height of the space race and optimism about humanity's exploration of the cosmos, was one solution. Space rock continued as one vein of this music, while another, more ambient strain emerged from others who found inspiration in the star sounds of Alpha Centauri. Klaus Schulze was another heavy influence on this emergent sound. Before he began his prolific solo career he'd already played with Tangerine Dream on their first album Electronic Meditation, after which he left to form Ash Ra Tempel, made one album with them and departed. He also played sessions with the acid-soaked Cosmic Jokers. Once he went solo he truly flourished as an artist. His first solo album Irrlicht came out in 1972 and featured a modified electric organ as the main sound source, along with samples of classical symphonic music played backwards and run through a messed-up amplifier to transform the sounds, which he mixed to tape for a three-movement symphony. Cyborg was his next album, and featured a similar setup, while Timewind from 1975 saw his first use of a sequencer, which became a staple of his process. The pieces here are sidelong masterpieces that make it easy to lose a sense of time while listening. It was in these same years that Stephen Hill founded his radio show Music from the Hearts of Space, originally on KPFA. He used the pseudonym Timmotheo, and when his co-host Anna Turner joined him, she used the on-air alias Annamystic. In its original incarnation it was a three-hour-long late-night excursion into all things "space music". Hill had been an architect by training, and he was interested in all kinds of contemplative music, and also music that could fill up a space. The kosmische sounds coming out of Germany certainly fit the bill. The program grew to fill its own niche and encompassed a wide range of ambient, electronic, world, new age, classical and experimental music. Space music can act as an isolation chamber when skillfully constructed, and excels over an expanded range of time. Steve Roach and Robert Rich both got started in the late seventies, with albums coming out in the early eighties. Their complementary styles were perfect for the further growth of ambient space music, and the two artists became closely associated with the milieu of music presented on Hearts of Space. At the age of twenty, when Steve Roach wasn't practicing to up his game as a motocross racer, he was listening to the sounds of Vangelis, Klaus Schulze, and Tangerine Dream.
After he suffered a bike crash that led him into a near-death experience, where he heard "the most intensely beautiful music you could ever imagine," he reorganized his life and dedicated it to recreating the music he had heard. Out of this experience came his landmark and timeless album "Structures from Silence." Roach has said that others who have had near-death experiences tell him that they heard similar music. He had acquired his first synthesizer about six years before the accident, in 1978, and taught himself to play, inspired by the music he'd been listening to. In 1982 his first album, Now, came out. Then the bike crash. From that time on his life has been devoted to bringing people music that communicates a spiritual perception of space, time, and flow, at once in touch with the landscapes of the earth and with the vast expanse of silence within the void. The three long tracks on Structures from Silence encapsulate the listener within a web of harmonic waves. From that release onwards Roach has been relentless in his mission to bring a music of space, stillness, and quiet noise into the hearts and heads of his many listeners. The music of Roach became a staple on Hearts of Space, and a bridge between the adjacent worlds of ambient and new age. Tribal soundworlds were also explored when Roach visited Australia. He fell in love with the desert outback and the didgeridoo. He learned to play the instrument and started incorporating it into his music. Roach was also studying the Aboriginal Dreamtime, and going on walkabouts in the deserts of his native California. These influences came to the fore on his 1988 classic Dreamtime Return. The desert became a spiritual home for Roach, and he eventually moved to Arizona, where the wide open landscape continues to be a source of inspiration. Out of these experiences, and collaborations with many artists, Roach helped to create the tribal ambient and tribal techno subgenres. Another artist in a similar vein, who has also collaborated with Roach, is Robert Rich, whose music is another frequent touchstone on Hearts of Space playlists. They also began their careers around the same time, with Rich releasing his first album Sunyata in 1982. Like Roach, his signature soundworlds have helped to further define an organic and at times tribal strain of ambient. Rich also goes in for propulsive, beat-centered trance rhythms, with extensive explorations of alternate tuning systems, recalling the works of Terry Riley and Steve Reich, abetted with the help of a sequencer. Robert Rich also has a penchant for all-night concerts, just as Riley did with his longform raga-inspired minimalism, but Rich took his performances in a different direction, with quieter sounds. He used his sleep concerts as a vehicle for exploring the nature of sleep, consciousness and dreams. Hearts of Space founder Stephen Hill notes, "What's now being called Ambient music is the latest chapter in the contemplative music experience. Electronic instruments have created new expressive possibilities, but the coordinates of that expression remain the same. Space-creating sound is the medium. Moving, significant music is the goal." Radio remains a perfect medium for presenting this type of music, and Hill and Turner would do long blocks with no voice interruptions as DJs until the end of each hour, when they would announce what had been heard. This allowed the listeners to sink into the experience without being brought out of their contemplative reverie.
In 1983, after ten years on KPFA, Hearts of Space started to be syndicated on 35 National Public Radio stations around the United States via the Public Radio Satellite System. It continued into the era of net streaming, and in 2009 it was still on two hundred public radio stations. It moved into orbit with Sirius XM for a time. On November 12, 2021, it reached its latest milestone, 1,300 installments.

Earth Station One: John Shepherd Beamforms to Space
Other shows mining the same vein have also achieved great success on the public radio circuit, one of the most popular being Echoes, created in 1989 and hosted by John Diliberto. Earth Station One, created by John Shepherd, was the most innovative, as Shepherd not only played classic space music but attempted to broadcast it to the extraterrestrial lifeforms he believes live in outer space. Something must have been in the air in the early seventies, if not in the acid, as John Shepherd embarked on his own quest to transmit space music into space beginning at age 21 in 1971. He'd been listening to radio shows about the UFO phenomenon, and was an avid electronics hobbyist who had begun tinkering in his teens, building equipment on his own out of surplus and whatever parts he could scrounge. He was also a science fiction buff, and wanted to be able to build the kind of machines he saw on TV and in film. As he played around with parts, the idea of building something that could communicate with aliens came to him. Between some ARRL manuals, an electronics 101 course he took in high school, and what he taught himself, he started putting together a station at his grandparents' home in Michigan. He had a friend in Traverse City who was as into music as he was into electronics and SF films. They would listen to his friend's collection of over 4,000 albums for eight- and ten-hour shifts. In his first attempts at communicating with extraterrestrials he used binary tone pulses on a 150-watt transmitter. Then he upped his game as Project STRAT (Special Telemetry Research and Tracking) was born out of the stew of influences affecting him and his destiny. Why not transmit music? He put together other setups, and in time had a 60,000 volt transmitter to beam shows that featured Can, Kraftwerk, Cluster, Neu! and other bands from the German kosmische scene into outer space, outside of earth and lunar orbit, out into the void. His shows also featured different kinds of world music, minimalist composers, and sometimes jazz. "I felt that music was a sort of universal language and would best suit the open form of communication. It doesn't need much in the way of translating and most of the music I selected was of the instrumental variety. I felt the more genuine forms of music offered something meaningful. It has to be something that inspires the mind and imagination. That's when it's special," he said. His eccentric passion was entirely funded by odd jobs, and he kept at his quest to communicate with higher intelligences using technology and art for twenty-seven years. Without much in the way of financial help for his pet project, he finally had to shut down the station in 1998. Its legacy, however, lives on, and with the synthesis of electromagnetic communications and music, perhaps others will step in to bring the space music of Earth to those ear-perked aliens, listening, out there, somewhere in orbit. Ambient remains a popular genre for listeners and musicians, and it is my belief that these related forms of contemplative sound will have spaces on the spectrum for decades to come, that the music of the spheres will continue to reverberate across airwaves and ionosphere, and even out into the solar system and beyond. .:. .:. .:. Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
References / RE/Sources: Notes for A Brief History of Space Music
The 'furniture music' of rock star Brian Eno by David Sterritt, The Christian Science Monitor, May 3, 1984 https://www.csmonitor.com/1984/0503/050321.html
Electronic and Experimental Music: Pioneers in Technology and Composition, Thomas B. Holmes, Routledge Music/Songbooks, 2002
https://www.sonoloco.com/rev/stockhausen/18.html
http://stockhausenspace.blogspot.com/2014/07/opus-34-sternklang.html
Sun Ra sources: Space is the Place: The Life and Times of Sun Ra by John F. Szwed
https://moogfoundation.org/sun-ra-the-minimoog-by-historian-thom-holmes/
https://reverb.com/news/sun-ras-cosmic-keys
Kosmische Musik and Its Techno-Social Context, Alexander C. Harden, IASPM Journal, ISSN 2079-3871.
https://web.archive.org/web/20110719024656/http://www.intuitivemusic.com/cosmic-jokers-biography
https://www.hos.com/history.html
https://steveroach.com/Info/Info.html
https://robertrich.com/about/
https://steveroach.com/Music/discography.php?albumID=585
https://thequietus.com/articles/29090-john-was-trying-to-contact-aliens-radio
https://pitchfork.com/thepitch/meet-the-man-who-used-kraftwerk-fela-kuti-and-other-fascinating-music-to-try-to-lure-aliens-to-earth/

At the same time Reed Ghazala was discovering circuit bending, another Midwesterner was getting involved in the creation of the sound systems that would change the way live rock and roll music was performed around the country and around the world. Bob Heil is an exemplar of the creative fusions that can happen when an ear turned on to the power of music also develops the knack for technical innovation. Born on October 5, 1940, he was an avid accordion player by age ten. At age thirteen his parents gave him a Hammond B3 organ. This gift gave him a life in music, and in turn radio, that kept him busy with creative fun and innovation for close to seven decades. Heil quickly mastered the Hammond and at an early age got a job playing organ in a restaurant where he made fifteen bucks every weekend. Two years later, with even more chops, he became the organist at the Fox Theater in St. Louis. Built in 1929 by William Fox, the movie palace was designed to be a showcase for the films of the Fox Film Corporation. Throughout the 1960s it was one of the leading movie theaters in St. Louis and has now been given another life as a performing arts venue. The organ at the Fox Theater was massive and had over 4000 pipes. Heil had to tune and voice the pipes. This job gave Heil hands-on practice in concentrated listening. He had to go in, learn all the harmonics for the pipes and be able to dissect what he was hearing. Heil, K9EID, has left his mark in both music and amateur radio. The passion for radio also came to him young, when he got his ticket as an amateur radio operator at the age of fifteen in 1956. The hobby was quick to become an obsession. He plugged the earnings from his organist jobs into radio gear and began a lifetime of tinkering and working with audio and radio circuits. At the time there was excellent propagation on the amateur radio bands, and the six meter band, known to aficionados as "the magic band", was hopping with contacts both close and far. Anyone who wanted to get on that portion of the spectrum to make contacts and hear distant stations was in luck. One night while Heil was tuning around the six meter band he heard something horrid and strange. It was an operator talking on single sideband, not at all common at that time in the six meter portion.
On another evening Heil heard him again and they got to talking. Soon they started meeting up on the radio to talk every night. They became fast friends on the air, and one day this new friend, Larry Burrell, K0DGE, asked him to come see him in person. Larry happened to be chief engineer at KMOX. Heil was blown away when Burrell showed him around the studios and control rooms of the mighty Midwestern AM station. Heil wanted to get on 6 meter single sideband just like his older friend and asked him if he would build him a unit. His friend told him no, he wouldn't build one for him, but he would help Heil build one for himself. This proved to be a far greater gift than being given a radio. As Burrell elmered Heil and helped him build his own rig to do single sideband on six meters, it sparked Heil's love for building. After putting together a transverter for 6 meter single sideband, he built one for 2 meter single sideband.

Organs and Antennas

At school Heil wasn't doing so hot. Music and radio were his passions, and he continued to fund his habit for radio from the work he got as a musician. Yet somehow he managed to scrape by, and with his parents' encouragement, got into another beloved aspect of the hobby: setting up antennas. These antennas would prove important later in his career as a maker of high-fidelity microphones and other audio equipment for musicians and radio operators. One antenna he put up was a Telrex 6 meter spiral array. Another was a 75 meter dipole phased array, also made by Telrex. Playing around with these antennas Heil learned how to take them in and out of phase using coaxial cable. Antenna phasing is used by hams and shortwave radio stations for beamforming, a technique that focuses a wireless signal towards a specific direction and receiving device, rather than having the signal spread out in all directions as it typically does from conventional broadcast antennas. Phased arrays are especially desirable on the lower HF bands where conventional beams are not feasible. In the VHF and UHF ranges of the radio spectrum most hams use Yagi type antennas for beamforming. A Yagi is different from a phased array in that only one element is driven by the transceiver. The rest of the antenna elements are parasitic, in that they re-radiate the signal driven by the radio at different phases. However, when an array is truly phased, all the elements are driven directly by the radio at different phases. Having a phased array allowed Heil to send and receive signals in specific directions so he could work different amateur radio stations in North America and around the globe, going east, south, west or north. One day Bob Heil got a call from Robert Drake, founder of the R. L. Drake radio company. Founded in 1943, Drake's company made high- and low-pass filters for government and amateur radio operators, and after WWII he started making equipment for hams. Robert Drake was interested in one of the radios Heil had built, a kilowatt transmitter for 2 meter SSB. As Heil recalled Drake telling him, "'We have a little meeting here at our club and I would love for you to come here and spend a day with us. It's actually a couple of days. We do it once a year in the Biltmore Hotel downtown. We cleared out all the furniture on one of the floors and we'll have Art Collins and the guys in one room. You have Carl Mosley and his antennas in another. We'll have Wes Schum from Central Electronics. We'll have Bill Halligan of Halligan,' and on and on. He names his list; I'm going, 'Whoa.
What do you want me to do Sir?' 'We want you to come and tell us how you built this station.'" This gathering was the Dayton Hamvention, and it quickly grew into one of the two largest annual gatherings for amateur radio operators and manufacturers in the world. Heil came and gave his presentation, and it was well received by the manufacturers and other hams in attendance. Part of the very purpose of the amateur radio service as defined by the FCC is to advance the state of the radio art. It is this experimental aspect of the radio hobby that has long been a beacon for some of its brightest stars. After Heil's presentation he got to talking with a British man who was there with his J. Beam Company. The man was looking for someone like Heil to do some experiments with an antenna they had built, and they asked him if he would like to carry out the work. He was more than willing, so they sent him what any ham would be happy to play with: a 128-element antenna array built for the 2 meter band. Once the massive array was shipped to him, a contractor and fellow ham, K9EBA, helped him put up such a beast of an antenna. He had another friend who worked for Motorola who also helped. The fact that his parents let him put up a fifty-foot-wide antenna in the vacant lot behind their house was another blessing working in his favor. This was the antenna Heil used to get started in 2 meter moonbounce using VHF SSB, but before he got into that he first got another job, this time at the Holiday Inn in St. Louis, where he built them a pipe organ for their four-star restaurant. It was extremely rare to have a pipe organ in a restaurant, and this helped the Midwest spot become a destination for travelers and organ fans on both sides of the continent. In building the organ Heil again had the support of mentors, this time from Martin Wick of the Wick Pipe Organ Company, whom he'd met through one of his music teachers. He became close friends with Wick and would stop at his plant in Highland, Illinois on his way from his hometown of Marissa before going to play at the Holiday Inn in St. Louis. Wick had shown him one of the little theater organs he'd installed in a private home, and that gave Heil the idea of building a similar instrument for the restaurant at the hotel. Once approval for the plan was in place he would go up to Highland every day to work on putting it together under the guidance of Wick and his employees. It took him about a year and a half to build the organ, with five ranks of pipes, a blower, reservoirs, relays and a large console. Ever curious, Heil wanted to learn how to voice and tune the organ himself just to see if he could do it, and with a bit of guidance from his mentors, he added this skill to his chest of valuable knowledge. After he built the organ he got paid to play it six nights a week, and when he looked over the rack as he played he saw the sign for Mosley Electronics. Fate had conspired to place him just across the street from the Mosley antenna plant. Mosley Electronics was the brainchild of Carl Mosley, W0FQY, later K0AXS, a ham who got his start in the world of radio back in 1918 when spark gap transmitters electrified the air with their crackling sound. In the late 1930s and early 1940s Mosley began making equipment, starting with the 3/4" tube socket that was standard gear for most amateur radio operators at the time.
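As an aside on the antenna phasing described earlier, here is a minimal sketch of the steering arithmetic for a hypothetical two-element phased array. The frequency, element spacing, and coax velocity factor are illustrative assumptions, not the specifications of Heil's Telrex arrays:

```python
# Steering arithmetic for a hypothetical two-element phased array.
# All numbers are illustrative; real arrays need modelling and matching.
import math

C = 299_792_458            # speed of light, m/s
FREQ = 3.8e6               # 75 meter band, Hz (illustrative)
SPACING = 20.0             # element spacing in metres (illustrative)
VELOCITY_FACTOR = 0.66     # typical solid-dielectric coax (assumption)

wavelength = C / FREQ

def steering(angle_deg: float):
    """Phase offset and extra coax length to tilt the beam toward
    angle_deg (measured from broadside) for the spacing above."""
    path_diff = SPACING * math.sin(math.radians(angle_deg))  # metres of free space
    phase_deg = 360.0 * path_diff / wavelength               # electrical degrees
    coax_extra = path_diff * VELOCITY_FACTOR                 # physical cable, metres
    return phase_deg, coax_extra

for angle in (0, 30, 90):
    phase, coax = steering(angle)
    print(f"{angle:>3} deg off broadside: {phase:6.1f} deg phase, "
          f"{coax:5.2f} m extra coax to one element")
```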
He was working out of his basement when he started this operation, but soon he had so many orders that he had to grow his business, hire employees, and bring on additional help. As his business grew, Mosley entered the market for television accessories as the TV era dawned in the 1950s, building feed-through insulators, wall outlets and plugs. In 1951 he got into the antenna game with his famous “Vest Pocket” design for his fellow hams. The development of the design led from monoband to multi-band, and from there to the tri-band Vest Pocket utilizing one feedline. This innovation made the antenna a mainstay, made antennas in general the centerpiece of his business, and led to the building of the factory in St. Louis. Mosley also made military and industrial antennas, and it was these innovations that led to the creation of the WWV antenna for transmitting time signals. In 1955 his company created the Trap-Master TA-33 amateur tri-band beam, setting the standard in the field.

From Marissa to the Moon

St. Louis was also the home of McDonnell Aircraft. In 1959 they were busy building the Mercury capsule for NASA. Once a month seven astronauts from the agency came to train at McDonnell, and they stayed at the Holiday Inn. They listened to Heil play the organ, and he got to be on friendly terms with the space cadets. One of them was a man named Alan Shepard, whose father had also been an organ player, and he was intrigued that the hotel had put such a custom-built instrument inside the restaurant. As Heil and Shepard got to know each other, Heil told him about his ham radio hobby. He showed Shepard some pictures of the huge VHF array he had put up. Heil recalls their conversation: "‘Wait a minute, you have this thing working?’ I said, ‘Yes.’ ‘Can we borrow it?’ I said, ‘Well, of course.’ ‘Ah,’ he said, ‘This would be great.’ I said, ‘Well, you need to take it down?’ ‘No, no, no,’ he said, ‘You have a phone patch?’ I said, ‘Yes Sir.’ He said, ‘Here's what we’re going to do. We're going to send you a signal from Houston in the telephone line. You patch it into your transmitter, into this 128 element. You point that sucker up to the moon and what we want to know is what kind of delay time [it has].’” Mathematically NASA had already calculated, without computers, what the delay time would be in bouncing a radio signal off the moon. Yet with Heil’s array they would be able to test how accurate their calculations were. Heil was around 20 or 21 at the time, and his hobby had brought him into the big leagues just a few years into the space race. “They would send little signals, just little shots, and they would listen for it. They had, of course, fantastic . . . I didn't know exactly what but probably 50 foot dishes, who knows, but it was NASA. That was just such a big deal for me,” Heil said of the time. For four hours a night, six nights a week, he would play the organ for his job, and the rest of the time he spent building amateur radio gear, doing moonbounce experiments on VHF SSB with NASA, and making contacts on the radio. Around this time Joe Hall helped him get one of the transverters he had built onto the market, and it was the first of its kind to be sold commercially. All this, and Heil had never gone to college, having barely graduated high school. “Amateur radio was my college professor,” he is fond of saying.

Heil Sound System

In 1966 Heil was inspired to open up his own Hammond organ and music store in his hometown of Marissa, Illinois. 
He dubbed it Ye Old Music Shoppe and it was destined to become the rock and roll capital of the world. One day a high school kid came in with a guitar amplifier and asked Heil if he’d be able to fix it. Ever curious, he took a look inside and saw the tubes and other components were similar to the ham radio gear he tinkered on. With his trusty soldering iron he fixed it up for the guy. This happened to be one of the guitarists who was later a member of REO Speedwagon. He and other rock and rollers started patronizing Heil’s shop, and he began to develop a reputation with the rock music crowd, even though it was a genre he knew nothing about himself. His shop started renting Hammond B3 organs to musicians and bands who were on tour in the area, often playing at the Kiel Auditorium. People like Janis Joplin, Jimi Hendrix and Ted Nugent would come in, and after they rented the organ, they’d ask him about the PA system in the venue. Heil didn’t know much about the PAs in the concert halls; they weren’t of much interest to him. He was interested in the sound systems for his organs. But he knew the little bitty columns of speakers where the bands played tended to sound horrible. Fate intervened in his life once again at this juncture, when he went to visit his old friend George Bales, the stage manager at the Fox Theater, in 1968. When he got there he saw a bunch of boxes outside the stage door. George told him the theater was putting in a new set of speakers, and those were the old ones, being thrown out. “Wait a minute. You're throwing those away? Can I have them?” he asked his friend. His friend said, “Sure.” Heil recalls, “The ham radio in me kicked in, I went and rented a truck.” Hams have always been great scavengers of material and parts. Where one person might see old electronic junk, a ham sees possibilities. Heil got them and took them to a vacant building he had in Marissa and started experimenting. The speakers were Altec 4s and they were huge, about 10 feet wide, 8 feet deep and 8 feet tall, and he had four of them. He put some radio horns in them, and got some JBL drivers and some McIntosh amplifiers. Next he needed a mixer and got an Altec. From all of this gear he put together a great-sounding PA. Unknown to him, nobody else in the music business was putting together sound systems in this manner. A manager for one of the venues got wind of the PA and asked him if they could use it when they brought in different acts from Nashville, and Heil said yes. To Heil it was just a big hi-fi system, but the acts and the venue manager went zonkos over the sound it produced. Dolly Parton was among the first musicians who got to use the system. At that point people around St. Louis started to talk about Heil’s achievement. Another manager came up to him at a show and asked him if he would take the PA on tour with the band the guy worked for. Heil explained he’d never been on a tour, but that he had a couple guys who liked rock music working for him, and that he’d get them and the gear rounded up to do these shows in Ohio. Two days into the gig he found out the lead guitarist for the band was a ham radio operator. His call sign was WB6ACU and his name was Joe Walsh, and the band was the James Gang. Walsh and Heil hit it off and so began a lifelong friendship. The next big jump in the progression of Heil Sound took place on February 2, 1970. The Grateful Dead were scheduled to play at the Fox Theater. 
A good friend of the Grateful Dead, the "Bear," Augustus Owsley Stanley III, was going to run their sound system. Owsley was himself an amateur radio operator, having secured a license during his stint as an electronics specialist for the United States Air Force. While in the service he also picked up his general radiotelephone operator license. His technical background served him well as an audio engineer and as a clandestine LSD chemist who supplied the Dead and their fans with copious amounts of the hallucinogenic drug. It is estimated that between 1965 and 1967 alone Owsley produced no less than 500 grams of LSD, amounting to a little more than five million doses. When he first got started making the stuff, acid wasn’t yet illegal, but it quickly became so, and it didn’t take long for the law to catch up with the man and his operation. With drug charges pending against him, Owsley had been ordered not to leave the state of California. That pesky little detail didn’t stop him from going on the tour though. As Heil recalls, “They were going to do a short little Midwest, East Coast tour and their sound man was on probation out of the state of California. He wasn't supposed to be out of the state, but the drug agents and the FBI they found out that he was going to be on tour so they went to the first job. The first job and they sat and waited till they were finished playing. The group came on to St. Louis the second date. Now there were no cell phones. There was no communication in those days. The group shows up at the Fox at 4 o'clock in the afternoon. There's no PA. There's no Owsley. The group was the Grateful Dead. Well, they call back to their office found out that Owsley was in jail. The PA was confiscated; their group was not going to continue.” George Bales from the Fox called up Heil with this situation on his hands, asking if he still had those speakers he had given him. The Grateful Dead were at the theater without a PA and they needed some help. Bales put Heil on the phone with Jerry Garcia, and the two talked about the equipment Heil had at his disposal, and Garcia got amped. They would be able to pull off the concert in style. “We went up there and we did the show and it was marvelous.” For the gig Heil also brought in a Langevin studio recording console he’d modified to use with the speaker system in a live music setting. He’d had help in rewiring the board from his friend Tomlinson Holman, who was at the time going to school at the University of Illinois. Holman later went on to have his own prestigious career in sound as the creator of the THX theater sound standard. One of the things that made the mixing board innovative was an electronic crossover Heil had built into the console. Heil had some help from some early Deadheads in getting the show together. "My two roadies, Peter Kimble and John Lloyd, knew all the Dead songs — they were big fans. So that night they moved the PA, set it up and mixed the show." Heil had also come up with a trick to deal with the pesky problem of feedback, every stage musician's bane. "We would run the microphones out of phase from the monitors, something that nobody had been doing yet. Since they were out of phase with the microphones and the FOH system, anything that leaked in from the monitors would be canceled out. As a result, we could get these things incredibly loud before they would feed back. That's one of the things that Jerry Garcia really loved." 
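Heil's out-of-phase monitor trick boils down to simple destructive interference. Here is a minimal numerical sketch, not his actual wiring, assuming an idealized case where the monitor spill and the front-of-house spill arrive at the microphone at the same moment; in a real room, delays and reflections make the cancellation only partial.

```python
import numpy as np

# Toy illustration of the out-of-phase trick: monitor spill that is
# polarity-inverted relative to the front-of-house (FOH) spill cancels
# when the two arrive at the microphone together.
sample_rate = 48000
t = np.arange(0, 0.01, 1 / sample_rate)           # 10 ms of audio
vocal = np.sin(2 * np.pi * 440 * t)                # what the singer produces

foh_leak = 0.3 * vocal                             # spill from the mains
monitor_leak_in_phase = 0.3 * vocal                # monitor wired in phase
monitor_leak_out_of_phase = -0.3 * vocal           # monitor wired out of phase

at_mic_in_phase = foh_leak + monitor_leak_in_phase
at_mic_out_of_phase = foh_leak + monitor_leak_out_of_phase

print(np.max(np.abs(at_mic_in_phase)))      # ~0.6: the leaks reinforce each other
print(np.max(np.abs(at_mic_out_of_phase)))  # ~0.0: the leaks cancel, so more gain before feedback
```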
The show was a massive success, and the Grateful Dead asked Heil, his crew, and his sound system to join them on the road. On that night the live sound system for rock and roll was born. “They took us right out of there that night on the rest of the tour. Jerry and I became very good friends. We could be here a long time talking about the things that we did together, the equipment, the technology, that's where I'm at with this. It wasn't so much of the group as it was Jerry and his love for gear and what we could do with different things and help them.” From that point on Heil started receiving more and more requests to do the live sound for touring rock bands. He did the sound for Humble Pie, which is when he became friends with Peter Frampton, and he worked with ZZ Top among many others. Heil's setup had become an instant hit, and would soon become the template for the modern concert touring sound system. He was on tour with Chaka Khan in Chicago when he got a call from The Who in Boston, where they needed his help. He wanted to help them, but didn’t want to leave Chaka Khan stranded, and wasn’t sure how he’d even be able to make it to Boston with the truck of gear. Heil Sound kept all its traveling equipment in a 40-foot semi, the first sound company to do so. The Who suggested he charter a plane from Tiger, an airfreight company. He got a friend with another PA system to cover Chaka Khan, and they drove their semi onto a 707 jet and flew to Boston the next day. Heil’s sound system did what the Who needed it to do and set the standard for playing large arenas and coliseums. The Who used Heil’s system on the rest of the tour, and from this encounter Heil forged a lifelong friendship with Pete Townshend. Townshend later called him to London because he had an idea for Bob: he wanted to know if Heil could build a PA for quadraphonic sound. Once again up for the task, Heil Sound built the system used for the Quadrophenia tour in 1974. As the 1970s progressed, at any one time Heil would have three of his custom PA systems on tour with acts like J. Geils, Jeff Beck, ZZ Top, and others, with a crew of 35 people working to make it all happen. Heil was also responsible for the first use of monitor speakers by musicians in concert, so they could hear themselves playing in these huge venues, and he was the first to build stage monitors that didn’t feed back. All his knowledge in building came from the expertise with electronics he’d developed as a ham radio operator.

The Talk Box

With his buddy Joe Walsh he also built a talk box for guitar that could withstand the rigors of the stage. The talk box is an effects unit that shapes the frequency content of a sound, usually a guitar's, by letting the player's voice and mouth modify the sound of the instrument. The original talk box had been invented by musician, band leader, and amateur radio operator Alvino Rey, W6UK, back in 1939. Rey got the idea that he could wire a carbon throat microphone in such a way as to modulate his electric steel guitar. Carbon throat mics had in turn originated in military aviation, so pilots could communicate even in extremely windy and noisy conditions. Rey put one on the throat of his wife, Luise King, who was a singer in The King Sisters. She would stand behind a curtain and mouth the words alongside the guitar to modify its sound. It was a move that added unique coloration and novelty to his performances. 
Some producers at a studio in Nashville had shown the trick to Joe Walsh, having given him a little box with a big hose that he drove with his guitar amp. It was good enough for the studio, but the setup wasn’t powerful enough for the big live concerts Joe was playing at the time with his band Barnstorm. Heil and Walsh, along with the latter’s guitar tech “Krinkle,” combined a 250-watt JBL driver and a high-pass filter to make the first Heil Sound Talk Box. It was used on Walsh’s solo single “Rocky Mountain Way.” Later Heil’s Talk Box was used to great effect by Peter Frampton, who received one as a Christmas gift. His girlfriend hadn’t known what to get him for the holiday and called up Heil for advice. Heil had just the thing for him and sent her a hand-built Talk Box whose components were housed in fiberglass and driven by a 100-watt high-powered driver. This was the tool that gave his Frampton Comes Alive! album and tour its signature sound, to the point where Peter Frampton and the talk box are almost synonymous.

A Dish for Hungry Satellite Hunters

As the 1970s rolled on into the ’80s, Heil got bitten by the satellite bug. His friend Bob Cooper was a guy he had done some of his moonbounce experiments with back in 1962. When he heard about some of Cooper’s shenanigans building a satellite dish that used a coffee can as a low noise amplifier (LNA) to pick up the backhaul of HBO’s feed, he made a point of reconnecting with his old friend. Once a month Cooper had an informal get-together in Oklahoma where he showed others how to build these satellite receiving systems, and Heil got into the game of TVRO, or television receive-only. Communications freaks love to receive anything, and satellite transmissions are particularly exciting to some devotees. At the time a dedicated group of communications hobbyists were getting into receiving the uncut and unedited content of satellites as it was transmitted, unencrypted and “in the clear,” to different local stations, who would slap on their particular channel graphics and logos before presenting it as a packaged TV program. For instance, sports broadcasts would be transmitted as raw footage, later to be edited for the highlights section of a local news program. After getting into the technical aspects of this for a while, Heil got to be one of the first ten on the test team for the commercial satellite operation DirecTV in 1991. His store was one of the first to sell DirecTV. It was around this time his company also worked on installing custom home theaters, but after his stint in this capacity he got out of the satellite game, and his mind turned once again to the radio hobby.

Hi-Fidelity for High Frequency

One day Heil turned on his radio and didn’t like what he heard on the air. It wasn’t what his fellow hams were rag-chewing about that caused him to be disconcerted. It was how they sounded when they talked to each other. He wondered where all the great-sounding Art Collins radio gear had gone, and how it was that such good equipment had been replaced by gear that did not have the same audio quality. It was in seeking a solution to this problem that he started making microphones for hams and musicians. Of the many mentors Heil had over the years, Paul Klipsch was another whose knowledge and friendship changed his life. Klipsch was an engineer and a pioneer of high fidelity audio. Among the many patents he held was one for seismic prospecting and recording seismic waves. 
Seismic prospecting is a method of geophysical exploration where vibrations are sent into the earth by firing small explosive charges into the ground, among other means. The resulting waves are measured and studied so as to reveal the underlying strata, the composition of the layers of rock and soil. [Klipsch's work in these fields possibly overlapped with the seismic work and interests of Gordon Mumma.] Klipsch had been dissatisfied with the quality of phonographs and early speakers in the same way Heil was dissatisfied with the sound of hams on the air: both thought what they were hearing sounded bad, and neither was content to let things stand in such a state. Klipsch used his technical abilities to create better sound systems and environments, work that led to the development of the corner horn speaker, a vast improvement over previous iterations of the phonograph horn. Klipsch had his lab in an old AT&T exchange building, and Heil liked to visit him there. He directed Heil to study the work put out by the idea factory of Bell Labs, specifically the work of Dr. Fletcher and Dr. Munson. These two Bell Labs scientists gave Heil a secret weapon in his quest for audio excellence: the Fletcher-Munson curve. Dr. Harvey Fletcher had been born in Utah in 1884, graduated from Brigham Young High School in 1904 and Brigham Young University in 1907. Gifted in physics and mathematics, he decided to go to the University of Chicago for his doctorate. Nervous about going to the big city on his own, he persuaded his sweetheart to marry him, and they went together, even though he had not yet been admitted to the school. Robert A. Millikan, a Nobel Prize-winning physicist, became a mentor to Fletcher and helped him get started at the University, where he eventually earned the first summa cum laude ever awarded by the institution. During this time period Fletcher worked closely with Millikan, who figured out how to measure the charge of an electron, research that was fundamental to the growth of electronics and broadcasting technologies. Fletcher eventually hitched his star to the Western Electric Company in New York, and from there went on to become the Director of Physical Research at Bell Laboratories. It was there, under the auspices of pure research, that his gifts fully blossomed. He published 51 papers, wrote two books, and held nineteen patents. In particular his two books, Speech and Hearing and Speech and Hearing in Communication, set the precedent for further work on the clarity of audio. One of the things Fletcher was interested in was how the sound of a typical talker was heard by a typical listener. He realized that small imperfections in speech could have drastic effects on a listener’s ability to perceive what was said. For the telephone system this meant they had to do everything they could to make sure their own technology did not interfere with its primary purpose of allowing distant voices to connect with each other. The instruments used to convert sound waves into electrical form and then back into sound waves needed to be able to do so without causing distortion. Fletcher also conducted, with his colleague Wilden Munson, the first research on the frequency response of the human ear in 1933. By playing a series of tones they were able to determine how listeners perceived loudness at different frequencies, and from their results they learned that the frequency response of the human ear is non-linear. They also learned that this perception of loudness varies with amplitude. 
They used the data from these experiments to create the Fletcher-Munson curve, which shows that the frequency range the human ear finds most sensitive is between 2 kHz and 5 kHz. It was all published in their paper “Loudness, its definition, measurement and calculation” in the Journal of the Acoustical Society of America. AT&T used this research to equalize the phone lines and keep the maximum articulation of speech at the sweet spot between 2 and 3 kHz. Assiduous study of the Fletcher-Munson curve allowed Heil to make his next breakthrough and implement these findings in a line of equalizers and microphones. Equalizers had already been made for the hi-fi stereo market, but for some reason hadn’t been put together for use by hams. Heil corrected that, and in 1982 he was the first to build one specifically for use on the ham radio bands, the EQ200. He made this available as a DIY kit, after an article he wrote on it for QST Magazine set the ham community aflame. “Voice communication absolutely needs articulation,” he wrote. His equalizer helped to roll off all the frequencies below 100 Hz, which only muddied things up and wasted RF energy.
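As a rough illustration of that kind of equalization, here is a minimal sketch using a generic first-order high-pass response with a 100 Hz corner. It is only a stand-in for the idea of rolling off wasted low-frequency energy while leaving the articulation region alone, not the EQ200's actual circuit.

```python
import math

def first_order_highpass_gain_db(f_hz, cutoff_hz=100.0):
    """Magnitude response of a generic first-order high-pass filter, in dB."""
    ratio = f_hz / cutoff_hz
    return 20 * math.log10(ratio / math.sqrt(1 + ratio ** 2))

# Energy below ~100 Hz is progressively rolled off, while the articulation
# region around 2-3 kHz passes essentially untouched.
for f in [30, 50, 100, 300, 1000, 2500]:
    print(f"{f:>5} Hz: {first_order_highpass_gain_db(f):6.1f} dB")
```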
From Phased Array Antennas to Microphones

After he had the equalizer, Heil realized there was still a problem with the microphones used by hams. “They're bassy, they're tubby, they have no rear rejection,” as he put it. So Heil got into the microphone business. He worked with Icom and Yaesu on the microphones for their radios, and then went on to make his own microphones for ham radio, first the HC series, and later the Gold Line. Heil’s friend Joe Walsh was a big fan of Heil’s microphones for ham radio, so much so that he thought they should be reworked for the stage with the professional musician in mind. In 2006 Walsh asked him to adapt his Gold Line ham microphone for him. Working closely with Walsh, he came up with the Gold Line Pro for his fellow musicians. Because he had learned how to take its element out of phase, it was the only microphone to have 40 dB of rear rejection. The success of his microphone came on top of all his previous experience and knowledge in radio and music. For the microphones he drew an insight from the phased array antenna systems he used as a ham. Antenna phasing is used for ham radio beamforming, or pointing a signal in the specific direction a person wants to transmit. In shortwave broadcasting, for instance, it is used to aim a signal at certain parts of the globe. Hams use it for making contacts in countries and states they want to work. Generally a phased array is a set of different antennas combined to work as one. To beamform on the shortwave and HF ham frequencies, different lengths of coaxial cable are attached to the antennas, creating different radiation patterns depending on which are selected. Another way is to hook them up to an RF matching network that provides -90° and +90° delays, with relays for the configuration of each element. This enables a station to listen to other stations in different locations that are using the same frequency. (A small numerical sketch of this two-element phasing idea appears at the end of this section.) Heil took this knowledge of taking antennas in and out of phase to pick up particular stations and used it in the microphone, which he realized could also be made to go out of phase, giving it a huge amount of rejection at the rear side of the mic, something uncommon. His design proved to be as popular with musicians as it was with hams. At the time of this writing Heil is eighty years old, and continues to get on the air every day with his various ham rigs and talk on his phased array antenna system. He was honored by the Rock and Roll Hall of Fame with a display on Heil Sound, the only display at the museum to feature an equipment manufacturer. Heil remains a passionate organ player, and it is fitting that he can be heard playing live every week on shortwave radio at the time of this writing. International station WTWW out of Lebanon, Tennessee blasts his organ playing at 100,000 watts on 5085 kHz every Saturday at 8 PM Central Time. Heil’s sound systems have rocked the world, and they never would have been possible if he hadn’t been swept up into the hobby of ham radio.
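Before leaving the antennas behind, here is the small numerical sketch promised above: how a phase offset between two driven elements steers the pattern. It assumes two ideal isotropic elements at quarter-wave spacing, so it is only an illustration of the principle, not a model of Heil's actual arrays or microphone elements.

```python
import numpy as np

# Normalized field pattern (array factor) of two identical isotropic elements
# spaced `spacing_wavelengths` apart, fed with a phase difference `delta_deg`.
def array_factor(azimuth_deg, spacing_wavelengths, delta_deg):
    psi = (2 * np.pi * spacing_wavelengths * np.cos(np.radians(azimuth_deg))
           + np.radians(delta_deg))
    return np.abs(np.cos(psi / 2))

# Sample directions in the plane of the array, measured from the array axis.
azimuths = np.array([0, 90, 180, 270])

# Fed in phase at quarter-wave spacing: a mild broadside preference, no null.
print(array_factor(azimuths, 0.25, 0))    # ~[0.71, 1.00, 0.71, 1.00]
# Fed 90 degrees apart: a cardioid with a null toward 0 degrees and a maximum
# toward 180 degrees -- swap which element gets the delay and the beam flips.
print(array_factor(azimuths, 0.25, 90))   # ~[0.00, 0.71, 1.00, 0.71]
```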
References:
Notes from a presentation Bob Heil gave to the Oh-Ky-In Amateur Radio Society over Zoom on January 5, 2021, in a talk called “The Science of Audio.” Archived on YouTube: https://www.youtube.com/watch?v=RJiO_vFa2Tc
https://www.qsotoday.com/transcripts/k9eid
https://www.nutsvolts.com/magazine/article/how-phased-array-antennas-work
https://play.fallows.ca/wp/radio/ham-radio/ham-radio-beamforming-phased-arrays/
http://mosley-electronics.com/mosley_history.html
For more on Bob Cooper, this interview from Mother Earth News: https://www.motherearthnews.com/nature-and-environment/satellite-television-zmaz80mjzraw
http://abc.eznettools.net/byhigh/History/Fletcher/DrHarvey.html
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
The first time Chris Brown heard the League of Automatic Music Composers was on KPFA as he was driving to a piano-tuning appointment in 1981. The music was wild, unified as an organism, yet with divergent tentacles or strands wiggling off in multiple directions like a psychedelic octopus. It was Chris's first exposure to networked computer music, and the wriggling tentacles had put their first hooks into his brain.
Five years later Chris was working with a group that had dubbed itself Ubu, Incorporated, named after the 1896 play Ubu Roi by Alfred Jarry. This group had members from the LAMC and was now at work organizing experimental music concerts at galleries and community music spaces. One of the concerts the group decided to organize was called THE NETWORK MUSE – Automatic Music Band Festival. Held in an old church, it brought together four different groups working with homebrewed computer music and presented performances over a few days. One of these groups was the duo of The Hub, then made up of just Tim Perkis and John Bischoff. At the concert Bischoff and Perkis were using a KIM-1 as a mailbox to post data used in controlling their individual music systems. This information then became available to the other player to use however and whenever he chose as they performed their combined system. The Hub had been their solution to the often messy tangle of wires and electronics that had been common during the LAMC years. Their interface was elegant, and a variety of computers and their users could plug into the system.
In 1987 composers Phill Niblock and Nicolas Collins instigated the formation of an expanded ensemble when some members of The Hub were invited to New York to give a performance at two separate locations linked together by a modem. This required additional players, and they were readily pooled from the other groups who had participated in the Network Muse. The two locations to be linked were both performance spaces: Experimental Intermedia (XI), run by Niblock, and the Clocktower (now MoMA PS1). The idea was to have a trio play at each location that, when connected via the modem, became a sextet.
Bischoff and Perkis had already started playing as a trio with Mark Trayle in a group called Zero Chat Chat in the aftermath of the Automatic Music Band Festival, so it was a simple matter to recruit Chris Brown, Phil Stone, and Scott Gresham-Lancaster, who had all played in different groups at the festival, to form a second trio. This expanded sextet became the Hub. They designed three pieces to play over the network, with the modem dividing the sextet acoustically into two trios that were still joined via the wires of information. These pieces were “Borrowing and Stealing”, “Simple Degradation” and “Vague Notions”. They also played three other pieces that were improvised independently, local to each group. As Kyle Gann wrote in a review of the piece for the Village Voice at the time, “Equally peculiar (for those who attended a different space each night) was the oblique correspondence of identical pieces between the Clocktower and EIF, for the two audiences did not hear the same sounds. Each group fed information into the others' performance, but basic materials differed, making each piece a kind of sonic conceptual butterfly: same body, wildly different wings.” To many people, having a group playing in two different physical locations was just a neat technological stunt. While interesting to promoters, it wasn’t the main interest of the band, though the performance did help congeal the Hub, and the six composers continued to work together under that rubric. Yet the idea of the modem concert continued to haunt them, and it was a spectacle they were asked to repeat in different forms. Their interest, however, wasn’t in the distances that separated them, but in the interactivity of the network itself, and in the way the iconoclastic music programming of each musician could be influenced by the musical programs of the others.
The Hub also kept up with the new computers that continued to hit the market. The next iteration of the Hub device was based on the SYM-1 single-board computer made by Synertek. The processor ran at 1 MHz, and the board had 8K of RAM and a hexadecimal keypad for programming in machine language, like the KIM. What made this an upgrade for the computer music chamber ensemble was that they built an expansion board onto the SYM that had four 6850 ACIAs (asynchronous communication interface adapters). These had connections to the 8-bit databus, seven address lines, the system clock, and the read and write controls. This bit of hacked-together gear gave them options for connecting, interacting, and communicating musically.
The homebrewed circuits were housed inside a box of clear plastic underneath the SYM, with connectors on the outside. Three of the connectors were used to network three players with 1200 baud RS232 serial connections. The fourth connector went to an identical SYM-HUB they had built to host the other trio, the other half of the six-piece band. These two Hubs could now communicate with each other quite speedily at 9600 baud, even though most modems in that era couldn’t send information that fast. Phil Stone and Tim Perkis wrote a program in assembly language to receive and transmit messages between the players, each with their own serial port, and the Hub. The program also constantly copied stored data to the second Hub so that both memory areas had data from all the members of the group. Stone and Perkis described the scheme in their comments on the program: “Devices connected to each channel make requests to write to the HUB processor table memory, and to read it. Each makes its request by sending command bytes of which the high four bits form a command field (CF) and the low four a data field (DF). In the HUB processor there are three variables kept for each channel: a current WRITE.ADDRESS (12 bits); the current READ.ADDRESS, (12 bits) and the current WRITE.DATA (8 bits). These variables for each channel can be set only by commands from that channel. All channel commands are dedicated to setting these variables, or initiating a read or write to the HUB table memory.” (A toy illustration of this command-byte format appears at the end of this passage.) The music of the Hub is in its way just as cerebral as the means used to make it. Having assembled their gear and membership, they now set about playing the endless game of composition, programming and recombination. The group were musicians first, technologists a close second. Where most musicians work from a score, the Hub works from a spec. Individual notes are not preordained; instead, the specifications for how a piece is to be constructed are all put in the spec. The spec can be read closely along with the schematics of the Hub. Like the blueprint for a house, the spec gives an outline or structure to the game of networked music. Even though the spec is often designed by one composer, the individual aspects of how it is realized are left up to the individual programmers.
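To make the quoted comments a little more concrete, here is a toy sketch of that command-byte format: the high nibble as a command field, the low nibble as a data field, and a 12-bit value such as WRITE.ADDRESS assembled from 4-bit pieces. Only the bit layout comes from the comments above; the specific command code used in the example is hypothetical.

```python
# Toy illustration of the command-byte format described in the HUB comments:
# the high four bits of each byte are a command field (CF), the low four a
# data field (DF).
def pack_command(cf, df):
    assert 0 <= cf <= 0xF and 0 <= df <= 0xF
    return (cf << 4) | df

def unpack_command(byte):
    return byte >> 4, byte & 0x0F          # (command field, data field)

# A 12-bit value such as WRITE.ADDRESS could be delivered as three 4-bit
# nibbles, one per command byte, and reassembled like this:
def assemble_12bit(high, mid, low):
    return (high << 8) | (mid << 4) | low

cf, df = unpack_command(pack_command(0x2, 0x7))   # 0x2 is a hypothetical command code
print(cf, df)                                      # -> 2 7
print(hex(assemble_12bit(0xA, 0x3, 0xF)))          # -> 0xa3f
```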
HubRenga
Being based in the Bay Area, having a history with the CCM and Mills College, and being part of the experimental music and arts scene meant there was a great deal of overlap between people, and a lot of potential for fruitful collaborations. Several members of the Hub knew Ramón Sender. During the Hub years Sender had gotten interested in the collaborative aspects of writing made possible with computer networks. A fruitful collaboration was cooked up between the Hub, Ramón Sender and the poetry group on the WELL, the Whole Earth ‘Lectronic Link, one of the oldest virtual communities and a regular online hangout spot for members of the counterculture. The first version of HubRenga was performed over the air on KPFA’s Music Special radio show, hosted by Charles Amirkhanian, on September 7, 1989. In this transmission the Hub was joined by novelist and musician Ramón Sender and poets from another network, the poetry conference of The WELL, a pioneering electronic community that operated in the Bay Area to facilitate communication between people interested in the arts and alternative lifestyles. The poetry conference was a forum about poetry which subscribers to The WELL could join to exchange ideas and work collaboratively. Sender was one of the hosts of the forum for a number of years. For the HubRenga piece, the computer network of the Hub was connected to the network of the WELL. For this performance, the Japanese poetry game called the Renga was used as a format for the textual aspect of the work. Renga is a genre of collaborative Japanese poetry where alternating stanzas are linked in succession by multiple poets. Renga is typically composed live when a group of poets are gathered together. For HubRenga Ramón acted as moderator inside the KPFA studio, browsing the poetic submissions as they came into the poetry conference forum on the WELL and reading them aloud as part of the music, accompanied by an unnamed female reader. The WELL poetry group had been working with the Hub, through Sender, for a few months before the big date at KPFA. In keeping with traditional Renga practices, the poets worked around a theme. In departing from those practices, they used a non-traditional one. Usually the themes are based on the season when they are performed: summer, spring, autumn, winter. In this case the poets chose to use Earth as the theme. The poets came up with a common list of set words to use throughout the performance, and this was given to the composer-programmers. They wrote programs that used these words as triggers. When a Hub member received a text from the WELL on his computer, his program filtered it for specific keywords, determined in advance from the list, to trigger specific musical responses (a rough sketch of this kind of filtering appears below). The keywords chosen by the Hub as triggers were: embrace, echo, twist, rumble, keystone, whisper, charm, magic, worth, Kaiser, schlep, habit, mirth, swap, split, join, plus, minus, grace, change, grope, skip, virtuoso, root, bind, zing, wow, earth, intimidate, outside, phrase, honor, silt, dust, scan, coffee, vertigo, online, transfer, hold, message, quote, shimmer, swell, ricochet, pour, ripple, rebound, duck, dink, scintillate, old, retreat, non-conformist, flower, sky, cage, synthesis, silence, crump, trump, immediate, smack, blink. This was the kind of interactive system the Hub thrived on, and HubRenga was performed again in Los Angeles, along with Bonnie Barnett, an original member of Pauline Oliveros’s Women's Ensemble, who declaimed the power words. In this iteration Ramón Sender and members of the WELL Poetry Conference participated via modem from the Bay Area.
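The sketch below is a rough illustration of the keyword-trigger idea: scan incoming WELL text for agreed-upon words and fire a musical response for each hit. The handler, and the small subset of trigger words chosen, are placeholders standing in for the Hub members' actual programs.

```python
# Rough sketch of the HubRenga keyword-trigger idea. The trigger set here is
# only a handful of words from the published list; the response is a placeholder.
TRIGGERS = {"earth", "echo", "rumble", "whisper", "shimmer", "ripple"}

def respond(word):
    print(f"trigger '{word}' -> send a note or parameter change to this player's synth")

def filter_incoming(text):
    for word in text.lower().split():
        token = word.strip(".,;:!?\"'")
        if token in TRIGGERS:
            respond(token)

filter_incoming("A whisper of earth, then the long rumble under the ripple of wires.")
```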
The Hub Goes MIDI

In 1990 the Hub brought their wrecking ball to the world of MIDI music, a technical standard and communications protocol that was then only nine years old. Scott Gresham-Lancaster had been tasked with exploring its possibilities for the group. MIDI, which stands for Musical Instrument Digital Interface, allows a plethora of electronic instruments, synthesizers, computers and other audio devices to be connected together to play, record and edit music. A single MIDI link on a single cable can carry up to sixteen discrete channels of information, and these can all be sent to different instruments or devices, say a synth, drum machine or computer. The information carried on one of those channels includes musical instructions for pitch, velocity or attack, vibrato, panning within the stereo field, and clock signals that allow one device to control the tempo of the other devices in the MIDI network. As a musician plays something using MIDI, the performance is converted into data that is commonly used to control other sound-producing modules. For instance, a person playing a synthesizer can trigger an external drum machine, sequencer, or other digital sound module. It is also used for recording and writing music. A player can hook a MIDI-capable instrument up to a computer, which then records the data. This information can be assigned different voices in a digital audio workstation, modified, and edited. This typical way of using MIDI, with one musician controlling an array of other instruments from one station, held no interest or appeal for the members of the Hub. They wanted to break MIDI and use it for their own purposes. Scott beta-tested the then-new Opcode Studio 5 MIDI interface. It was a single-box unit that functioned as a computer interface and MIDI patchbay with 15 inputs and outputs, a processor and a synchronizer. Scott played around with the hardware and learned how to program it so it could work as a MIDI version of their namesake Hub. The new protocol would give them a faster messaging system that was also more flexible than their homebrewed system. Another advantage was that by using a standardized platform they would be able to share their working methods with other musicians in a way that was more accessible and closer to open source. Yet the switch to MIDI meant a drastic change from the system they had been using. In the world of electronic music a new system means a new sound, and they would either have to alter their existing pieces to fit with MIDI or start writing brand new pieces. It also changed the operational mode they had become accustomed to. Instead of the common memory shared between members, where data in any customized format could be deposited, the MIDI-HUB worked as a switchboard. Each player's musical data was now tagged, identifying its sender. “No longer was it up to each musician to specifically look at information from other players, but instead information would arrive in each player's MIDI input queue unrequested. Information about current states had to be requested from players, rather than being held on a machine that always contained the latest information. This networking system was more private, enabling person-to-person messaging, but making broadcasting more problematic. To send messages to everyone, a player would need to send the same message out individually addressed to each player. 
If a player failed to handle the message sent, its information was gone forever. And messages were sent more quickly under the MIDI-HUB, leading to an intensity of data traffic that was new in the music. The MIDI-HUB pieces reflected the nature of this new aspect of the band's network instrumentation.” Waxlips was the first piece written for the MIDI-HUB. Designed by Tim Perkis as a simple way of exploring the architecture of the network, it ended up becoming a “tune up” piece for the ensemble on their performances and tours, a way to test the system and get it up to speed before tackling other pieces from their repertoire. It was written to be simple, with minimal musical structure. Each player sends and receives requests to play one note. Once a request comes in and is received, the note message gets transformed in a fixed way and is sent on to someone else. The message can be modified by any musical rule. The only limiting factor was that within the various sections of the piece, specified with signals from a lead player, the same rule must be followed, so a given new-message-in is always followed by the same new-message-out. The lead player “jump-starts the process by spraying the network with a burst of requests.” (A toy sketch of this note-passing scheme appears at the end of this passage.) Tim Perkis writes in the liner notes to the Wreckin’ Ball CD that contains recordings of Waxlips, “The network action had an unexpected living and liquid behavior: the number of possible interactions is astronomical in scale, and the evolution of the network is always different, sometimes terminating in complex (chaotic) states, including near repetitions, sometimes ending in simple loops, repeated notes, or just dying out altogether. In initially trying to get the piece going, the main problem was one of plugging leaks: if one player missed some note requests and didn't send anything when he should, the notes would all trickle out. Different rule sets seem to have different degrees of ‘leakiness’, due to imperfect behavior of the network, and as a lead player I would occasionally double up -- sending out two requests for every one received -- to revitalize a tired net." One of the ways the MIDI-Hub enabled the ensemble to collaborate was by receiving the output data from another musician's setup. For Alvin Curran’s Electric Rags III, Curran improvised on his Yamaha Disklavier electric piano. The MIDI output of his improvisation was sent through the Hub system, and the ensemble players used it in whatever ways they wished.
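Here is the toy sketch of the Waxlips mechanism mentioned above: note requests circulate through the ensemble, each player sounding the incoming note, transforming it by the section's fixed rule, and forwarding the request to another player, with occasional "leaks" when a request is dropped. The player count, the particular rule, and the probabilities are invented purely to show the mechanism, not taken from the piece's actual spec.

```python
import random

# Toy simulation of the Waxlips request-passing scheme.
PLAYERS = ["player1", "player2", "player3", "player4", "player5", "player6"]

def rule(note):
    return (note + 7) % 128        # one fixed transformation for this "section"

def waxlips(initial_burst, steps=10, leak_probability=0.1):
    # The lead player "sprays" the network with a burst of note requests.
    queue = [(random.choice(PLAYERS), note) for note in initial_burst]
    for _ in range(steps):
        forwarded = []
        for player, note in queue:
            print(f"{player} plays note {note}")
            if random.random() > leak_probability:      # a dropped request is a "leak"
                forwarded.append((random.choice(PLAYERS), rule(note)))
        queue = forwarded
        if not queue:
            print("all requests leaked out -- the lead player would re-spray here")
            break

waxlips([60, 64, 67])
```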
They used a similar setup again for Scott Gresham-Lancaster's Vex, a take on Erik Satie's proto-minimalist and extremely long piano piece Vexations. For this version they took Satie’s score and fed it into the HUB for a synchronized performance of the piece by Alvin Curran and the Rova Saxophone Quartet. As each note arrived in their system, the Hub used it to create an electronic embellishment for the acoustic players they were working with.
Curran was a frequent collaborator, and they worked with him on a studio version of his Erat Verbum (1993 iteration). This was a six-part radio composition made for the Studio Akustische Kunst of the WDR, and they worked with him on the Delta section. The piece utilizes recordings of John Cage’s famous Norton Lectures, also known as I-VI, that were fed into the HUB. The members of the group perused these and retranslated them instantly into Morse code (a minimal sketch of this kind of text-to-Morse translation appears at the end of this passage). Curran then live-mixed the dots and dashes into a stunning fantasia. The stamp John Cage left across various musical subcultures and musicians was also evident in the work of The Hub. His spirit was in a sense hovering in the background of things as they went about their work. “One of the strands in the musical philosophy of The Hub was the interest in defining musical processes that generated, rather than absolutely controlled, the details of a musical composition. An acknowledged influence on this interest was the work of John Cage, and it seemed a natural extension to us to try to automate the indeterminate processes used in his work. Many of these processes are extremely time-consuming and tedious; and given that Cage was himself involved for a long time in live electronic performance, we felt a real-time realization of these processes during the progress of a performance was not only feasible, but aesthetically implied.” In 1995 they got the opportunity to do a live realization of Cage’s Variations II at Mills College for a happening put together by David Bernstein called “Here Comes Everybody: A Conference on the Music, Writing, and Art of John Cage”. As part of the activities one evening of concerts was devoted to Cage’s electronic music, and The Hub performed their version of his iconic composition.
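The minimal sketch promised above shows the kind of text-to-Morse retranslation used in the Delta section: incoming words become streams of dots and dashes that could then be mixed live. The mapping is standard International Morse, with only a partial alphabet included here for brevity; the actual Hub programs are not reproduced.

```python
# Minimal sketch of text-to-Morse retranslation. Only a partial alphabet is
# included; a full table would cover every letter and digit.
MORSE = {
    "a": ".-", "c": "-.-.", "e": ".", "g": "--.", "h": "....",
    "j": ".---", "n": "-.", "o": "---", "t": "-",
}

def to_morse(text):
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE)
        for word in text.lower().split()
    )

print(to_morse("john cage"))   # -> .--- --- .... -. / -.-. .- --. .
```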
Disconnectivity
Ever since the Hub had played together at their XI/Clocktower premiere in NYC, in two separate locations connected by modem over the telephone wires, there had been pressure on the group from the many techies interested in their music to switch from their serial communications network to Ethernet. There had also been pressure on them to do further concerts where the musicians were playing in different locations but connected via a network. In a way they had done this with the HubRenga concerts, where the poets connected to the Hub via the WELL. Yet they hadn’t played together as a spatially disconnected group since the first concert. In a way this was something that was expected of them, even if they really preferred to be in each other’s company while playing. The public fascination with the idea of musicians playing together though separated by vast distances in physical space remained constant, even though they had never repeated the experiment or incorporated it as a regular part of their practice as a network ensemble. They preferred the local area network of being in each other's company as they played. They sought a balance between the spontaneous interactions of the electronic systems they set up and the reciprocal feedback between themselves as humans making music together, an inherently social activity. Chris Brown writes, “Since that event we have continued to receive requests for concerts to be performed remotely, that is, without all of us being physically in the same space, but have always declined, in part because we really prefer to be in the space where we can hear each other's sound directly and to see each other and communicate live. The Hub is a band of composers who use computers in their live electronic music, and our practice has been to create pieces that involve sharing data in specific ways that shape the sound and structure of each piece. We are all programmers, and instrument builders in the sense that we take the hardware and software tools available to us and reshape them to realize unconventional musical ideas.” Eventually, however, The Hub succumbed to the pressure to produce another concert where the members were separated in different locations. “Points of Presence” was produced in 1997 by the Institute for Studies in the Arts (ISA) at Arizona State University (ASU), and linked members of The Hub at Mills College, the California Institute of the Arts and ASU over the internet. The piece nearly spelled the end of the Hub after a decade of cooperative engagement in network music composition. “Now in 1997 new tools have become available that allow us to reapproach the remote music idea - telharmonium, points-of-presence - in a new way. Personal computers are now fast enough to produce high-quality electronic sound in real-time, allowing instrument-builders like Mike Berry to choose a purely software environment to produce home-made musical instruments. His Grainwave software, a shareware application for MacOS PowerPCs, was adopted by the group for this piece because it allows each of us to design our own sounds, and these sounds/instruments can be installed at any physical location that has a PC on which they can play - we can be independent of the hardware that produces our music, our instruments have become data which can be replicated easily in any place. At the same time we, along with the rest of our culture, have been spending more and more time in our lives and our work communicating and collaborating on the internet. 
Why should we not extend our musical practice into this domain? Can we retain here the ability to define our own musical worlds, avoiding the commercial, prefab, and controlling musical aesthetics of the technological culture?” Yet the performance itself was plagued by technical failures. They ran into many issues with the software and couldn’t debug it easily on the fly with a room full of people expecting to hear a concert. Because they weren’t in the same place, they had to rely on internet chat and telephone calls to try and fix the issues. And with the different parts unable to work together as a network, the music was never able to lift off the ground. They were only able to play for ten minutes as a full network, and they had to supply those who came to hear them with clumsy explanations of what they were trying to do. “The technology had defeated the music. And after the concert, one by one, the Hub members turned in their resignations from the band.” It wasn’t to be the very end of the band. Having been built as an ad hoc network, they eventually found themselves reassembled again, ready for action, and all of the members of the Hub have lively musical activities they are involved with outside of the network, bringing new information and new ideas back to their working methods.
References:
The League of Automatic Music Composers: 1978-1983. New World Records No. 80671, released 2007. Collection compiled by Jon Leidecker (Wobbly).
The Hub: Boundary Layer. Tzadik 8050-3. Three-CD set with extensive liner notes and CD-ROM text files.
At a Distance: Precursors to Art and Activism on the Internet. Edited by Annmarie Chandler and Norie Neumark. MIT Press, Cambridge, Massachusetts. 2005.
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
As the musical computers at Bell Labs in New Jersey were winding down in the late 70s, people in the California homebrew microcomputer scene were just starting to get wound up. DIY computers had arrived, and a group of electronic music experimentalists in the San Francisco Bay Area were writing programs, networking them together and seeing how they sounded in various configurations. The group was known as the League of Automatic Music Composers (LAMC), active from 1977 to 1983 before being reassembled into another musical configuration known as The Hub. The LAMC can rightly be considered the first computer music group, and the first network music group.
The League had its beginnings at the CCM during the time when Robert Ashley was the director. It was also the time when the first fruits of Silicon Valley were beginning to ripen and could be plucked off the shelf by hackers and hobbyists. At the CCM these hackers and hobbyists were also experimental musicians. Because the CCM allowed open access to its studio, it drew a large crowd of people from outside strictly academic art music through its doors, where they were all able to freely mix and mingle. Rock musicians met hackers, and hackers met free improvisers and jazz heads, who all met those studying the radical end of western classical music as it had evolved in the 20th century. One of the mottos of the CCM was “if you’re not weird, get out!” It became a home for an assortment of musically inclined misfits, a place where they could fit in. Part of this already strange and heady brew was the homebrew tradition, which was very active at the Center due in part to its proximity to the new integrated circuits being produced in Silicon Valley, in part to its history as the place where the Buchla Box had been invented, and in part to its association with the original composers who had formed the SFTMC. Many of those luminaries, such as David Tudor, came to lecture and give concerts at the CCM. The students had taken to the idea that building and designing circuits was part and parcel of the compositional process. The schematic diagram was seen as directly related to the graphic scores that had been innovated by the likes of John Cage, Morton Feldman and Karlheinz Stockhausen. David Tudor and Gordon Mumma had already paved the way in their creation of electronic musical systems that, once designed and built, could be turned on to produce the music. These cybernetic systems were often autonomous and required little intervention from the composer as player after the system had been set up. Tudor had spent time at the CCM as a composer in residence, and his influence permeated the atmosphere there, particularly his idea that the job of the composer was to listen rather than to dogmatically determine every last note of a piece of music. This emphasis on listening is a theme that runs through contemporary musical practice and can be traced to this rich heritage left to us by Cage, Oliveros, and Tudor. In Tudor’s case he emphasized the setting up of autonomous, or automatic, networks of electronics: systems made up of phase shifters, attenuators, amplifiers, and filters, such as in his Untitled piece from 1972. The aesthetic beauty of such a piece lies in the enjoyment of listening deeply to the complex interactions of the system. This system music presents a mirror to other types of systems: human social systems, the diverse ecological systems of the natural world, complex electronic communication systems, and the way the human body is a system of organs, cells, tissues, nerves, and parts all moving together, sometimes in harmony, sometimes creating dissonant tones and clashing with noise.
By the mid-seventies the first commercial microcomputers had become available to the average consumer. They were called micro at this time to differentiate them from their mainframe predecessors, which took up entire rooms in the halls of industry and the academy. This availability meant that anyone willing to fork over the $250 one of these machines cost could have their own computer. Free from the oversight of the folks who were in charge of the institutional mainframes, enthusiasts were able to dabble. These microcomputers were integrated into the circuit of California’s music scene.
Jim Horton was an early adopter, and he was quick to get his hands on one of these computers. It was 1976 and the contraption was the KIM-1. This was a single-board device, and its name stood for how it worked: Keyboard Input Monitor. Jim’s love of the KIM soon spread like a virus around the community, and many other people started saving up their dollars to get these machines. The KIM-1 itself consisted of just a single printed circuit board. All the components were on one side, and it had a whopping memory of 1K of RAM. The unit had a hexadecimal keypad used for programming. The programs themselves were saved to audiocassette. An add-on keyboard could be attached and up to 4000 characters displayed on a television or monitor. As more people bought the machines, they started to share the programs they had written for them and helped each other troubleshoot the persnickety machine, and so a community of devotees grew around the devices. The KIM-1 wasn’t Horton’s first experience working with new technology. As a musician he was trained as a flutist, but he had also gotten in on the game of analog synthesis. He had gained a reputation for building very large modular patches that had the ability to self-modify. He would get his friends to bring along their synths, and he would connect his synth to theirs, building networks of synthesizers. After building a huge and complex patch he would let the system play itself in long eight-hour concerts that lasted all night. These concerts were similar to the all-night concerts Terry Riley gave, and a precursor to the sleep concerts later given by electronic musician Robert Rich. Jim Horton was the quintessential starving artist, and he did his work for the glory, not the gold. He had saved his meager welfare checks, and instead of buying food, literally starved himself for a synthesizer. He sacrificed to acquire the equipment necessary for realizing his soundworld. Forgoing creature comforts for greater achievement, he was known for plugging straight in to whatever work was at hand, and just getting on with things. One of his bandmates, Tim Perkis, recalls that meeting Jim was a liberating experience. He said, “Horton would show up at a gig with his tangle of loose wires and electronic components in a dresser drawer he would temporarily press into service. With my head full of hesitations born of half-digested conventional wisdom about audio circuitry, it was mind-blowing to see someone just go directly to the heart of the matter, twisting bare wires together, connecting anything to anything, and doing the deeply conceptual musical work which drove him without waiting for the right equipment to appear. He lived in a poverty that never seemed like a limitation to him, and worked with whatever means he had at hand.” In 1977 it was Jim Horton who first proposed the idea of making a microcomputer network band. It happened in an organic way. There was already a group getting together on a regular basis to share the music they were making on their KIM computers. Some of this music was also made with analog circuits and other instruments. At one of these gatherings Horton shared his idea of banding together to create a “silicon orchestra”. He had already demonstrated that synthesizers could be networked together into self-generative, ever-shifting systems of musical patches. It was a natural next step to network the computers and other circuits they were building into their own system and listen to the experimental results. 
Later in the year, at Mills College, Horton worked with Rich Gold, one of the founding members of what would become the League of Automatic Music Composers (LAMC). The pair put on a concert where the two of them linked their KIMs together. For the performance Horton ran an algorithmic music program based on the harmonic theories of eighteenth-century mathematician Leonhard Euler, Rich Gold ran an artificial language program he had written, and the two programs interacted with each other for the show. Jim was also working with future band member John Bischoff at the time, and one of the things they had figured out was a piece where tones from John’s KIM would make Jim’s KIM transpose its melodic activity according to a set key note. Then in 1978 John, Jim and Rich joined together as a trio to give a performance at an artist space in Berkeley. Next they were joined by composer David Behrman, who had come to California to co-direct the Center for Contemporary Music (CCM) at Mills with Robert Ashley, his friend and fellow member of the Sonic Arts Union. Rich Gold and Jim Horton were studying with Behrman at the CCM. It was around this time that Behrman recorded his landmark album On the Other Ocean. The album is equally at home in the related but differing milieus of New Music, Ambient, and Minimalism, resting on sustained harmonies between electronic and acoustic sounds that slowly dance and revolve around each other until the difference between them blurs. The two pieces on the album feature the KIM-1 microcomputer: with flute and bassoon on the title piece, and with cello on the flip side, Figure in a Clearing. In these pieces the KIM-1 “listens” to the live performers and accompanies them, or marks points when particular pitches are played. When Behrman joined the LAMC this principle became a recurring theme in their music.
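It is easy to get a feel for the Bischoff–Horton piece in modern terms. The short Python sketch below is purely illustrative: one machine loops a melody and transposes it whenever a key note arrives from the other machine. The original, of course, ran as hand-entered machine code on two KIM-1s, and every name and number here is invented for the example.

# A minimal, modern sketch of the idea: one machine plays a looping
# melody, and whenever a "key note" arrives from the other machine the
# melody is transposed so that it centers on that note. Names and values
# are hypothetical; the original was 6502 machine code on KIM-1 boards.

MELODY = [60, 62, 64, 67, 69]  # MIDI-style note numbers, centered on C

def transpose(melody, key_note):
    """Shift the whole melody so its first note lands on key_note."""
    offset = key_note - melody[0]
    return [n + offset for n in melody]

def play(received_key_notes, steps=12):
    melody = MELODY
    notes_out = []
    key_notes = iter(received_key_notes)
    for step in range(steps):
        # every fourth step, pretend a tone arrived from the other KIM
        if step % 4 == 0:
            key = next(key_notes, None)
            if key is not None:
                melody = transpose(MELODY, key)
        notes_out.append(melody[step % len(melody)])
    return notes_out

if __name__ == "__main__":
    # key notes "sent" by the other machine: C, then F, then G
    print(play([60, 65, 67]))

The point of the interaction is in that one branch: the receiving machine never stops playing, it simply bends its ongoing melodic activity around whatever the other machine sends.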
Behrman talks of his time at Mills College: “Some of the students began bringing computers to the Mills Center for Contemporary Music; on the advice of a wise Bay Area artist, Jim Horton, Paul DeMarinis and I bought KIM-1 microcomputers. KIM-1 weighed about 10 ounces and cost around 200 dollars. Around that time I'd been building switching circuits that were placed between primitive pitch-sensors and homemade synthesizers consisting mostly of triangle-wave generators. The switching circuits took a long time to solder together and could only do one thing. It seemed that this new device called the microcomputer could simulate one of these switching networks for a while and then change, whenever you wanted, to some other one. It was fun connecting its port lines to homemade synthesizers, and also to sensors, and writing very simple software to link sensor activity with synthesizer sounds. There was something fascinating about the design of software, even though on the KIM-1 it had to be done in machine language, by pressing keys on a little hexadecimal pad. This was the dawn of 'interactivity' in California, the moment when Jobs and Wozniak were introducing the Apple computer. There was a Bay Area composers group of that era, the Microcomputer Network Band, which liked to do concerts in which the participants would wire together a group of computers on a table, turn them all on, and stand back and watch to see what would happen.”
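To make Behrman's point about software replacing hardwired switching circuits concrete, here is a loose Python sketch, not anything from the book or from Behrman's actual programs: a sensed pitch is looked up in a table that decides which oscillators to bring up, so changing the "switching network" means editing a table rather than re-soldering a circuit. The table entries and tolerances are invented for illustration.

# detected pitch (roughly, in Hz) -> oscillator frequencies to switch on
SWITCHING_TABLE = {
    440: [220.0, 330.0],   # hearing A4 brings up a low A and E
    523: [261.6, 392.0],   # hearing C5 brings up C4 and G4
}

def respond(detected_pitch, tolerance=5):
    """Return the oscillator settings triggered by a sensed pitch, if any."""
    for target, oscillators in SWITCHING_TABLE.items():
        if abs(detected_pitch - target) <= tolerance:
            return oscillators
    return []  # no match: the synthesizer holds its current state

print(respond(442))  # -> [220.0, 330.0]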
In November of 1978, now a quartet, the League of Automatic Music Composers gave its first performance under that name. Two years later Rich Gold and David Behrman left the group to work on other projects. That’s when Tim Perkis swooped in to fill the empty spots. Tim was interested in music made with alternate tuning systems from various parts of the globe, even playing in a local gamelan group. He was also a Just Intonation fanatic who happened to be skilled with electronics, having a graduate degree in video from the California College of Arts and Crafts. If building your own homebrewed electronic instruments is a new kind of folk craft, then Perkis excelled at it, programming his circuits to play in the various tuning systems he collected in his research.
Now in trio form, with a cadre of Bay Area musicians and improvisers joining the festivities at various performances, they played together in this configuration for four more years. They had a habit of getting together on alternate Sundays to play at the Finnish Hall in Berkeley, and people were welcome to come in and take in the scene.
Perkis writes, “Audience members could come and go as they wished, ask questions, or just sit and listen. This was a community event of sorts as other composers would show up and play or share electronic circuits they had designed and built. An interest in electronic instrument building of all kinds seemed to be ‘in the air.’ The Finnish Hall events made for quite a Berkeley scene as computer-generated sonic landscapes mixed with the sounds of folk dancing troupes rehearsing upstairs and the occasional Communist Party meeting in the back room of the venerable old building.”

During this time the LAMC distilled the spirit of the Bay Area and infused its essence into their playful work practice and the music that came out of their curious explorations. Part band and part collective, they blended the communal zeitgeist of the day with the fermenting intellectual and cultural atmosphere at work in such staples as the Whole Earth Catalog, which promoted personal computers alongside solar cells and sprout-growing kits as part of the wave of interest in self-sufficiency and appropriate technology prevalent during a decade when the realities of hard limits were entering people’s consciousness. The members of the League had taken megadoses of the do-it-yourself ethos with regard to technical innovation. Everything they used was homebrewed or built from kits and modular components. All of it was on the table and subject to being taken apart, tinkered with, and put to use in experiments. Then they would put it all back together again to see how it worked in a variety of combinations.

The League created networks of microcomputers and circuits with an ear towards making one large interactive musical instrument out of the members’ individual computers and components. One came from many. The members of the collective were all interested in computers and programming them to make music. They learned that when they networked their machines together and sent instructions to each other, the amassed circuits of silicon and solder were capable of eliciting what they called new “musical artificial intelligences.” The sound of the League’s music is like a noisy arcade that has been rewired and rerouted in an ad hoc fashion. Amidst the distortion, the randomly generated tones, and the disorienting arpeggios produced by the circuits and programs, something beautiful occasionally emerges, but the sounds are always interesting and stimulating to the intellect. It’s often messy and unpredictable, and what comes out of the apparent chaos has the feel of sentience and is full of life.
Without the same kind of tools being used by Max Matthews, Laurie Spiegel and others at the big institutions, it should come as no surprise that the sounds the League conjured up had more in common with 8-bit gaming soundtracks, albeit highly dosed and on a recombinant and aleatory West Coast trip, than with the kind of sounds the bigger mainframe computers were making. It was music made by a group of individuals dedicated to the notion that computers and people could create their own independent networks, built at home from the circuit board up. Their music has as much in common with the lo-fi aesthetics of garage rock as it does with the pristine waveforms built from code at Bell Labs. The limitations of computer memory, the limits of space on the circuit board, and the haphazard way it all got connected to other components gave their music the flavor of strong home-brewed hooch. The sounds get the job done, and in their miasmic chaos, what comes out of the mess of wires is sublime.
The LAMC embraced their role as musical bricoleurs. According to Perkis, “We felt our work was more akin to that of our mentors and friends building gamelans (Lou Harrison and Bill Colvig), mechanical or electro-mechanical musical instruments (Tom Nunn, Chris Brown), or incorporating hacked versions of electrical and new electronic musical toys into their work (Paul DeMarinis, Laetitia Sonami), than to the contemporary institutional computer music. There was always the sense that the music arose out of the material situation, out of idiosyncratic individual players and the anarchic, ad-hoc arrangements they made.” Theirs was a mechanical musical conversation that ranged from noisy arguments to anarchic harmonies.
Their music was also steeped in the traditions of free improvisation that had developed on the West Coast. When they set up their systems, at Finnish Hall or in the living room of a bandmate, they didn’t set out to practice a certain song or pre-composed piece of music; it was rather the ever-evolving, continual music of the patch in progress, the program in process, the new circuit being added to the mix, or the old circuit being mixed in a new way. Each member had a station of their own equipment, running their own programs, making their own sounds and contributing them to the spontaneous mix. The stations were set up in such a way that the microcomputers could send and receive information from each other, hence the network band. The novel interactions of each new setup became the piece. It was composed, but it was spontaneous. With each new system set up, the result was automatic.
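A toy sketch can suggest what being a network band meant in practice. The Python below is a modern stand-in, with invented names and numbers rather than anything the League actually ran: each station runs its own little generative routine, broadcasts a value every cycle, and folds whatever it receives back into its own sound.

import random

class Station:
    """One player's rig: its own program, nudged by what the others send."""
    def __init__(self, name, base_pitch):
        self.name = name
        self.pitch = base_pitch

    def step(self, incoming):
        # drift toward the average of what the other stations sent,
        # plus a little randomness of our own
        if incoming:
            self.pitch = (self.pitch + sum(incoming) / len(incoming)) / 2
        self.pitch += random.uniform(-3, 3)
        return round(self.pitch, 1)  # the value we broadcast this cycle

def perform(stations, cycles=8):
    last = {s.name: s.pitch for s in stations}
    for _ in range(cycles):
        for s in stations:
            others = [v for n, v in last.items() if n != s.name]
            last[s.name] = s.step(others)
        print(last)

if __name__ == "__main__":
    perform([Station("A", 220), Station("B", 330), Station("C", 440)])

No single station composes the result; the piece is whatever emerges from the feedback between them, which is the sense in which each new setup was itself the composition.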
So, as with David Tudor and Pauline Oliveros, the main activity of the musician was listening. Making adjustments, tinkering with the system, then listening to what happened, then making new adjustments, tinkering some more, and listening again in an endless cycle of discovery and surprise. When they noticed a setup that elicited sounds of beauty, or a sublime alien strangeness, they took notes so they could try to realize that same musical state again. It was true experimental music made in a laboratory they put together themselves.

In 1983 all the tinkering and hauling of gear was beginning to take a toll on Jim Horton. He had been suffering from rheumatoid arthritis for some time and, in his way, endured the pain with stoic fortitude, pushing it to one side to continue living his Spartan artistic lifestyle. But it became too much. Eventually the human power supply running the operation had to be unplugged. The LAMC slowed down and then decided to disband. Yet the end of the LAMC wasn’t the end of what Jim and the others had started, but rather a new beginning. Tim Perkis and John Bischoff went on to try to bring a touch of order to the chaotic mess of wires, gadgets and connections that had become their musical practice. They envisioned building a standard interface with which they could more easily network their computers together. This they achieved, and it became the seed for Perkis and Bischoff’s next project, The Hub.
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
Justin Patrick Moore, author of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis, and the Birth of Electronic Music.