Dr. Friedrich Trautwein and the Radio Experimental Laboratory

The story of the Studio for Electronic Music at the WDR is linked to the earlier work of two German instrument makers, Dr. Friedrich Trautwein and Harald Bode. Two institutions were also critical precursors for the development of the technology around electronic music: the Heinrich Hertz Institute for Research on Oscillations and the Staatlich-akademische Hochschule für Musik. For the latter in particular, the opening of its Rundfunkversuchsstelle, or Radio Experimental Lab, will be briefly explored, as it was important in the history of radio and electronic music. The philosophical and aesthetic milieu surrounding what was called "electrical music" in Germany at the time became one of the intellectual cornerstones from which the studio in Cologne was created.

Dr. Friedrich Trautwein was born on August 11, 1888 in Würzburg, Germany and became an engineer with strong musical leanings. After beginning an education in physics, he quit and turned his attention to law, so he could work for the post office in the capacity of a patent lawyer and protect intellectual property around developments in radio technology. When WWI broke out he became the head of a military radio squadron. The experience cemented his love for communications technology. After the war ended he went on to receive a PhD in electrical engineering. Between 1922 and 1924 he got two patents under his belt, one for generating musical notes with electrical circuits. Trautwein then went to Berlin in 1923, where he worked at the first German radio station, the Funk-Stunde AG Berlin.

On May 3, 1928 the Staatlich-akademische Hochschule für Musik (State-Academic University of Music) opened its new department, the Rundfunkversuchsstelle (RVS), or Radio Experimental Lab. One of its goals was to research new directions and possibilities associated with the development of radio broadcasting. At the time in Germany, much thought was going into the way music was played and heard over the radio. There were many issues with noise and fidelity on early broadcasting equipment and receiver sets that made symphonies, opera singers and other music less pleasant to listen to when they came over the air. Some people thought this was because listening to a radio broadcast was simply different from the way music was perceived in a concert hall or music venue. These minds thought that a new form of music should be created specifically for the medium. This idea for a new musical aesthetic came to be known as Rundfunkmusik, or radio-music, and was bound up with the Neue Sachlichkeit, or New Objectivity. The RVS was in part established to explore the possibilities of radio-music.

In 1930 Trautwein was hired as a lecturer on the subject of electrical acoustics for the RVS. One of the other goals of the institution was to create new musical instruments that specifically catered to the needs of radio. An overarching goal was to create new tonalities that would electrify the airwaves and sing out in greater fidelity inside people's homes on their receiving sets. It was at the RVS that Trautwein collaborated with the composer Paul Hindemith, the musicologist Georg Schünemann and the musician Oskar Sala to create his instrument, the trautonium. Another objective Trautwein had during his time at the RVS was to analyze problems around the electronic reproduction and transmission of sound, as Harvey Fletcher and others had at Bell Labs.
Unlike the people at Bell Labs, the RVS was specifically part of a music conservatory, and though it also had the goal of clarifying speech, it was very interested in electronic music. It took Bell Labs until the 1950s to get in on that game.

One of the aims of the trautonium was to be an instrument that could be used in the home among family members for what the Germans called Hausmusik. They wanted it to be able to mimic the sounds of many other instruments, in a way similar to an organ. To achieve this aim they worked with various resistors and capacitors and employed a glow lamp circuit to create the fundamental frequencies. Changes in resistance and capacitance in the circuit altered the frequency. Trautwein also added additional resonance circuits to his design that were tuned to different frequencies. He connected these to high- and low-pass filters that could then create formants with the sound. All this control over the sound led to the ability to create very unusual tonalities alongside the familiar and traditional. Changes in tone color were made available with the turn of a dial: a new sound could be dialed in just as a new station could be found by turning the knob of a radio. Tone color isn't static either, but changes as the sound moves through time. This is the acoustical envelope of a sound, and Trautwein took it into consideration when designing his instrument.

In their search for rich tonalities Trautwein and his colleagues stumbled across the mystery of the vowels. Preceding Homer Dudley's vocoder by eight years, the trautonium became the first instrument able to reproduce the sounds of the vowels. This led Trautwein and Sala to discover the many similarities that exist between vowel sounds and the timbre of a variety of instruments. Trautwein compared the oscillograms of spoken vowel formants with those played by the trautonium and found that they conformed to each other. "The trautonium is an electrical analogy of the sound creation of the human speech organs," he wrote in his 1930 paper Elektrische Musik. "The scientific significance lies in the physico-physiological impression of the synthetically generated sounds compared with the timbre of numerous musical instruments and speech sounds. This suggests that the physical processes are related in many cases."

For the first iteration of the instrument there were knobs for changing the formants and timbre, and a pedal for changing the volume. The process it used to change the tone color was an early form of subtractive synthesis, which simply filtered down an already complex waveform rather than building one up by adding sine waves together.

On June 20, 1930 a demonstration of the trautonium was given at the New Music in Berlin festival. This was to be an "Electric Concert," and one of the main attractions was the premiere of Paul Hindemith's Trio-Pieces, written for the instrument. Hindemith himself played the top part on one of the three instruments, with Trautwein and Oskar Sala on the middle voice, and a piano teacher named Rudolph Schmidt played the bass part. A commercial version of the instrument, dubbed the Volkstrautonium, was manufactured and distributed by the German radio equipment company Telefunken starting in 1932, but it was expensive and difficult to learn to play, and so remained unpopular. The company managed to sell only about two a year, and by 1938 the product was discontinued.
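The subtractive principle at the heart of the trautonium is easy to hear in code. The following is a minimal sketch in Python, assuming NumPy and SciPy, and it models the idea rather than the actual circuitry: a harmonically rich sawtooth stands in for the glow-lamp tone, and two tuned band-pass filters carve vowel-like formant regions out of it, much as Trautwein's resonance circuits did.

```python
import numpy as np
from scipy.signal import butter, lfilter, sawtooth

# Start from a waveform rich in harmonics, standing in for the
# glow-lamp oscillator's raw tone.
sr = 44100
t = np.linspace(0, 1.0, sr, endpoint=False)
source = sawtooth(2 * np.pi * 110 * t)

def resonance(signal, low, high, sr):
    """A tuned band-pass region, analogous to one resonance circuit."""
    b, a = butter(2, [low / (sr / 2), high / (sr / 2)], btype="band")
    return lfilter(b, a, signal)

# Two formant regions, roughly where an "ah" vowel resonates.
voice_like = resonance(source, 600, 900, sr) + 0.6 * resonance(source, 1000, 1300, sr)
```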
Composers remained somewhat interested in its abilities, and Hindemith, who had acted as an advisor to Trautwein, wrote the Concertino for Trautonium and Orchestra in 1940. Oskar Sala became a virtuoso on the instrument and would play compositions by Niccolò Paganini on it. In time he took over the further development of the trautonium and created his own variations: the Mixtur-Trautonium, the Concert-Trautonium and the Radio-Trautonium. He continued to champion it until his death in 2002. Famously, the sound of the birds in Alfred Hitchcock's The Birds is not sourced from real birds, but comes from the Mixtur-Trautonium as played by Sala.

In 1935 the RVS was shut down by Joseph Goebbels, but it did not disappear entirely, as its various elements were diffused into different parts of the music school. After WWII, Trautwein had a hard time getting a job because he had been a card-carrying Nazi. He did build a few more instruments over the years, including the Amplified Harpsichord in 1936 and the Electronic Bells in 1947. A modified version of the original trautonium called the Monochord (not to be confused with the stringed instrument and learning tool of the same name) was purchased by the Electronic Music Studio at the WDR in 1951, as detailed below. His later legacy was to create the first sound engineering programs, in Düsseldorf in 1952.

Harald Bode and the Heinrich Hertz Institute for Research on Oscillations

Harald Bode was the next instrument maker to place his stamp upon the Electronic Music Studio at WDR, and he later added a few flourishes to the work done at the Columbia-Princeton Electronic Music Center. Born the son of a pipe organ player, he became in his own time an inventor of musical instruments. He studied mathematics, physics and natural philosophy at Hamburg University. His first instrument was the Warbo-Formant Organ of 1937, a completely electronic polyphonic formant organ. New sounds could be created on it by simply adjusting its half-rotary and stop knobs.

Bode's next step for further education was the Heinrich-Hertz-Institut für Schwingungsforschung, or Heinrich Hertz Institute for Research on Oscillations (HHI), located in Berlin, where he went for his postgraduate studies. At the time the HHI focused on high-frequency radio technology, telephony and telegraphy, acoustics and mechanics, with research directed at radio, television, sound-film technology, architectural acoustics and the new field of electronic music. The HHI, like the RVS, was interested in developing and promoting the idea of electronic music and radio-music. It was in this phase that Bode developed his Melodium, alongside his collaborators Oskar Vierling and Fekko von Ompteda. The Melodium was a touch-sensitive, monophonic yet multi-timbral instrument that became popular with film score composers of the era. Since it was monophonic, it presented fewer problems with tuning than had his wobbly Warbo-Formant Organ. Feeling inspired by his achievement, Bode decided that creating electronic musical instruments would be "the task of my lifetime." His dream was put on hold when WWII broke out in 1939. Despite the dire conflict, and the spiritual sickness at work in his country, Bode counted himself lucky for being able to go into the electronics industry. The only other choice was active military duty.
He still made things for the German war effort, but he wasn't a foot soldier; he worked on their submarine sound and wireless communications efforts. In the aftermath of WWII he was newly married and moved from Berlin to a small village in southern Germany, where he tinkered on his next invention in the attic lab of the home where he had started a family. The result was the first iteration of the Melochord in 1947.

The Melochord was a two-tone melody keyboard instrument. Its most interesting features were the controls for shaping formants, which included various filters to attenuate the sound, ring modulation for harmonics, and the ability to generate white noise and apply attack and decay envelopes. The Melochord was promoted on the radio and in the newspapers, where it was praised for its clear and resonant tones. Werner Meyer-Eppler got wind of the Melochord and started to use it in his experiments at the University of Bonn. A lot of skill went into playing the Melochord, and while Meyer-Eppler experimented, Bode set his sights on making a more user-friendly version called the Polychord, which became the first in a series of synthesis-type organs that Bode built on his path of continued electronic creation.

Genesis of the Studio for Electronic Music
Just as the GRM had been built around a philosophy of the transformation of sound, so too was the Studio for Electronic Music of the West German Radio (WDR) built around a philosophy of the synthesis of sound. Werner Meyer-Eppler was the architect of the strategies to be employed in this laboratory, and the blueprint was his book, Elektronische Klangerzeugung: Elektronische Musik und Synthetische Sprache (Electronic Sound Generation: Electronic Music and Synthetic Speech). This philosophy placed the emphasis on building up sounds from scratch, out of oscillators and lab equipment. This was in contrast to the metamorphic, transformational approach purveyed by Schaeffer and Henry with musique concrète. Tape, however, remained an essential lifeblood for both studios.

Meyer-Eppler was still lecturing at the Institute for Phonetics and Communication Research of Bonn University while he wrote his book. In it he made an inventory of the electronic musical instruments which had so far been developed. Meyer-Eppler then experimented at Bonn with what became a basic electronic music process: composing music directly onto tape. One of the instruments Meyer-Eppler used in his experiments was Harald Bode's Melochord, and he also used vocoders. He encouraged his students to hear the sounds from the vocoder mixed with the sounds from the Melochord as a new kind of music.

The genesis of the Studio for Electronic Music came in part from the transmission and recording of a late-night radio program about electronic music on October 18, 1951. A meeting of minds was held in regard to the program broadcast on the Nordwestdeutscher Rundfunk. At the meeting were Meyer-Eppler and his colleagues Herbert Eimert and Robert Beyer, among others. Beyer had long been a proponent of a music oriented more towards its timbre than other considerations. Eimert was a composer and musicologist who had published a book on atonal music in the 1920s while still at school at the Cologne University of Music. He had also written a twelve-tone string quartet as part of his composition examination. For these troubles, his teacher Franz Bölsche had him expelled from the class. Eimert was devout when it came to noise, twelve-tone music and serialism, and he became a relentless advocate who organized concerts and events, hosted radio shows, and wrote numerous articles on this subject of his passion. He eventually did graduate with a doctorate in musicology in 1931, despite the attempts by Bölsche to thwart his will. Fritz Enkel, who had also been at the meeting, was a skilled technician, and he designed a framework around which a studio for electronic music could be built.

The station manager, Hans Hartmann, heard a report of the meeting and gave the go-ahead to establish an electronic music studio. Creating such a studio would bring national prestige to West Germany. After the war West Germany took great pains to be seen as culturally progressive, and having a place where the latest musical developments could be explored and created by its artists was part of showing the world that it was moving forward. Another reason to develop the studio was to use its output for broadcasting. At the time WDR was the largest and wealthiest broadcaster in West Germany, and it could use its pool of funds to create something that would have been cost prohibitive for most private individuals and companies.
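That synthesis-first philosophy, building a sound up from elementary sine tones rather than transforming recordings, can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy, not a reconstruction of the WDR's oscillator-and-tape workflow:

```python
import numpy as np

sr = 44100
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)

# Each partial is a pure sine wave with its own frequency and
# amplitude; summing them builds a composite timbre from scratch.
partials = [(220.0, 1.0), (440.0, 0.5), (660.0, 0.33), (880.0, 0.25)]
tone = sum(amp * np.sin(2 * np.pi * freq * t) for freq, amp in partials)
tone /= np.max(np.abs(tone))  # normalize to keep the sum in range
```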
Before they even had the equipment, when they felt the studio might not even get off the ground and become a reality, they made a demonstration piece to broadcast and show the possibilities of what might be achieved. Studio technician Heinz Schütz was tapped to make this happen, even though he didn't consider himself a composer or musician. The fact that a non-musician was the first to demonstrate the potential of making music in an electronic studio is apropos of the later development of the field, when people like Joe Meek and Brian Eno, who also didn't call themselves musicians, nonetheless made amazing music with the studio as their instrument. The piece by Schütz was titled Morgenröte (The Red of Dawn) to signify the beginning of their collective efforts. It was made with limited means, using just what they had available, and its producer considered its creation to be, at most, accidental.

Schütz's piece was typical of what came out of the studio before funding was secured. They didn't have much to work with except tape, test equipment, and recordings of Meyer-Eppler's previous work with the Melochord and vocoders. Eimert and Beyer "remixed" these experiments while they got their set-up established. The process of working with the tapes and test equipment gave them the experience and confidence they needed for further work in their laboratory of sound creation. Eimert and Beyer eventually put together some other sound studies as the studio came together piece by piece. These largely followed a "pure audio criteria" and were premiered at the Neues Musikfest (New Music Festival) presentation on May 26, 1953 at the broadcasting studio of the Cologne Radio Centre. The event marked the official opening of the WDR studio. Put together quickly, the pieces played did not live up to the standards Eimert had set for the studio, and this caused a falling out between him and Beyer, who thought they were adequate. The next year Beyer resigned.

Eventually Bode's Melochords and Trautwein's Monochord were acquired, and each was modified specifically for use in the studio. Once these were in place the studio really got cooking. Next to them the studio used electronic laboratory equipment such as noise and signal generators, sine wave oscillators, band-pass filters, octave filters, and pulse and ring modulators, among others. Oscilloscopes were used to look at sounds. Mixers were used to blend them together. There was a four-track tape recorder they used to synchronize sounds that had been recorded separately and join them in musical union. It could be used to overdub sounds on top of each other as one tape was being copied to another, a then-new technique developed from Meyer-Eppler's ideas. The mixer had a total of sixteen channels divided into two groups of eight. There was a remote control to operate the four-track and the attached octave filter. A cross-plug busbar panel served as a central locus where all the other inputs and outputs met. Connections could be switched with ease between instruments and sound sources, as if one were transferring a call at a telephone switchboard. Soon one of the early pieces of electronic music was transmuted from the raw electrons forged within this crucible of equipment into an enduring classic that showcased Karlheinz Stockhausen's burgeoning genius.

Read the rest of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis and the Birth of Electronic Music.
RE/SOURCES:

https://120years.net/wordpress/wdr-electronic-music-studio-germany-1951/
https://120years.net/wordpress/the-melochordharald-bodegermany1947/
https://econtact.ca/13_4/palov_bode_biography.html
Schütz, Heinz, Gottfried Michael Koenig, Konrad Boehmer, Karlheinz Stockhausen, György Ligeti, Mauricio Kagel, and Rolf Gehlhaar. 2002. "Erinnerungen 2: Studio für Elektronische Musik". In Musik der Zeit, 1951–2001: 50 Jahre Neue Musik im WDR—Essays, Erinnerungen, Dokumentation, edited by Frank Hilberg and Harry Vogt, 147–54. Hofheim: Wolke.
https://www.mpiwg-berlin.mpg.de/research/projects/german-radio-and-development-electric-music-1920s-and-1930s
https://www.youtube.com/watch?v=dmCpmJOCF-w
https://muse.jhu.edu/article/727300
https://charliedraper.com/articles/2018/12/13/oskar-sala-plays-genzmers-trautonium-concerto-no-1
https://www.youtube.com/watch?v=k0UA0-heeFo
[Read Part I]

Milton Babbitt: The Musical Mathematician

Though Milton Babbitt was late to join the party started by Luening and Ussachevsky, his influence was deep. Born in 1916 in Philadelphia to a father who was a mathematician, he became one of the leading proponents of total serialism. He had started playing music as a young child, first violin and then piano, and later clarinet and saxophone. As a teen he was devoted to jazz and other popular forms of music, which he had started to write before he was even a teenager. One summer, on a trip to Philadelphia with his mother to visit her family, he met his uncle, a pianist studying music at Curtis. His uncle played him one of Schoenberg's piano compositions and the young man's mind was blown.

Babbitt continued to live and breathe music, but by the time he graduated high school he felt discouraged from pursuing it as his calling, thinking there would be no way to make a living as a musician or composer. He also felt torn between his love of writing popular song and the desire to write serious music that came to him from his initial encounter with Schoenberg. He did not think the two pursuits could co-exist. Unable or unwilling to decide, he went into college specializing in math. After two years of this his father helped convince him to do what he loved and go to school for music. At New York University he became further enamored with the work of Schoenberg, who became his absolute hero, and with the Second Viennese School in general. In this period he also got to know Edgard Varèse, who lived in a nearby apartment building.

Following his degree at NYU, at the age of nineteen, he started studying privately with composer Roger Sessions at Princeton University. Sessions had started off as a neoclassicist, but through his friendship with Schoenberg did explore twelve-tone techniques, though just as another tool he could use and modify to suit his own ends. From Sessions he learned the technique of Schenkerian analysis, a method which uses harmony, counterpoint and tonality to find a broader sense and a deeper understanding of a piece of music. One of the other methods Sessions used to teach his students was to have them choose a piece, and then write a new piece that was in a different style but used all the same structural building blocks. Sessions was asked by Princeton University to form a graduate program in music, and it was through his teacher that Babbitt eventually got his Masters from the institution, joining the faculty in 1938.

During the war years he was pressed into service as a mathematician doing classified work, dividing his time between Washington, D.C., and Princeton, where he taught math to those who would need it for work such as being radar technicians. During this time he took a break from composing, but music never left his mind, and he started focusing on musical thought experiments, with an emphasis on aspects of rhythm. It was during this period of deep thinking about music that he thoroughly internalized Schoenberg's system. After the war was over he went back to his hometown of Jackson and wrote a systematic study of the Schoenberg system, "The Function of Set Structure in the Twelve Tone System." He submitted the completed work to Princeton as his doctoral thesis. Princeton didn't give out doctorates in music, only in musicology, and his complex thesis wasn't accepted until 1992, eight years after his retirement from the school.
His thesis and his other extensive writings on music theory expanded upon Schoenberg's methods and formalized the twelve-tone, "dodecaphonic," system. The basic serialist approach was to take the twelve notes of the Western scale and put them into an order called a series, hence the name of the style. This ordering was also called a tone row. Babbitt saw that a series could be used to order not only pitch, but dynamics, timbre, duration and other elements. This led him to pioneer "total serialism," which was later taken up in Europe by composers such as Pierre Boulez and Olivier Messiaen, among others.

Babbitt treated music as a field for specialist research and wasn't very concerned with what the average listener thought of his compositions. This had its pluses and minuses. On the plus side it allowed him to explore his mathematical and musical creativity in an open-ended way and see where it took him, without worrying about having to please an audience. On the minus side, not keeping his listeners in mind, and his ivory tower mindset, kept him from reaching people beyond the most serious devotees of abstract art music. This tendency was an interesting counterpoint to his years as a teenager, when he was an avid writer of pop songs and played in every jazz ensemble he could. Babbitt had thought of Schoenberg's work as being "hermetically sealed music by a hermetically sealed man." He followed suit in his own career. In this respect Babbitt can be considered a true Castalian intellectual and Glass Bead Game player. Within the Second Viennese School there was an idea, a thread taken from 19th-century romanticism and adapted from the philosophy of Arthur Schopenhauer, that music provides access to spiritual truth. Influenced by this milieu, Babbitt's own music can be read and heard as connecting players and listeners to a Platonic realm of pure number.

Modernist art had already moved into areas that many people did not care about. And while Babbitt was under no illusion that his work would ever be widely celebrated or popular, as an employee of the university he had to make the case that music was in itself a scientific discipline: that music could be explored with the rigors of science, and that it could be made using formal mathematical structures. Performances of this kind of new music were aimed at other researchers in the field, not at a public who would not understand what they were listening to without education. Babbitt's approach rejected a common practice in favor of what would become the new common practice: many different ways of investigating, playing, working with and composing music, each going off in its own direction.

During WWII Babbitt had met John von Neumann at the Institute for Advanced Study. His association with von Neumann caused Babbitt to realize that the time wasn't far off when humans would be using computers to assist them with their compositional work. Unlike some of the other composers who became interested in electronic music, Babbitt wasn't interested in new timbres. He thought their novelty was quick to wear off. He was interested in how electronic technology might enhance human capability with regard to rhythm.

Victor

In 1957 Luening and Ussachevsky wrote up a long report for the Rockefeller Foundation of all that they had learned and gathered so far as pioneers in the field. They included in the report another idea: the creation of the Columbia-Princeton Electronic Music Center. There was no place like it within the United States.
In a spirit of synergy the Mark I was given a new home at the CPEMC by RCA. This made it easier for Babbitt, Luening, Ussachevsky and the others to work with the machine. It would, however, soon have a younger, more capable brother nicknamed Victor: the RCA Mark II, built with additional specifications requested by Ussachevsky and Babbitt. There were a number of improvements that came with Victor. The number of oscillators had been doubled, for starters. Since tape was the main medium of the new music, it also made sense that Victor should be able to output to tape instead of to lathe discs. Babbitt was able to convince the engineers to fit it out with multi-track tape recording on four tracks. Victor also received a second tape punch input, a new bank of vacuum tube oscillators, noise generating capabilities, additional effect processes, and a range of other controls.

Conlon Nancarrow, who was also interested in rhythm as an aspect of his composition, bypassed the issue of getting players up to speed with complex and fast rhythms by writing works for player piano, literally punching the compositions onto the roll. Nancarrow had also studied under Roger Sessions, and he and Babbitt knew each other in the 1930s. Though Nancarrow worked mostly in isolation during the 1940s and 1950s in Mexico City, only gaining critical recognition from the 1970s onwards, it is almost certain that Babbitt would have at least been tangentially aware of his work composing on punched player piano rolls. Nancarrow did use player pianos that he had altered slightly to increase their dynamic range, but they still had all the acoustic limitations of the instrument. Babbitt, on the other hand, found himself with a unique instrument capable of realizing his vision for a complex, maximalist twelve-tone music, made available to him through the punched paper reader on the RCA Mark II and its ability to do multitrack recording. This gave him the complete compositional control he had long sought.

For Babbitt, it wasn't so much the new timbres that could be created with the synth that interested him as the ability to execute a score exactly in all parameters. His Composition for Synthesizer (1961-1963) became a showcase piece, not only for Babbitt, but for Victor as well. His masterpiece Philomel (1963-1964) saw the material realized on the synth accompanied by soprano Bethany Beardslee, and it subsequently became his most famous work. In 1964 he also created Ensembles for Synthesizer. All of these are unique in the respect that none of them featured the added effects that many of the other composers using the CPEMC availed themselves of; these were outside the ambit of his vision. Phonemena for voice and synthesizer, from 1975, is a work whose text is made up entirely of phonemes. Here he explores a central preoccupation of electronic music, the nature of speech. It features twenty-four consonants and twelve vowel sounds. As ever with Babbitt, these are sung in a number of different combinations, with musical explorations focusing on pitch and dynamics.

A teletype keyboard was attached directly to the long wall of electronics that made up the synth. It was here that the composer programmed her or his inventions by punching them onto a roll of perforated paper that was taken into Victor and made into music. The code for Victor was binary and controlled settings for frequency, octave, envelope, volume and timbre in the two channels.
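As a rough illustration of how such control coding works, here is a hypothetical sketch in Python. The field names and bit widths are invented for the example and do not reproduce the Mark II's actual column layout; the point is only how musical parameters become a row of binary punches.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One synthesizer event; the fields and ranges are hypothetical."""
    frequency: int  # 0-15: which tuned frequency source
    octave: int     # 0-7:  octave register
    envelope: int   # 0-7:  envelope selection
    volume: int     # 0-15: amplitude step
    timbre: int     # 0-15: timbre/filter setting

def to_punch_row(e: Event) -> str:
    """Pack the parameter fields into one binary row, rendered with
    '#' for a punched hole and '.' for none."""
    bits = (f"{e.frequency:04b}{e.octave:03b}{e.envelope:03b}"
            f"{e.volume:04b}{e.timbre:04b}")
    return bits.replace("1", "#").replace("0", ".")

print(to_punch_row(Event(frequency=9, octave=4, envelope=2, volume=12, timbre=5)))
```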
A worksheet had been devised that translated musical notation into code. In a sense, creating this kind of music was akin to working in encryption, or playing a glass bead game where one kind of knowledge or form of art was connected to another via punches in a matrix grid.

Wired for Wireless

Babbitt's works were just a few of the many distilled from the CPEMC. Not all the composers were as obsessed with complete compositional control as Babbitt; many utilized the full suite of processes available at the studio, effects units included, to create their works, and their works were plentiful. The CPEMC released more recorded electronic music out into the world than anywhere else in North America.

During the first few years of its operation, from 1959 to 1961, the capabilities of the studio were explored by Egyptian-American composer and ethnomusicologist Halim El-Dabh, who had been the first to remix recorded sounds using the effects then available to him at Middle East Radio in Cairo. He had come to the United States with his family on a Fulbright fellowship in 1948 and proceeded to study music under such composers as Ernst Krenek and Aaron Copland, among a number of others. In time he settled in Demarest, New Jersey. El-Dabh quickly became a fixture in the new music scene in New York, running in the same circles as Henry Cowell, John Cage, and Edgard Varèse. By 1955 El-Dabh had gotten acquainted with Luening and Ussachevsky. At this point his first composition for wire recorder was eleven years behind him, and he had kept up his experimentation in the meantime. Though he had been assimilated into the American new music milieu, he came from outside the scenes of both his adopted land and the European avant-garde. As he had with the Elements of Zaar, El-Dabh brought his love of folk music into the fold. His work at the CPEMC showcased his unique combinations that involved his extensive use of percussion and string sounds, singing and spoken word, alongside the electronics. He also availed himself of Victor and made extensive use of the synthesizer. In 1959 alone he produced eight works at the CPEMC. These included his realization of Leiyla and the Poet, an electronic drama. El-Dabh said of his process that it "comes from interacting with the material. When you are open to ideas and thoughts the music will come to you." His less abstract, non-mathematical creations remain an enjoyable counterpoint to the cerebral enervations of his colleagues. A few of the other pieces he composed while working in the studio include Meditation in White Sound, Alcibiades' Monologue to Socrates, Electronics and the World and Venice. El-Dabh influenced such musical luminaries as Frank Zappa and the West Coast Pop Art Experimental Band, his fellow CPEMC composer Alice Shields, and west-coast sound-text poet and KPFA broadcaster and music director Charles Amirkhanian.

In 1960 Ussachevsky received a commission from a group of amateur radio enthusiasts, the De Forest Pioneers, to create a piece in tribute to their namesake. In the studio Ussachevsky composed something evocative of the early days of radio and titled it "Wireless Fantasy". He recorded Morse code signals tapped out by early radio guru Ed G. Raser on an old spark generator in the W2ZL Historical Wireless Museum in Trenton, New Jersey.
Among the signals used were: QST; DF, the station ID of Manhattan Beach Radio, a well-known early broadcaster with a range from Nova Scotia to the Caribbean; WA NY, for the Waldorf-Astoria station that started transmitting in 1910; and DOC DF, De Forest's own code nickname. The piece ends suitably with AR, for end of message, and GN, for good night. Woven into the various wireless sounds used in this piece are strains of Wagner's Parsifal, treated with the studio equipment to sound as if they were a shortwave transmission. In his first musical broadcast Lee De Forest had played a recording of Parsifal, then heard for the first time outside of Germany.

From 1960 to 1961 Edgard Varèse utilized the studio to create a new realization of the tape parts for his masterpiece Déserts. He was assisted in this task by Max Mathews from the nearby Bell Laboratories, and by the Turkish-born Bülent Arel, who had come to the United States on a grant from the Rockefeller Foundation to work at the CPEMC. Arel composed his Stereo Electronic Music No. 1 and 2 with the aid of the CPEMC facilities. Daria Semegen was a student of Arel's who composed her work Electronic Composition No. 1 at the studio. There were numerous other composers, some visiting, others there as part of their formal education, who came and went through the halls and walls of the CPEMC. Luciano Berio worked there, as did Mario Davidovsky, Charles Dodge, and Wendy Carlos, just to name a few.

Modulation in the Key of Bode
Engineer and instrument inventor Harald Bode made contributions to the CPEMC just as he had at the WDR. He had come to the United States in 1954, setting up camp in Brattleboro, Vermont, where he worked on the lead development team at the Estey Organ Corporation, eventually climbing to the position of Vice President. In 1958 he set up his own company, the Bode Electronics Corporation, as a side project in addition to his work at Estey.

Meanwhile Peter Mauzey had become the first director of engineering at the CPEMC. Mauzey was able to customize a lot of the equipment and set up the operations so it became a comfortable place for composers. When he wasn't busy tweaking the systems in the studio, Mauzey taught as an adjunct professor at Columbia University, all while also working as an engineer at Bell Labs in New Jersey. Robert Moog happened to be one of Mauzey's students at Columbia, and under him Moog continued to develop his considerable electrical chops, even while never setting foot in the studio his teacher had helped build.

Bode left to join the Wurlitzer Organ Co. in Buffalo, New York, when Estey hit rough waters and ran aground around 1960. It was while working for Wurlitzer that Bode realized the power the new transistors represented for making music. Bode got the idea that a modular instrument could be built, whose different components would then be connected together as needed. The instrument born from his idea was the Audio System Synthesiser. Using it, he could connect a number of different devices, or modules, in different ways to create or modify sounds. These included the basic electronic music components then in production: ring modulators, filters, reverb generators and other effects. All of this could then be recorded to tape for further processing. Bode gave a demonstration of his instrument at the Audio Engineering Society in New York in 1960. Robert Moog was there to take in the knowledge and the scene. He became inspired by Bode's ideas, and this led to his own work in creating the Moog.

In 1962 Bode started to collaborate with Vladimir Ussachevsky at the CPEMC. Working with Ussachevsky he developed the Bode Ring Modulator and the Bode Frequency Shifter. These became staples at the CPEMC and were produced under the Bode Sound Co. as well as licensed to Moog for inclusion in his modular systems. All of these effects became widely used in electronic music studios, and in popular music by those experimenting with the Moog in the 1960s. In 1974 Bode retired, but kept on tinkering on his own. In 1977 he created the Bode Vocoder, which he also licensed to Moog, and in 1981 he invented his last instrument, the Bode Barberpole Phaser.

.:. .:. .:.

Read part I. Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.

RE/SOURCES:

Holmes, Thom. Electronic and Experimental Music. Sixth Edition.
Music of the 20th Century Avant-Garde: A Biocritical Sourcebook
https://ubu.com/sound/ussachevsky.html
Columbia-Princeton Electronic Music Center 10th Anniversary, New World Records, Liner Notes, NWCRL268, Original release date: 1971-01-01
https://120years.net/wordpress/the-rca-synthesiser-i-iiharry-olsen-hebert-belarusa1952/
https://cmc.music.columbia.edu/about
https://betweentheledgerlines.wordpress.com/2013/06/08/milton-babbitt-synthesized-music-pioneer/
http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/olson-harry.pdf
http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/seashore-carl.pdf
https://snaccooperative.org/ark:/99166/w6737t86
https://happymag.tv/grateful-dead-wall-of-sound/
https://ubu.com/sound/babbitt.html
https://www.youtube.com/watch?v=c9WvSCrOLY4
https://www.youtube.com/watch?v=6BfQtAAatq4
Babbitt, Milton. Words About Music. University of Wisconsin Press. 1987
https://en.wikipedia.org/wiki/Combinatoriality
http://musicweb-international.com/classRev/2002/Mar02/Hauer.htm
http://www.bruceduffie.com/babbitt.html
http://cec.sonus.ca/econtact/13_4/palov_bode_biography.html
http://cec.sonus.ca/econtact/13_4/bode_synthesizer.html
http://esteyorganmuseum.org/

Otto Luening and Vladimir Ussachevsky

In America the laboratories for electronic sound took a different path of development, first emerging out of the universities and the private research facility of Bell Labs. It was a group of composers at Columbia and Princeton who banded together to build the Columbia-Princeton Electronic Music Center (CPEMC), the oldest dedicated place for making electronic music in the United States. Otto Luening, Vladimir Ussachevsky, Milton Babbitt and Roger Sessions all had their fingers on the switches in creating the studio.

Otto Luening was born in 1900 in Milwaukee, Wisconsin, to parents who had emigrated from Germany. His father was a conductor and composer and his mother a singer, though not in a professional capacity. His family moved back to Europe when he was twelve, and he ended up studying music in Munich. At age seventeen he went to Switzerland, and it was at the Zurich Conservatory that he came into contact with the futurist composer Ferruccio Busoni. Busoni was himself a devotee of Bernard Ziehn and his "enharmonic law," which stated that "every chord tone may become the fundamental." Luening picked this up and put it under his belt. Luening eventually went back to America, worked at a slew of different colleges, and began to advocate on behalf of the American avant-garde. This led him to assist Henry Cowell with the publication of the quarterly New Music. He also took over from Cowell the New Music Quarterly Recordings, which put out seminal recordings from those inside the new music scene. In 1949 he went to Columbia for a position on the staff of the philosophy department, and it was there he met Vladimir Ussachevsky.

Ussachevsky had been born in Manchuria in 1911 to Russian parents. In his early years he was exposed to the music of the Russian Orthodox Church and a variety of piano music, as well as the sounds of the land where he was born. He gravitated to the piano and gained experience as a player in restaurants and as an improviser providing the live soundtrack to silent films.
In 1930 he emigrated to the United States, went to various schools, served in the army during WWII, and eventually ended up under the wing of Otto Luening as a postdoctoral student at Columbia University, where he in turn became a professor. In 1951 Ussachevsky convinced the music department to buy a professional Ampex tape recorder. When it arrived it sat in its box for a time, and he was apprehensive about opening it up and putting it to use. "A tape-recorder was, after all, a device to reproduce music, and not to assist in creating it," he later said in recollection of the experience. When he finally did start to play with the tape recorder, the experiments began as he figured out what it was capable of doing, first using it to transpose piano pitches. Peter Mauzey was an electrical engineering student who worked at the university radio station WKCR, and he and Ussachevsky got to talking one day. Mauzey was able to give some technical pointers for using the tape recorder. In particular he showed him how to create feedback by making a tape loop that ran over two playback heads, and helped him get it set up. The possibilities inherent in tape opened a door for Ussachevsky, and he became enamored of the medium, well before he'd ever heard of what Pierre Schaeffer and his crew were doing in France, or what Stockhausen and company were doing in Germany.

Some of these first pieces that Ussachevsky created were presented at a Composers Forum concert in the McMillin Theatre on May 9, 1952. The following summer Ussachevsky presented some of his tape music at another composers' conference in Bennington, Vermont. He was joined by Luening in these efforts. Luening was a flute player, and they used tape to transpose his playing into pitches impossible for an unaided human, and added further effects such as echo and reverb. After these demonstrations Luening got busy working with the tape machine himself and started composing a series of new works at Henry Cowell's cottage in Woodstock, New York, where he had brought the tape recorders, microphones, and a couple of Mauzey's devices. These included his Fantasy in Space, Low Speed, and Invention in Twelve Tones. Luening also recorded parts for Ussachevsky to use in his tape composition, Sonic Contours. In November of 1952 Leopold Stokowski premiered these pieces, along with ones by Ussachevsky, in a concert at the Museum of Modern Art, placing them squarely in the experimental tradition and helping the tape techniques to be seen as a new medium for music composition.

Thereafter, the rudimentary equipment that was the seed material from which the CPEMC would grow moved around from place to place. Sometimes it was in New York City, at other times in Bennington or at the MacDowell Colony in New Hampshire. There was no specific space and home for the equipment. The Louisville Orchestra wanted to get in on the new music game and commissioned Luening to write a piece for them to play. He agreed and brought Ussachevsky along to collaborate with him on the work, which became the first composition for tape recorder and orchestra. To fully realize it they needed additional equipment: two more tape recorders and a filter, none of which were cheap in the 1950s, so they secured funding through the Rockefeller Foundation. After their work was done in Louisville all of the gear they had so far acquired was assembled in Ussachevsky's apartment, where it remained for three years.
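The basic trick behind those early pieces, varispeed playback, couples pitch and duration: run the tape at twice the speed and everything comes out an octave higher and half as long. Here is a minimal digital sketch of the idea, assuming NumPy (the real work was of course done on the tape machines themselves):

```python
import numpy as np

def tape_transpose(samples: np.ndarray, speed: float) -> np.ndarray:
    """Crudely emulate varispeed tape playback. speed=2.0 plays the
    recording twice as fast, raising pitch an octave and halving the
    duration; speed=0.5 lowers it an octave and doubles the length."""
    read_positions = np.arange(0, len(samples) - 1, speed)
    return np.interp(read_positions, np.arange(len(samples)), samples)

# Example: a 440 Hz tone "recorded" at 44100 Hz...
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)

# ...played back at double speed now sounds at 880 Hz.
octave_up = tape_transpose(tone, 2.0)
```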
It was at this time, in 1955, that they sought a permanent home for the studio, enlisting the help of Grayson Kirk, president of Columbia, to secure a dedicated space at the university. He was able to help and put them in a small two-story house that had once been part of the Bloomingdale Asylum for the Insane and was slated for demolition. Here they produced works for an Orson Welles production of King Lear, and the compositions Metamorphoses and Piece for Tape Recorder. These efforts paid off when they garnered the enthusiasm of historian and professor Jacques Barzun, who championed their efforts and gained further support. With additional aid from Kirk, Luening and Ussachevsky were eventually given a stable home for their studio inside the McMillin Theatre.

Having heard about what was going on in the studios of Paris and Germany, the pair wanted to check them out in person, see what they could learn, and possibly put it to use in their own fledgling studio. They were able to do this on the Rockefeller Foundation's dime. When they came back, they would soon be introduced to a machine that, in its second iteration, would go by the name of Victor.

The Microphonics of Harry F. Olson

One of Victor's fathers was a man named Harry Olson (1901-1982), a native of Iowa who had the knack. He became interested in electronics and all things technical at an early age. He was encouraged by his parents, who provided the materials necessary to build a small shop and lab. For a young boy he made remarkable progress exploring where his inclinations led him. In grade school he built and flew model airplanes at a time when aviation itself was still getting off the ground. When he got into high school he built a steam engine and a wood-fired boiler whose power he used to drive a DC generator he had repurposed from automobile parts. His next adventure was to tackle ham radio. He constructed his own station, demonstrated his skill in Morse code and station operation, and obtained his amateur license. All of this curiosity, hands-on experience, and diligence served him well when he went on to pick up a bachelor's in electrical engineering. He next picked up a master's with a thesis on acoustic wave filters, and topped it all off with a PhD in physics, all from the University of Iowa in his home state.

While working on his degrees Olson had come under the tutelage of Dean Carl E. Seashore, a psychologist who specialized in the fields of speech and stuttering, audiology, music, and aesthetics. Seashore was interested in how different people perceived the various dimensions of music and how ability differed between students. In 1919 he developed the Seashore Test of Music Ability, which set out to measure how well a person could discriminate between timbre, rhythm, tempo, loudness and pitch. A related interest was in how people judged visual artwork, and this led him to work with Dr. Norman Charles Meier to develop another test on art judgment. All of this work led Seashore to eventually receive financial backing from Bell Laboratories. Another one of Olson's mentors was the head of the physics department, G. W. Stewart, under whom he did his work on acoustic wave filters. Between Seashore's and Stewart's influence, Olson developed a keen interest in the areas of acoustics, sound reproduction, and music. With his advanced degree, and long history of experimentation in tow, Olson headed to the Radio Corporation of America (RCA), where he became a part of the research department in 1928.
After some years in various capacities, he was put in charge of the Acoustical Research Laboratory in 1934. Eight years later, in 1942, the lab was moved from Camden to Princeton, New Jersey. The facilities at the lab included an anechoic chamber that was, at the time, the largest in the world. A reverberation chamber and an ideal listening room were also available to him. It was in these settings that Olson went on to develop a number of different types and styles of microphone. He developed microphones for use in radio broadcast and motion pictures, directional microphones, and noise-cancelling microphones. Alongside the mics, he created new designs for loudspeakers. During WWII Olson was put to work on a number of military projects. He specialized in the area of underwater sound and antisubmarine warfare, but after the war he got back to his main focus of sound reproduction.

Taking a cue from Seashore, he set out to determine what a listener's preferred bandwidth actually was when sound had been recorded and reproduced. To figure this out he designed an experiment where he put an orchestra behind a screen fitted with a low-pass acoustic filter that cut off the high-frequency range above 5000 Hz. This filter could be opened or closed, the bandwidth full or restricted. Audiences who listened, not knowing when the concealed filter was opened or closed, had a much stronger leaning towards the open, full-bandwidth listening experience. They did not like the sound when the filter was activated. For the next phase of his experiment Olson switched out the orchestra, whom the audience couldn't see anyway, for a sound-reproduction system with loudspeakers located in the position of the orchestra. Listeners still preferred the full-bandwidth sound, but only when it was free of distortion. When small amounts of non-linear distortion were introduced, they preferred the restricted bandwidth. These efforts showed the extreme care that needed to go into developing high-fidelity audio systems.

In the 1950s Olson stayed extremely busy working on many projects for RCA. One included the development of magnetic tape capable of recording and transmitting color television for broadcast and playback. This led to a collaboration between RCA and the 3M company, reaching success in their aim in 1956.

The RCA Mark I Synthesizer

Claude Shannon's 1948 paper "A Mathematical Theory of Communication" was putting the idea of information theory into the heads of everyone involved in the business of telephone and radio. RCA had put large sums of money into their recorded and broadcast music, and the company was quick to grasp the importance and implications of Shannon's work. In his own work at the company, Olson was a frequent collaborator with fellow senior engineer Herbert E. Belar (1901-1997). They worked together on theoretical papers and on practical projects. On May 11, 1950 they issued their first internal research report on information theory, "Preliminary Investigation of Modern Communication Theories Applied to Records and Music." Their idea was to consider music as math. This in itself was not new, and can indeed be traced back to the Pythagorean tradition of music. To this ancient pedigree they added a contemporary twist in correlating music mathematically as information. They realized that, with the right tools, they would be able to generate music from math itself, instead of from traditional instruments.
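The idea reads almost like a program. As a toy illustration (my own, not Olson and Belar's method), a melody can be generated from nothing but equations: equal-tempered pitches are powers of two, and each tone is a sine function shaped by a decay envelope.

```python
import numpy as np

SR = 22050  # sample rate

def note(midi_number: int, seconds: float = 0.4) -> np.ndarray:
    """Render one tone purely from math: an equal-tempered frequency,
    a sine wave, and an exponential decay envelope."""
    freq = 440.0 * 2 ** ((midi_number - 69) / 12)
    t = np.linspace(0, seconds, int(SR * seconds), endpoint=False)
    return np.exp(-3 * t) * np.sin(2 * np.pi * freq * t)

# A rising C major arpeggio, "composed" as a list of numbers.
melody = np.concatenate([note(m) for m in (60, 64, 67, 72)])
```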
On February 26, 1952 they demonstrated their first experiment towards this goal to David Sarnoff, head of RCA, and others in the upper echelons of the company. They made the machine they had built perform the songs "Home Sweet Home" and "Blue Skies". The officials gave them the green light, and this led to further work and the development of the RCA Mark I Synthesizer. The RCA Mark I was in part a computer, as it had simple programmable controls, yet the part of it that generated sound was completely analog. The Mark I had a large array of twelve oscillator circuits, one for each of the basic twelve tones of the musical scale. These could be modified by the synth's other circuits to create an astonishing variety of timbres and sounds. The RCA Mark I was not a machine that could make automatic music. It had to be completely programmed by a composer. The flexibility of the machine and the range of possibilities gave composers a new kind of freedom, a new kind of autocracy: total compositional control. This had long been the dream of those who were bent towards serialism. The programming aspect of the RCA Mark I hearkened back to the player pianos that had first appeared in the 19th century, using a roll of punched paper tape to instruct the machine what to do. Olson and Belar had been meticulous about all of the aspects that could be programmed on their creation. These included pitch, timbre, amplitude, envelope, vibrato, and portamento. It even included controls for frequency filtering and reverb. All of this could be output to two channels and played on loudspeakers, or sent to a disc lathe where the resulting music could be cut straight to wax. It was introduced to the public by Sarnoff on January 31, 1955.

The timing was great as far as Ussachevsky and Luening were concerned, as they first heard about it after they had returned from a trip to Europe where they had visited the GRM, the WDR, and some other emerging electronic music studios. The trip had made them eager to establish their own studio and make electronic music their own way. When they met Schaeffer he had been eager to impose his own aesthetic values on the pair, and when they met Stockhausen, he remained secretive about his working methods and aloof about their presence. Despite this, they were excited about getting to work on their own, even if exhausted from the rigors of travel. They made an appointment with the folks at RCA to have a demonstration of the Mark I Synthesizer. The RCA Mark I far surpassed what Luening and Ussachevsky had witnessed in France, Germany and the other countries they visited. With its twelve separate audio frequency sources the synth was a complete and complex unit, and while programming it could be laborious, it was a different kind of labor than the heavy tape manipulation they had been doing in their studio, and than the accustomed ways of working at the other studios they had seen in operation. The pair soon found another ally in Milton Babbitt, who was then at Princeton University. He too had a keen interest in the synth, and the three of them began to collaborate and share time on the machine, which they had to request from RCA. For three years the trio made frequent trips to Sarnoff Laboratories in Princeton, where they worked on new music.

.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.

RE/SOURCES:

Holmes, Thom. Electronic and Experimental Music. Sixth Edition.
Music of the 20th Century Avant-Garde: A Biocritical Sourcebook
https://ubu.com/sound/ussachevsky.html
Columbia-Princeton Electronic Music Center 10th Anniversary, New World Records, Liner Notes, NWCRL268, Original release date: 1971-01-01
https://120years.net/wordpress/the-rca-synthesiser-i-iiharry-olsen-hebert-belarusa1952/
https://cmc.music.columbia.edu/about
https://betweentheledgerlines.wordpress.com/2013/06/08/milton-babbitt-synthesized-music-pioneer/
http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/olson-harry.pdf
http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/seashore-carl.pdf
https://snaccooperative.org/ark:/99166/w6737t86
https://happymag.tv/grateful-dead-wall-of-sound/
https://ubu.com/sound/babbitt.html
https://www.youtube.com/watch?v=c9WvSCrOLY4
https://www.youtube.com/watch?v=6BfQtAAatq4
Babbitt, Milton. Words About Music. University of Wisconsin Press. 1987
https://en.wikipedia.org/wiki/Combinatoriality
http://musicweb-international.com/classRev/2002/Mar02/Hauer.htm
http://www.bruceduffie.com/babbitt.html
http://cec.sonus.ca/econtact/13_4/palov_bode_biography.html
http://cec.sonus.ca/econtact/13_4/bode_synthesizer.html
http://esteyorganmuseum.org/

The elements of Linear Predictive Coding (LPC) were built on the basis of some of Norbert Wiener's work from the 1940s, when he developed a mathematical theory for calculating the optimal filters for finding signals in noise. Claude Shannon quickly followed Wiener with his breakthrough work A Mathematical Theory of Communication, which included a general theory of coding. [For more on Wiener and Shannon see Chapter 3.] With new mathematical tools in hand, researchers started exploring predictive coding. Linear prediction is a form of signal estimation, and it was soon applied to speech analysis.

In signal processing, communications and related fields, the term "coding" generally means putting a signal into a format where it will be easier to handle for a given task. In a coding scheme, like Morse code for instance, an encoder takes the signal and puts it into a new format, and a decoder takes it out of the new format and puts it back into the old one. The "predictive" aspect has been used in numerous scientific theories and engineering techniques. What they have in common is that they predict future observations based on past observations. The combined term "predictive coding" was coined by information theorist Peter Elias in 1955 in his two papers on the subject.

In LPC, samples from a signal are predicted using a linear function of previous samples. In math, a linear function is one whose variables carry no exponents; it graphs to a straight line. The error between a predicted sample and the actual sample is transmitted along with the coefficients. This works with speech because nearby samples correspond to each other to a high degree, and if the prediction is good the error will be small and take up less bandwidth. In this sense, LPC becomes a type of compression based on source coding. Towards the end of the 1960s Fumitada Itakura on one side, and Bishnu S. Atal and Manfred Schroeder on the other, independently discovered the elements of LPC, as had happened with the telegraph and the telephone.
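The core idea can be sketched in a few lines of Python, assuming NumPy. Real LPC implementations use the autocorrelation method and the Levinson-Durbin recursion rather than this brute-force least-squares fit, but the principle is the same: model each sample as a weighted sum of the samples before it, then keep only the weights and the small prediction error.

```python
import numpy as np

def lpc_coefficients(signal: np.ndarray, order: int) -> np.ndarray:
    """Fit prediction weights by least squares: each sample is modeled
    as a linear combination of the `order` samples preceding it."""
    X = np.array([signal[i - order:i][::-1] for i in range(order, len(signal))])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# A toy "voiced" signal: two superimposed resonances.
t = np.arange(800)
signal = np.sin(0.12 * t) + 0.5 * np.sin(0.31 * t)

order = 8
a = lpc_coefficients(signal, order)

# Predict each sample from its predecessors; keep only the error.
predicted = np.array([signal[i - order:i][::-1] @ a
                      for i in range(order, len(signal))])
residual = signal[order:] - predicted

# If the prediction is good the residual is small. That is the saving:
# transmit the handful of coefficients plus a low-energy error signal.
print("signal power:  ", np.mean(signal[order:] ** 2))
print("residual power:", np.mean(residual ** 2))
```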
Later, Paul Lansky applied it to making delightful music exploring the spectrum between music and speech. Fumitada Itakura was interested in math and radio from an early age, and he had been an amateur radio operator in his youth. His elementary school happened to be just a mile from the radio laboratory at Nagoya University, where his father knew some of the professors, so he had occasion to visit it and ask questions. As an undergraduate he became interested in the theoretical side of math and started to learn about stochastic processes. As he extended his ability ever further, he eventually became involved in the mathematical aspect of signal processing. His research paper for his bachelor’s degree in electrical communication was on the statistical analysis of whistlers, very low frequency electromagnetic radio waves produced by lightning, capable of being heard as audio on radio receivers. To study them he built a bank of analog filters to do the signal processing, and made digital circuits to try and find patterns in the time-frequency structure of the whistlers. It wasn’t easy work, but he persevered. In analyzing the whistler signal he had to filter out a lot of the other noisy material that comes in from the magneto-ionosphere. The work required him to use band-pass filters and the sound spectrogram that had originally been designed for speech analysis. This eventually led to further work with statistics and audio. When he went to graduate school he studied applied mathematics under Professor Kanehisa Udagawa. At Udagawa’s lab he became part of a group studying pattern recognition, and he started a project to recognize handwritten characters in 1963. When Professor Udagawa died of a heart attack he had to find someone else to study under to continue his course. This led him to work at NTT. Dr. Shuzo Saito, a graduate of Nagoya University, was looking for someone to work with in speech research, and Saito’s friend Professor Teruo Fukumura suggested Itakura. Saito had an interest in speech recognition and encouraged Itakura to get involved. Fukumura began teaching him the basic principles of speech using Gunnar Fant's Acoustic Theory of Speech Production. Itakura started making sound spectrograms of his voice speaking vowels. His voice was high and husky, so it didn’t make as clean a spectrogram as a more typical voice would have. In this there was a hidden gift. He realized that if they could do good analysis on a signal with more random characteristics, they could do even better when analyzing regular speech. From this point, he went and applied statistics to speech classification, based on a paper he had read by J. Hajek. Reading math papers had been a hobby of his, and it led to his work on Linear Predictive Coding. Dr. Saito suggested to Itakura that he look for practical results based on his theory, so he started working with a vocoder, got some initial results on his idea, and wanted to go further. Dr. Saito suggested he look at pitch detection, as vocoders often had trouble recognizing voices because of their poor ability in this area. He conceived of a new method of pitch detection that used an inverse filter and oscillation. From this he proposed integrating linear predictive analysis with his new pitch detection method to create a new vocoder system. In late 1967 he succeeded in synthesizing speech from the vocoder and brought the results to Dr. Saito. From then on, Itakura worked on vocoding.
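Itakura’s pairing of an inverse filter with pitch detection can be illustrated with present-day tools. The sketch below is a loose modern analogue, not a reconstruction of his actual system: it fits an all-pole model to a frame of speech, inverse-filters to flatten the formants, then reads the pitch period off the strongest peak in the residual’s autocorrelation.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def detect_pitch(frame, fs, order=12, f_lo=60.0, f_hi=400.0):
    frame = frame - frame.mean()
    # autocorrelation-method LPC: solve the Toeplitz normal equations
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    # inverse filter A(z) = 1 - sum(a_k z^-k) flattens the formants
    residual = lfilter(np.concatenate(([1.0], -a)), [1.0], frame)
    # the pitch period shows up as a peak in the residual autocorrelation
    ac = np.correlate(residual, residual, mode="full")[len(residual) - 1:]
    lo, hi = int(fs / f_hi), int(fs / f_lo)  # plausible lag range
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag  # estimated f0 in Hz

# Synthetic test: a 100 Hz impulse train through a toy "vocal tract"
fs = 8000
excitation = np.zeros(2048)
excitation[::fs // 100] = 1.0
voiced = lfilter([1.0], [1.0, -1.3, 0.9], excitation)
print(detect_pitch(voiced, fs))  # prints approximately 100 Hz
```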
Of the many modes in which speech is produced, the way vowels sound is very important, as it relies on the periodic opening and closing of the vocal cords. Air from the lungs gets converted by them into a wideband signal filled with harmonics. This signal resonates through the vocal cavities before leaving the mouth, where the final sounds are shaped. In LPC this speech signal gets analyzed: the formants are estimated and removed in a process called inverse filtering. The rest of the remaining sound, called the buzz, is also estimated. The signal that remains after the buzz is subtracted is called the residue. Numbers representing the formants, the buzz and the residue can be stored or transmitted elsewhere. The speech is then synthesized through a reversal of the original stripping process. The parameters of the buzz and residue are used to create a signal, and the information stripped from the formants is used to recreate a new filter. The process is done in short chunks of time. Taking speech apart and putting it together on the other end was a huge technical feat that saved tons of bandwidth. Vocoded speech could fit five calls into the same channel that a regular voice call took up with one.

[Image: Manfred Schroeder and Bishnu S. Atal]

At Bell Labs, Bishnu S. Atal would team up with Manfred Schroeder, who had come from Germany. Schroeder was born in 1926 and came of age during WWII. During the war Schroeder had built a secret radio transmitter that spooked his parents. Transmitting radio was risky business because it was the province of spies and people who wanted to communicate outside the country. When Schroeder saw members of the army or SS outside his house with radio direction finding equipment, he shut off the transmitter for a month. He also listened to the BBC for news, and to the American Forces Network transmitting from England, both then illegal to listen to. Many people had been sent to concentration camps just for listening to foreign stations and spreading the news to others. The Nazi powers attempted to keep tight control on all information going in and out of the country. A special radio was even manufactured by the state, the People's Radio or Volksempfänger, built in such a way that it could only receive approved German stations, whose programs were under the directorship of Joseph Goebbels. Schroeder excelled at school, often ahead of even the teachers, and during the war he was drafted onto a radar team to track incoming aircraft and do other work, gaining extensive experience with the technology. Schroeder was also a math fanatic, like Itakura, and when he did go to university he always took extra math classes alongside his physics work. He had been fascinated by the mathematics of cryptography, and he loaded up on function theory and probability classes. Eventually Schroeder got a job offer from Bell Labs in 1954, based on previous work he had done experimenting with microwaves, and he emigrated to the United States. Bell Labs wanted him to continue his research with microwaves, but he thought he’d switch gears and get into the study of speech instead. For two years he worked on speech synthesizers, and didn’t have much luck getting them to sound good, so he then turned his attention to loudspeakers and room acoustics. Many researchers who were following the dictates of their own curiosity and inclination were left alone to pursue their studies, to see what came out of them and where they led.
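Before following Schroeder further, the buzz-and-filter loop described at the top of this passage can be condensed into a toy vocoder. Everything here is a textbook-model sketch rather than any historical system’s code: analysis reduces each frame to filter coefficients and a gain, and synthesis drives each frame’s filter with an impulse-train buzz whose pitch is freely chosen, which is exactly the handle composers like Paul Lansky would later grab.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def analyze(signal, order=10, frame_len=256):
    """Reduce speech to (filter coefficients, gain) per frame."""
    frames = []
    for start in range(0, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        r = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        a = solve_toeplitz(r[:order], r[1:order + 1])  # LPC coefficients
        residual = lfilter(np.concatenate(([1.0], -a)), [1.0], frame)
        frames.append((a, np.sqrt(np.mean(residual ** 2))))
    return frames  # a few numbers per frame instead of raw samples

def synthesize(frames, f0, fs, frame_len=256):
    """Drive each frame's all-pole filter with a buzz at a chosen pitch."""
    out = []
    for a, gain in frames:
        buzz = np.zeros(frame_len)
        buzz[::max(1, int(fs / f0))] = 1.0   # the pitch is ours to choose
        out.append(lfilter([gain], np.concatenate(([1.0], -a)), buzz))
    return np.concatenate(out)

# e.g. frames = analyze(speech); out = synthesize(frames, f0=180.0, fs=8000)
```

Resynthesizing the same frames at a different f0 moves the pitch while the formant filter, and with it the vowel identity, stays put.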
John Pierce at Bell Labs wanted Schroeder to use Dudley’s vocoding principles to send high fidelity voice calls over the phone system. This caused Schroeder to hit up against the same issue Itakura had: the problem of pitch. Part of the issue was extracting the fundamental frequencies from telephone lines not known for superb sound quality. As Schroeder investigated he realized he could take the baseband signal, those frequencies that have not been modulated, and distort it non-linearly to generate frequencies to which the vocoder would then give the right amplitude. This ended up being a success. The technique became known as voice-excited vocoding, and the speech that came out of the other end was the most human sounding of any speech synthesis up to that point. In 1961 Schroeder hired Dr. Bishnu S. Atal to work with him at Bell Labs. Atal was born in 1933 in Kanpur, Uttar Pradesh, India. He studied physics at the University of Lucknow and received his degree in electrical communications engineering from the Institute of Science in Bangalore, India in 1955, before coming to America to study for his Ph.D. at the Brooklyn Polytechnic Institute. He returned to his home country to lecture on acoustics from 1957 to 1960 before he was lured back to the U.S. by Schroeder to join him in his investigations in speech and acoustics. In 1967 Schroeder was pacing around the Lab with Atal, and they were conversing about needing to do more with vocoder speech quality. His work on pitch had improved the quality of vocoding, but it wasn’t yet what it could be. What they needed to do, they realized as they talked, was to code speech so no errors were present. As they talked the idea of predictive coding came up. They realized that as speech became encoded they could predict the next samples of speech based on what had just come before. The prediction would be compared with the actual speech. Alongside this the errors, or residuals, would be transmitted. In decoding, the same algorithm was used to reconstruct the speech on the other end of the transmission. Schroeder and Atal called this adaptive predictive coding, with the name later changed to linear predictive coding. The quality of speech was as good as that which came out of his voice-excited vocoder. They wrote a paper on the subject for the Bell System Technical Journal and presented it at a conference in 1967, the same year Itakura succeeded with his technique. Since the 1970s most of the technology around speech synthesis and coding has been focused on LPC, and it is now the most widely used form. When it first came out the NSA was among the first to get its paws on it, because LPC can be used for secure wireless, with a digitized and encrypted voice sent over a narrow channel. An early example of this is Navajo I, a telephone built into a briefcase to be used by government agents. About 110 of these were produced in the early 1980s. Several other vocoder systems were used by the NSA for the purpose of encryption. LPC has become essential for cellphones, and is part of the Global System for Mobile Communications (GSM) standard for cellular networks. GSM uses a variety of voice codecs that implement the technology to put 3.1 kHz of audio into 6.5 or 13 kbit/s of transmission. LPC is also used in Voice over IP, or VoIP, as on Skype and Zoom calls and meetings. A 10th-order derivative of LPC was used in the popular 1980s Speak & Spell educational toy.
These became popular for experimental musicians to hack in a process known as circuit bending, where the toy is taken apart and the connections re-soldered to make sounds not originally intended by the manufacturers. [For more on Ghazala and circuit bending, see chapter 7.] Vocoding technology is also utilized in the Digital Mobile Radio (DMR) units currently gaining popularity among hams around the world. DMR is an open digital mobile radio standard. DMR radios use a proprietary AMBE+2 vocoder that works with multi-band excitation for its speech coding and compression, fitting within a 6.25 kHz equivalent channel bandwidth. Again, the compression and the digital codecs often result in sound artifacts and glitching while talking. Besides its use in DMR, the AMBE+2 is also used in D-Star, Iridium satellite telephone systems, and OpenSky trunked radio systems.

[Image: Paul Lansky: notjustmoreidlechatter]

Since LPC allows for the separation of pitch and speed, and the pitch contours of speech can be altered independently of its pace, it can also be used by the creative thinker for musical composition. Paul Lansky was one such thinker, and he used LPC to great effect in a series of compositions exploring synthesis and the qualities of speech. Paul Lansky was born in 1944 in New York and counted George Perle and Milton Babbitt among his teachers. Lansky got his Ph.D. in music from Princeton in 1973. Like many others of his generation, Lansky started off schooled in serialism. His teacher Perle had developed an iconoclastic twelve-tone modal system, and Lansky used this to write a piece. For his dissertation he continued to explore Perle’s methodology and used linear algebra to create a model of his teacher’s system. His interest then extended to take in electronics and computers as a way of exploring the mathematical possibilities inherent within serialism. His first foray into electronic composition was Mild und Leise from 1973. Proper old school, it was composed using a series of punch cards. Learning the mechanics of the system to achieve his desired outcome was as much a part of the procedure as the composition. For it he used the Music360 computer language, written by Barry Vercoe, on an IBM 360/91. The output from the computer went to a 1600 BPI digital tape, which then had to be carried over to a basement lab in the engineering quadrangle at Princeton to be heard. It used FM synthesis, which had just been worked out at Stanford [for FM synthesis see Chapter 4], while the harmonic language came from Perle’s system. The result is very emotionally resonant pure electronic music. Lansky has ever been keen to foreground the music over the technology used to make it, and that is true here. The piece was later sampled by Radiohead in their song Idioteque on their Kid A album. 1979 saw Lansky beginning to work with LPC as a part of his computer music programming practice, and it was put to use in a series of compositions starting with Six Fantasies on a Poem by Thomas Campion. Musical derivatives of Linear Predictive Coding had been pioneered by James Moorer at Stanford University in the 1970s. In Six Fantasies, Lansky’s wife Hannah McKay reads the poem, and LPC techniques, along with a variety of processing and filtering methods, are used to alter and transform the reading in fabulous ways. In his notes to the recording of Six Fantasies, he writes about how it has become common to view speech and song as distinct categories.
Lansky thought that “they are more usefully thought of as occupying opposite ends of a spectrum, encompassing a wealth of musical potential. This fact has certainly not been lost on musicians: sprechstimme, melodrama, recitative, rap, blues, etc., are all evidence that it is a lively domain.” Thomas Campion as composer and poet became an archetype emblematic of the “musical spectrum spanned by speech and song.” The poem Lansky used was Campion’s Rose cheekt Lawra, which was embedded within his 1602 treatise Observations in the Art of English Poesie. Here Campion offered his attempt at a quantitative model for English poetry, where meter is determined by the quantity of the vowels, as was done in ancient Latin and Greek poetry, rather than by stress and rhyme. Lansky describes the poem as a “wonderful, free-wheeling spin about the vowel box. It is almost as if he is playing vowels the way one would play a musical instrument, jumping here and there, dancing around with dazzling invention and brilliance, carefully balancing repetition and variation. The poem itself is about Petrarch's beloved Laura, whose beauty expresses an implicit and heavenly music, in contrast to the imperfect, all too explicit earthly music we must resign ourselves to make. This seemed to be an appropriate metaphor for the piece.” Lansky continued to explore the continuum between speech and song with his pieces Idle Chatter, just_more_idle_chatter, and Notjustmoreidlechatter. Though clearly connected by theme, they are not a suite, but independent works. Idle Chatter from 1985 also continues the use of his wife as vocalist, with the IBM 3081 as the means of transforming her voice, again using a mix of LPC, stochastic mixing, and granular synthesis, with a bit of help from the computer music language Cmix. If you like glossolalia, and if you ever wanted to try to hear what it sounded like at the Tower of Babel, these recordings are an opportunity. Of Idle Chatter, Lansky wrote, “The incoherent babble of Idle Chatter is really a pretext to create a complicated piece in which you think you can `parse the data’, but are constantly surprised and confused. The texture is designed to make it seem as if the words, rhythms and harmonies are understandable, but what results, I think, is a musical surface with a lot of places around which your ear can dance while you vainly try to figure out what is going on. In the end I hope a good time is had by all (and that your ears learn to enjoy dancing).” People had a strong reaction to the piece, and in response, Lansky wrote just_more_idle_chatter in 1987. He gave the digital background singers more of a role in the piece, but the words still only approach intelligibility and never really reach a stage where the listener can comprehend what is being said, only that something is being spoken. The next year saw his “stubborn refusal to let a good idea alone” with the realization of Notjustmoreidlechatter. Here again the chatter almost becomes something that can be discerned as a word before slipping back down into the primordial soup of linguistic babble. The last two of these pieces were made using the DEC MicroVAX II computer. Over time, though Lansky wrote many more computer music pieces, and settings for traditional instrumentation, he couldn’t let the words just be.
For the pieces on his Alphabet Book album he conducted further investigations in a magisterial reflection on the building blocks of thought: the alphanumerics, the letters and numbers, that allow for communication, the building up of knowledge, and contemplation. .:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether. RE/SOURCES: Fumitada Itakura, an oral history conducted in 1997 by Frederik Nebeker, IEEE History Center, Piscataway, NJ, USA. https://ethw.org/Oral-History:Fumitada_Itakura https://lorenlugosch.github.io/posts/2020/07/predictive-coding/ https://ethw.org/Bishnu_S._Atal Manfred Schroeder, an oral history conducted in 1994 by Frederik Nebeker, IEEE History Center, Piscataway, NJ, USA. https://ethw.org/Oral-History:Manfred_Schroeder https://en.wikipedia.org/wiki/Digital_mobile_radio https://en.wikipedia.org/wiki/Multi-Band_Excitation https://www.popmatters.com/141865-idle-chatter-about-paul-lanskys-notjustmoreidlechatter-2496021210.html http://paul.mycpanel.princeton.edu/liner_notes/morethanidlechatter.html https://music7703lsu.wordpress.com/2017/04/29/idel-chatter-by-paul-lanky/ https://www.tumblr.com/postpunk/149013720/paul-lansky-mild-und-leise-on-idioteque-the https://paul.mycpanel.princeton.edu/articles.html

Charles Dodge was another early computer musician who got in on the speech synthesis game. Born in Iowa in 1942, he was in his early twenties when he first became interested in the possibilities of computer music. As a graduate student at Columbia University he studied composition under Richard Hervig, Chou Wen-chung, and the electronic musician Otto Luening. When he met Godfrey Winham of Princeton University, he began to think seriously about composing his own works with computers. Winham was an influential music theorist whose wife, the singer Bethany Beardslee, was the voice for much new music, including Milton Babbitt’s Philomel. In the sixties Bell Labs was one of the very few places computer music was being made, and it was one of the few places to go to hear how it sounded. Max Mathews encouraged musicians who were making music on university computers to come to Bell Labs to convert it into sound, in the evening after the primary work at the Labs was finished. Charles Dodge was one of these composers, and when he came to listen to his work he became mesmerized by the fascinating sounds of the speech research going on down the hall, often thinking it more interesting than the sounds he’d created using the computer. In the early 70s he had the opportunity to create some new works at Bell Labs with access to programs written by Dr. Joseph Olive for speech synthesis. Olive, a leading researcher in the area of text-to-speech, was one of those people with an intense mathematical mind. He had received a physics PhD from the University of Chicago, but he was also interested in music. With help from Olive and some poems written and given to him by his friend Mark Strand, Dodge went about creating Speech Songs. He writes, “I'd never been able to write very effective vocal music and here was an opportunity to make music with words. I was really attracted to that. It wasn't singing in the usual sense. It was making music out of the nature of speech itself. With the early speech-synthesis computers, you could do two things: you could make the voice go faster or slower than the speed in which it was recorded at the same pitch or you could shift the pitch independent of the speech rhythm. That was a kind of transformation that you couldn't make in the usual way of making tape music.
It was fascinating to put my hands on two ways of modifying sound that were completely, newly available.” To synthesize the electronic voices for the poems he used a technique called synthesis-by-analysis. Only words that had been put into the computer beforehand, using an analog-to-digital converter, could be synthesized. The recorded speech is analyzed by the computer to pull out the various parameters from the spoken word in short segments. Then speech can be recreated by the artificial voice using the same parameters as had been analyzed. For musical purposes, though, those parameters can be altered to change aspects of the sound, such as shifting the pitch contour of a phrase or word into a melodic line. Changing the speed without altering the pitch is another possibility. Formants and resonance are other aspects that can be changed by the programmer-composer. The poems themselves are humorous and surrealistic, and the way the artificial voice reads them adds to the effect. Dodge was specifically interested in humor because, as he wrote in the liner notes, “Laughter at new music concerts, especially in New York, is rare these days.” He was delighted when audience members laughed at his creation. For a type of music that is so often cerebral and conceptual, it’s good when some belly laughs can be had. Another piece on the album, The Story of Our Lives, also used techniques of speech synthesis. In this case, instead of replacing the recorded human with an artificial voice, they changed the program so that it took from a bank of 64 sine tones that glissandoed at different rates. To create the effect of more than one voice being heard at a time, the different voices were mixed together on the digital computer. Speech Songs came out in 1972, and in 1978 he made a recording of the radio play Cascando by Samuel Beckett, where the musical aspect was two computer-synthesized audio channels. This was also when he founded the Center for Computer Music at CUNY’s Brooklyn College and began teaching for their graduate program. His 1970 composition Earth’s Magnetic Field will be explored in chapter 8 of this book. .:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether. RE/SOURCES: https://sonificationart.wordpress.com/2017/06/01/earths-magnetic-field-realizations-in-computed-electronic-sound/ https://info.umkc.edu/specialcollections/archives/1056 https://www.furious.com/perfect/ohm/dodge.html https://www.computerworld.com/article/2577903/see-me---hear-me------.html https://www.clsp.jhu.edu/events/the-making-of-gale-joseph-olive-darpa/

1. In industrial culture, children want to know about stuff their parents often don’t want to talk to them about, namely sex and death, two of the most natural things in the world. While Halloween has long had an association with death, the association with sex has come about in its later decades, as the holiday has continued in popularity as a party night for adults. Risque costumes became just as common as the ghastly, and the two elements combined in a lurid display of those powers that are still repressed in our so-called "enlightened and open" society. Halloween allows death to come into the cultural conversation, where it would otherwise just be shuttered up in a hospital or old folks’ home.
2. Even in darkness there is something to see. Our society has been cut off from the dark. Electric lightbulbs, one of the first forms of electronic media, have cast their glow onto corners and streets that once contained mysteries after the sun went down. In the darkness there is music. In the darkness there is magic. In the darkness our imagination begins to see. Halloween marks a deepening point in the progression of the dark half of the year. That darkness needs expression and finds it in the popular custom. 3. Tales of ghosts have an ancient pedigree in the traditions of human storytelling. In the twentieth century films were one of the main mediums of storytelling in industrial nations, and horror films were among the first moving pictures ever to be made. In 1896 Georges Méliès made “Le Manoir du Diable,” sometimes called “The Haunted Castle” in English, or “The House of the Devil.” The tradition of the horror film has been kept up ever since, and they are among the most popular forms of all films. As industrial culture dies its own death, horror will still continue to have an outlet in other forms of popular storytelling, the short story and the novel, where the genre had already long had a home. 4. Witchcraft is real. However much rational-minded progressive people wanted to cast magic out, it has remained. Even in a world full of (cue sarcasm) wondrous iPhones, magic, both benefic and malefic, is practiced, explored, studied, spelled. Halloween is a time when the black cat that is the reality of magic can be let out of the bag. Because many people fear magic, the malefic aspect of the art and science is what gets projected out by the collective into the public celebration of Halloween. 5. Magic involves and cultivates the imagination. The imagination involves and cultivates a sense of wonder. For children especially, the sense of wonder and imagination has not yet been squashed. In the liminal time of Halloween those children who are allowed to play and wonder in the dark, to dress in a costume, and see others in costume, become filled with the sense of wonder that is already easy for them. 6. The sense of wonder has become diminished the further corporate media imagery has been inculcated in children. Once they dressed up as folkloric spooks, devils and witches, with costumes they made at home. Now they as often as not dress as characters from cartoons, comic books, or other media being sold to them, with costumes bought at stores. 7. There are no treats without tricks. There is something in the quality of the American soil, something deep in the consciousness and the bedrock of the land, that lends itself to tricks and trickery. Some might call it the trickster spirit. Now the trickster spirit isn’t all fun and games, though to the trickster it might all be fun and games. But without the trickster, there is no change. As Halloween evolved on this continent the trickster used it as a lively vehicle for the transmission of trickery and tricksterism. Children playing tricks on children. Adults playing tricks on children. Children playing tricks on adults. All the kinds of fun if mischievous shenanigans that can ensue have a way of releasing a lot of pressure off the industrialized human. Old man coyote strikes back at those who have been at war with the wild. Sometimes Coyote plays dress up to disguise who he really is. 8. A little sugar maketh the heart merry. In times when it was scarce it was a real treat.
The Halloween stash was meted out little by little over the coming weeks. In times when it has become hard to avoid, the sugary Halloween stash becomes another opportunity to binge, just like the adults do at their Halloween parties. Bingeing itself can be seen as a way to blow off steam. Cutting loose in a society where the girders of mind control in the form of the spectacle have been arrayed against everyday people is one way to shake the chains and rattle the cage. The unfortunate side effect, however, is sickness in the morning. 9. These days, adults seem to love Halloween almost more than kids. The ubiquitous Halloween party has become a staple of the calendar year. Though drinking a few pumpkin ales, or a few too many, is a part of it, the adults who still love Halloween are searching for that sense of wonder, that sense of magic and phantasy, they’ve missed out on since childhood. Dressing up, believing in ghosts, ghouls and goblins, even if only for a night, is a way to recapture that sense, even if the needs behind the activity remain unconscious. 10. Haunted houses exist. Belief or disbelief is not required. The experience of the haunted house is commensurate with the experience of urban decay. Also, everyone has heard bad stories of dysfunctional families, of wife beaters, and child abusers. Those who live in this unfortunate reality abide in an everyday haunted house, and there are many of them all across America. Sometimes they leave behind ghosts. 11. We are surrounded by the Walking Dead. This may sound harsh, but it’s true. A softer term would be sleepwalkers. Those who are only barely awake to their potential, subsisting on base appetites, wanting to eat everyone else’s brain. At least on Halloween, if you aren’t one of the zombies, you can pretend to be a mad scientist searching for the antidote that will cure this abysmal condition. 12. Things aren’t always what they seem. What is on the outer does not always show the truth of what is on the inner. The old scary witch may hide decades of wisdom behind her wrinkled, pockmarked face. The monster pieced together from disparate body parts may be kinder and gentler than the soul who aimed to give him life. 13. In its current American incarnation Halloween allows people the chance to “choose their own adventure,” to role-play, and see who they yet might be. This life that we don is temporary, worn like a mask over that which is eternal. While here in this costume of flesh and bone, we each have a unique part to play. We may belong to families, communities, tribes, and societies, but if life were a costume contest, surely one of the top prizes would be the one for “most original”.

Abbey veered her sedan to the right to avoid making roadkill of the skunk as they zoomed along the potholed Indiana back-road, causing branches from the hanging trees to scrape the side of her ride, and her friend Sara to drop her cigarette on the floor.
“What the hell, Abbey!” Sara yelled. Peggy griped from the back, “Chill out. We’re okay.” “Sorry, all this ghost talk is working me up.” “We all just need to simmer down,” Abbey said, as she re-centered on the narrow road. “Well, slow down first. It’s not like we have to punch in when we get there.” Peggy videotaped it all with a small camera. Later she’d edit the footage for their Midwest Psychic Quest channel on Witchtok. Sara relit her smoke. They’d been in the car over two hours after a crappy day at the salon. Her boss had flaked out again, made her go pick up product on her own dime. As general manager the only perk seemed to be extra hassle and coworkers who talked behind her back. Maybe one day their channel would take off, they’d get some sponsors, and they could ghost hunt and legend trip full-time. It was a dream, but it kept the encroaching winter blues at bay on the dull days of drudgery. The legend tripping videos got the most likes and comments of all their content, and the episode on schedule was a visit to the site of the brutal circus slayings in Euterpe, Indiana, where the Wallbanger Big Top had kept its winter camp and quarters; those quarters now moldered in ruins on an abandoned property behind a strip mall whose last denizens barely stayed in business. They parked their car between Indie CBD and Dollar Discounts, got out, checked flashlights, checked pepper spray, and crept behind the building to look for the hole in the fence that led into the abandoned property. Many others had been there before them. It was easy to follow the trail of beer cans, condom and candy wrappers to the husks of empty outbuildings whose only coats of paint were decades of graffiti. “Let’s get the story on camera.” Peggy set up her light, and prodded Abbey and Sara into place, standing in front of a fading mural of a calliope sprayed on a wall that slanted with decay. Sara began. “Before the killings, Ringmaster George Wallbanger often complained he was being driven insane by the sound of the steam calliope. Its piercing high-pitched whistle haunted his dreams. Some researchers have wondered if it was just tinnitus, the gradual loss of his hearing as he aged. Maybe. But when authorities found his journal, a darker picture unfolded. “Wallbanger wrote page after page about the calliope being possessed. He said its player Alan Dennison was a servant of hell and whenever he played, the infernal instrument reverberated with the shrieks of the dead and the damned.” “Of course the police dismissed the paranormal connection,” Abbey said, taking her turn. “But the troupe didn’t have to be convinced. The fortune teller Madame Mori had seen the tragedy in her cards. Death. The Hanged Man. The Eight of Swords. Soon this land, next to Indiana’s cornfields, was all splattered with blood.” “Alan didn’t see it coming, despite the arguments he’d had with George over the noise. Then the ice pick was in his neck. Alan’s lover Dolores the Clown tried to stop him. All she got for her trouble was an instant lobotomy when he stabbed her in the eye.” “George poured kerosene over the bodies slumped against the tractor tow that pulled and powered the calliope then flicked the smoldering nub of his cigar to set it all ablaze. Next he pulled out his .22 pistol.” Abbey made a gun shape with her hand, “and blammo, he blew his fucking brains out.” Sara finished it up. “Soon the whole camp was gathered around the fire. The tattooed lady and the merman pulled Dolores to safety.
She was alive, but burned, and never recovered her faculties. She spent the rest of her life at the Fort Wayne Sanitarium.” She let out her breath. “Legend has it that if you come here and circle these ruins three times while reciting this chant, you can still hear Alan playing his calliope.” Sara and Abbey walked around, chanted, hands held. “See the freaks in a snow-white tent, See the tiger and elephant, See the monkey jump the rope, Listen to the Kally-ope! Hail, all hail, the cotton candy stand Hail, all hail, the steam whistle band. Music from the Earth am I! Circus days tremendous cry! My steam may be gone, But my sound will never die!” They chanted as they walked, and the late fall leaves crunched beneath their sneakers. Peggy saw a flicker of red and blue through the camera lens, then a painted face smeared with tears in the haze of moonlight and billows of steam. She smelled sulfur as an acrid taste crept into her mouth, and felt a weakness in the knees, as if she’d seen a guy she had crushed on, but now knew he was a creep, a sociopath hiding behind a charmed smile. She glanced at the ghostmeter clipped to the belt of her jeans and the numbers on its LED display jumped up and down. As they finished a third revolution around the circle, the ether blue outline of a faded canvas tent appeared with a whoosh of scorching vapor as the calliope released its high-pitched cry. A whirling gyre of phantasmal and miasmic shades slithered into being, spinning, as if on a carousel of sound, whose piercing tones splintered the air in a babble of laughter. Then it was gone, and only the smell of popcorn and sawdust remained. Sara felt sick to her stomach, and wished she hadn’t ordered the fried pickles at Diane’s Diner. As they walked back to the car, she couldn’t shake the high-pitched buzzing that rang and rang and rang in her ears, following her the whole way home.

This selected Z'ev discography is included here to coincide with my article "Stream Foraging: Mudlarking for Found Objects and the Genius Loci" in my Cheap Thrills column, out in Vol. 2, Issue 3 of New Maps. The Z'ev section of the article focuses on his use of found objects in his music, with a special emphasis on his work creating sculptures out of materials found while mudlarking the River Thames. Z'EV was an American poet, percussionist, and sound & visual/video artist. He studied a variety of world music traditions at CalArts. He was also extremely interested in using the drum not just as a tool for musical entertainment, but for communication and majik. He began creating his own percussion sounds out of industrial materials for a variety of record labels and was considered a pioneer of industrial music. Z'ev was a lifelong seeker. He was on a personal and poetic spiritual quest for knowledge and wisdom. He left traces of his quest behind in the form of the many artifacts, recordings, and texts he composed. The following list only scratches the surface of the various media he was able to create. A Selected Z’ev Discography: Z’ev. Elemental Music. Subterranean Records, sub30, 1982, LP. Z’ev. My Favorite Things. Subterranean Records, sub33, 1985, LP. Genesis P-Orridge and Z’ev. Direction ov Travel. Cold Spring. CSR30CD. Originally released 1990 on Temple Records as Psychic TV, Direction ov Travel. (TOPY 059) Z'ev. Opus 3. Recorded at the church "De Duif", Amsterdam, April 20, 1990. Staalplaat. Z’ev. The Subterranean Years. Klanggalerie, gg129. 2009, compact disc.
This recording is a reissue of Elemental Music and My Favorite Things on one CD. Z’ev. Face the Wound. Soleilmoon Records, Sol 72, 2001, compact disc. This album keeps with Z’ev’s aesthetic use of found materials, but here the materials are all recycled and sound-collaged spoken word recordings found on tapes he collected scouring thrift stores and other second hand sources. The voices are foregrounded, with the percussion as more of an accompaniment. Z’ev. The Sapphire Nature. Tzadik, TZ7161, 2002, compact disc. This recording taps into Z’ev’s cabalistic studies, and comprises “sixteen metaphonic meditations” on the Sefer Yetzirah or Book of Formation. The CD contains PDF material including a translation of the Sefer Yetzirah as well as essays and commentary by Z’ev. Z’ev, Parkin, Nick. The Ascending Scale. Soleilmoon Records, Sol 174, compact disc. Recorded in the Christ Church Crypt in Spitalfields, London. Christ Church was designed by architect Nicholas Hawksmoor, and built between 1714 and 1729. It has been considered an important location among psychogeographers, such as Iain Sinclair, as discussed in his book Lud Heat, and by other authors in other places. A variety of recordings, including many live performances, can be found here: https://zev-rhythmajik.bandcamp.com/ His book Rhythmajik: Practical Uses of Number, Rhythm and Sound, is available from archive.org. "Rhythmajik is not about music but spells out the use of rhythm and sound and proportion for Trance, Healing, etc. it features a unique Numerical Encyclopedia and two Numerical dictionaries comprising over 5000 beat patterns with their semantic meanings encompassing both healing and ritual vocabularies RHYTHMAJIK illuminates the processes allowing these vocabularies to be transformed into potent rhythmic patterns enabling you to focus the awesome energies of the Earth and Mother Nature and let them flow throughout and then out through you it includes information and has applications for people interested in Astrology, Divination, the Music of the Spheres, Numerology, Tarot and Visualization regardless of any particular interest in drumming and by the way, for the first time it delivers the functions of the 9 Chambers all that RHYTHMAJIK requires is the ability to count and the desire to achieve an intentionally considerate consciousness." .:. .:. .:. ELECTROMAGNETIC DADA SURREALISIMO on Trash Flow Radio with Dr. Jacques Cocteau and the Fluxotone Radio Singers Also in conjunction with the article "Stream Foraging: Mudlarking for Found Objects and the Genius Loci" in my Cheap Thrills column for Vol. 2, Issue 3 of New Maps, Dr. Jacques Cocteau put together this two-hour radio special on the occasion of filling in for Ken Katkin on Trash Flow Radio. In this episode many recordings from Z'ev were featured, as well as related material from Kurt Schwitters.
Thanks to Ken for hosting the following on his extensive Trash Flow Radio Archive: Stream: Trash Flow Radio July 23, 2022 (Electromagnetic Dada Surrealisimo Special) (115 mins): <https://www.mixcloud.com/ken-katkin/trash-flow-radio-july-23-2022-electromagnetic-dada-surrealisimo/>. Download: Trash Flow Radio July 23, 2022 (Electromagnetic Dada Surrealisimo Special) (115 mins | 104 MB): <https://www.sendspace.com/pro/dl/dokaok>. Playlist For Trash Flow Radio -- July 23, 2022 (Electromagnetic Dada Surrealisimo Special): <https://imgur.com/UVPzoEq>.

Sferics is one of Lucier’s most elegant and simple works. It is just a recording. Other versions of Sferics could be produced, and many science and radio hobbyists make similar recordings without ever having heard of Alvin Lucier. The phenomenon at the heart of Sferics existed long before it could be detected and recorded. Listening to this form of natural radio requires going down to the Very Low Frequency (VLF) portion of the radio spectrum.
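A quick check on the band numbers that come up in the discussion below: wavelength is just the speed of light divided by frequency.

```latex
\lambda = \frac{c}{f}, \qquad
\lambda_{3\,\mathrm{kHz}} = \frac{3\times10^{8}\ \mathrm{m/s}}{3\times10^{3}\ \mathrm{Hz}}
= 10^{5}\ \mathrm{m} = 100\ \mathrm{km}, \qquad
\lambda_{1.8\,\mathrm{MHz}} \approx 167\ \mathrm{m}
```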
The title of Lucier’s work refers to broadband electromagnetic impulses that occur as a result of natural atmospheric lightning discharges and can be picked up as natural radiofrequency emissions. Listening to these atmospherics dates all the way back to Thomas Watson, assistant of Alexander Graham Bell, as mentioned at the beginning of this book. He picked them up on the long telegraph lines, which acted as VLF antennas. Since his time telegraph operators, radio hobbyists and technicians have heard these sounds coming in over their equipment. For some, chasing after these sferics has become a hobby in itself. The VLF band ranges from about 3 kHz to 30 kHz, and the wavelengths at these frequencies are huge. Most commercial ham radio transceivers tend to go only as low as 160 meters, which translates to between 1.8 and 2 MHz in frequency. A VLF wave at 3 kHz, by comparison, has a wavelength of 100 kilometers. The VLF range also overlaps the span of human hearing, which runs from 20 Hz to 20 kHz. Yet since sferics are electromagnetic waves rather than sound waves, a person needs radio ears to listen to them: i.e., an antenna and receiver. On average lightning strikes about forty-four times a second, adding up to around 1.4 billion flashes a year. It’s a good thing the weather acts as a variable distribution system for these strikes, though some places get hit more than others. The discharge of all this electricity means there are a lot of electromagnetic emissions from these strikes going straight into the VLF band, where they can be listened to with the right equipment. Because these wavelengths are so long, you could be in California listening to a thunderstorm in Italy or India, or in Maine listening to sferics caused by storms in Australia. The sound of sferics is kind of soothing and reminds me of the crackle of old vinyl unearthed from a dusty vault in a thrift store’s basement. There are lots of pops and lots of hiss. As these are natural sounds picked up with the new extensions to our nervous system made available by telecommunications, listening to sferics has the same kind of soothing effect as listening to a field recording of an ocean, or a stream meandering through lonely woods. But for a long time, listeners, hobbyists and scientists didn’t really know what caused these emissions. During the scientific research activities surrounding the International Geophysical Year (IGY) of 1957-58, their presence and source were verified. The IGY was a yearlong international scientific project that managed to receive backing from sixty-seven countries in the East and West despite the ongoing tensions of the Cold War. The focus of the projects was on earth science. Scientists looked into phenomena surrounding the aurora borealis, geomagnetism, ionospheric physics, meteorology, oceanography, seismology, and solar activity. This was an auspicious area of study for the scientists, as the timing of the IGY coincided with the peak of solar cycle 19. When a solar cycle is at its peak, the ionosphere is highly charged by the sun, making radio communications easier and producing more occurrences of aurora, among other natural wonders. One of the researchers was a man by the name of Millett G. Morgan, and his recordings would go on to have a direct influence on Alvin Lucier.
Morgan was an astrophysicist who had established one of the first programs to use the fresh discoveries occurring in the VLF band as a way to investigate the properties of space plasma around the earth, in the region now known as the upper ionosphere and magnetosphere. His inquiries opened up deep gains of knowledge in a new area of study before spacecraft began making direct observations of the region. Morgan was also a ham radio operator with the call sign W1HDA. He had been interested in radio since he was a teenager, and throughout his career found ways to use his inclination and knack for the medium to research propagation. Throughout the 1940s and early 50s Morgan and his colleagues conducted radar experiments near his home in Hanover, New Hampshire. The purpose of these studies was to observe two modes of propagation that magnetoionic theories had predicted would occur when radio waves entered the atmosphere. During the IGY he chaired the US National Committee's Panel on Ionospheric Research of the National Research Council. In this capacity he oversaw the radio studies being conducted all around the earth. As part of that work he joined the re-supply mission to the US Antarctic station on the Weddell Sea in early 1958 as the senior scientific representative. For his own specific research he maintained a series of far-flung stations spread across the Americas. It was from these that he made a number of recordings of natural radio signals. Lucier later heard these at Brandeis. The composer writes, “My interest in sferics goes back to 1967, when I discovered in the Brandeis University Library a disc recording of ionospheric sounds by astrophysicist Millett Morgan of Dartmouth College. I experimented with this material, processing it in various ways -- filtering, narrow band amplifying and phase-shifting -- but I was unhappy with the idea of altering natural sounds and uneasy about using someone else's material for my own purposes.” Morgan’s recordings were made at a network of receiving stations, and he interpreted the audio data he collected to obtain some of the earliest measurements of free electron density thousands of kilometers above the earth. A colorful vocabulary was built up to describe the sounds heard in the VLF portion of the spectrum. Sferics that traveled over 2,000 kilometers often shifted their tone and came to be called tweeks; the frequency would become offset as the wave traveled, cutting off some of the sound and making it sound higher in the treble range. Whistlers were another phenomenon heard on the air. They occurred when a lightning strike propagated out of the ionosphere and into the magnetosphere, along geomagnetic lines of force. The sound of a whistler is one of a descending tone, like a whistle fading into the background, hence its name. It is similar to the tweek, but elongated by its journey out away from the surface and back along the Earth’s magnetic field. Dawn chorus is another atmospheric effect some lucky eavesdroppers in the VLF range may be able to pick up from time to time. It is an electromagnetic effect that may be picked up locally at dawn. The cause is thought to be energetic electrons injected into the inner magnetosphere, something that occurs more frequently during magnetic storms. These electrons interact with the normal ambient background noise heard in the VLF band to create a sound that is actually similar to that of birdsong in the morning.
This sound is most likely to be heard when aurorae are active, when it is dubbed the auroral chorus. Morgan’s experimental work in recording these phenomena created a foundation for studying such things as how the earth and its magnetic field interact with the solar wind. Listening to Morgan’s recordings wasn’t enough for Lucier. “I wanted to have the experience of listening to these sounds in real time and collecting them for myself. When Pauline Oliveros invited me to visit the music department at the University of California at San Diego a year later, I proposed a whistler recording project. Despite two weeks of extending antenna wire across most of the La Jolla landscape and wrestling with homemade battery-operated radio receivers, Pauline and I had nothing to show for our efforts. . . .” The idea was shelved for over a decade. In 1981 Lucier tried again. He got hold of some better equipment and was able to go out to a location in Church Park, Colorado, on August 27th, 1981. For the Colorado recording he collected material continuously from midnight to dawn with a pair of homemade antennas and a stereo cassette tape recorder. He repositioned the antennas at regular intervals to explore the directivity of the propagated signals and to shift the stereo field. Morgan, for his part, continued his own radio investigations into the early 80s. He built a network of radar observing stations to study gravity waves that propagate to the lower latitudes of Earth from the arctic region. These gravity waves appear as propagating undulations in the lower layers of the ionosphere. Lucier wasn’t the only musician to be interested in these phenomena. Electronic music producer Jack Dangers explored these sounds under his moniker Meat Beat Manifesto on a song called The Tweek from the album Actual Sounds & Voices. Pink Floyd used dawn chorus on the opening track of their 1994 album The Division Bell. VLF enthusiast Stephen P. McGreevy has been tracking these sounds for some time, and has collected a lot of recordings, releasing them on CD and the internet via archive.org. At the time of this writing he has made eight albums of such recordings. On the communications side of things, the VLF band’s interesting properties have been exploited for use in submarine communication. VLF waves can penetrate sea water to some degree, whereas most other radio waves are reflected off the water. This has allowed for low-bitrate communications across the VLF band by the world’s militaries. Some hams have also taken up experimenting with communication across VLF, learning more about its unique propagation in doing so.

Just as the Hub was getting off the ground and into circulation as a performing ensemble, one of its members, Scott Gresham-Lancaster, was working with Pauline Oliveros on a new project she had initiated to create the ultimate delay system: bouncing her music off the surface of the moon and back to earth with the help of an amateur radio operator. Since Pauline had first started working with tape she had always been interested in delay systems. Later she started exploring the natural delays and reverberations found in places such as caves, silos and the fourteen-foot-deep cistern at the abandoned Fort Worden in Washington state. The resonant space at Fort Worden in particular had been important in the evolution of Pauline’s sound.
It was there she descended the ladder with fellow musicians Panaiotis, a vocalist, and trombonist Stuart Dempster to record what would become her Deep Listening album. Supported by reinforced concrete pillars, the cistern had a reverberation time of 45 seconds, creating a natural acoustic effect of great warmth and beauty. This space continued to be used by musicians, including Stuart Dempster, and was dubbed by them the cistern chapel. Pauline had another deep listening experience in a cistern in Cologne when visiting Germany. Between these experiences, the creation of the album, and the workshops she was starting to teach, she came up with a whole suite of practices and teachings that came to be called Deep Listening. The term itself had started as a pun when they emerged up the ladder that had taken them into the cistern. Pauline describes Deep Listening as, “an aesthetic based upon principles of improvisation, electronic music, ritual, teaching and meditation. This aesthetic is designed to inspire both trained and untrained performers to practice the art of listening and responding to environmental conditions in solo and ensemble situations.” Since her passing, Deep Listening continues to be taught at the Rensselaer Polytechnic Institute under the directorship of Stephanie Loveless. The idea of bouncing a signal off the moon, which amateur radio operators had learned to do as a highly specialized communications technique, was another way of exploring echoes and delays, in combination with technology, in a poetic manner. Pauline first had the idea for the piece when watching the lunar landing in 1969. “I thought that it would be interesting and poetic for people to experience an installation where they could send the sound of their voices to the moon and hear the echo come back to earth. They would be vocal astronauts. My first experience of Echoes From the Moon was in New Lebanon, Maine with Ham Radio Operator Dave Olean. He was one of the first HROs to participate in the Moon Bounce project in the 1970s. He sent Morse Code to the moon and got it back. This project allowed operators to increase the range of their broadcast. I traveled to Maine to work with Dave. He had an array of twenty four Yagi antennae which could be aimed at the moon. The moon is in constant motion and has to be tracked by the moving antenna. The antenna has to be large enough to receive the returning signal from the moon. Conditions are constantly changing - sometimes the signal is lost as the moon moves out of range and has to be found again. Sometimes the signal going to the moon gets lost in galactic noise. I sent my first ‘hello’ to the moon from Dave's studio in 1987. I stepped on a foot switch to change the antenna from sending to receiving mode and in 2 and 1/2 seconds heard the return ‘hello’ from the moon.” Though the moon is far more distant than the walls of the Worden cistern, the delay between the radio signal going there and coming back is much shorter. In a vacuum radio waves travel at the speed of light. Earth-Moon-Earth, or EME as it is known in ham radio circles, was first proposed in 1940 by W. J. Bray, a communications engineer who worked for Britain’s General Post Office. At the time it was thought that using the moon as a passive communications satellite could be accomplished through the use of radios in the microwave range of the spectrum.
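The two and a half seconds Oliveros reports is simply the round trip at the speed of light over the average Earth-Moon distance of roughly 384,400 km:

```latex
t = \frac{2d}{c} \approx \frac{2 \times 3.844 \times 10^{8}\ \mathrm{m}}{3 \times 10^{8}\ \mathrm{m/s}} \approx 2.6\ \mathrm{s}
```

The figure drifts between about 2.4 and 2.7 seconds as the moon swings between perigee and apogee.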
During the forties the Germans were experimenting with different equipment and techniques and realized radar signals could be bounced off the moon. The Germans developed a system known as the Wurzmann and carried out successful moon bounce experiments in 1943. Working in parallel were the American military and a group of researchers led by the Hungarian physicist Zoltan Bay. At Fort Monmouth in New Jersey in January of 1946, John H. DeWitt, working with Project Diana, carried out the second successful transmission of radar signals bounced off the moon. Project Diana also marked the birth of radar astronomy, a technique that was used to map the surfaces of the planet Venus and other nearby celestial objects. A month later Zoltan Bay’s team also achieved a successful moon bounce communication. These successful efforts led to the establishment of the Communication Moon Relay Project, also known as Operation Moon Bounce, by the United States Navy. At the time there were no artificial communication satellites. The Navy was able to use the moon as a link for the practical purpose of sending radio teletype between the base at Pearl Harbor in Hawaii and the headquarters in Washington, D.C. This offered a vast improvement over HF communications, which required the cooperation of the ionospheric conditions affecting propagation. Once artificial communication satellites started being launched into orbit, using the moon for communicating between distant points was no longer necessary. Dedicated military satellites had an extra layer of security on the channels they operated on. Yet for amateur radio operators the allure of the moon was just beginning, and hams started using it in the 1960s to talk to each other. It became one of Bob Heil’s favorite activities. In the early days of EME hams used slow-speed CW (Morse Code) and large arrays of antennas with their transmitters amplified to powers of 1 kilowatt or more. Moonbounce is typically done in the VHF, UHF and GHz ranges of the radio spectrum. These have proven to be more practical and efficient than the shortwave portions of the spectrum. New modulation methods have also given hams a continuing advantage in using EME to make contacts with each other. It is now possible using digital modes to bounce a signal off the moon with a setup that is much less expensive than the large dishes and amounts of power required when this aspect of the hobby was just getting started.
“For instance, an 80W 70 cm (432 MHz) setup using about a 12-15 dBi Yagi works well for EME Moonbounce communication using digital modes like the JT65,” writes Basu Bhattacharya, VU2NSB, a ham and moonbouncer located in New Delhi, India. On the way to the moon and back, the radio path totals close to half a million miles, and the signals are affected by a number of different factors. The Doppler shift caused by the motion of the moon in relation to us surface dwellers is an important factor in making EME contacts. It is also something that affected the sound of Pauline's music when it was bounced off the lunar surface. “The sound shifted slightly downward in pitch… like the whistle of a train as it rushes past,” said Pauline of her performance. “I played a duo with the moon using a tin whistle, accordion and conch shell. I am indebted to Scott Gresham-Lancaster who located Dave Olean for me in 1986 and helped to determine the technology necessary to perform Echoes From the Moon. Ten years later Scott located all the Ham Radio Operators for the performance in Hayward, California which took place during the lunar eclipse September 23, 1996. Following is the description of that performance: The lunar eclipse from the Hayward Amphitheater was gorgeous. The night was clear and she rose above the trees an orange mistiness. As she climbed the sky the bright sliver emerged slowly from the black shadow - crystal clear. The moon was performing well for all to see. Now we were ready to sound the moon.

“The set up for Echoes From the Moon involved Mark Gummer - a Ham Radio Operator in Syracuse New York. Mark was standing by with a 48 foot dish in his back yard. I sent sounds from my microphone via telephone line in Hayward California to Mark and he keyed them to the moon with his Ham Radio rig and dish and then he returned the echo from the moon. The return came in 2 & 1/2 seconds. Scott Gresham-Lancaster was the engineer and organized all. When the echo of each sound I made returned to the audience in the Hayward University Amphitheater they cheered. Later in the evening Scott set up the installation so that people could queue up to talk to the moon using a telephone. There was a long line of people of all ages from the audience who participated. People seemed to get a big kick out of hearing their voices return - processed by the moon. There is a slight Doppler shift on the echo because of the motion of both earth and moon. This performance marked the premiere of the installation - Echoes From the Moon as I originally intended. The set up for the installation involved Don Roberts - Ham Radio Operator near Seattle and Mike Cousins at Stanford Research Institute in Palo Alto California. The dish at SRI is 150 feet in diameter and was used to receive the echoes after Don keyed them to the moon. With these set ups it was only possible to send short phrases of 3-4 seconds. The goal for the next installations would be to have continuous feeds for sending and receiving so that it would be possible to play with the moon as a delay line.”

It's a setup that could work for other musicians who want to realize Oliveros's lunar delay system anew, or one that could be modified to create new works. The thrill of hearing a sound or signal come back from the moon remains, and if creative individuals get together to explore what can be done with music and technology, new vistas of exploration will open up.
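How big is the Doppler shift Oliveros heard? A rough estimate, offered as an illustration rather than anything from her setup or Bhattacharya's: the radial velocity between an EME station and the moon, dominated by the Earth's rotation, is typically a few hundred meters per second, and the echo is shifted twice, once on the way up and once on the way back down:

```python
# Rough two-way Doppler estimate for a moonbounce echo.
# The 300 m/s radial velocity is an illustrative ballpark value,
# dominated by the Earth's rotation carrying the station toward
# or away from the moon.

C = 299_792_458  # speed of light, m/s

def eme_doppler(radial_velocity_mps: float, carrier_hz: float) -> float:
    """Two-way Doppler shift: the carrier is shifted once on the way
    out and once again on the reflection back."""
    return 2 * radial_velocity_mps * carrier_hz / C

shift_hz = eme_doppler(300, 432e6)  # hypothetical 70 cm EME contact
print(f"Doppler shift: {shift_hz:.0f} Hz")  # about 865 Hz at 432 MHz
# Depending on the geometry the echo comes back shifted up or down;
# Oliveros heard hers slide slightly lower, like the receding train
# whistle she describes.
```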
.:. .:. .:.

Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.

RE/Sources:
http://www.kunstradio.at/VR_TON/texte/4.html
https://muse.jhu.edu/article/810823