Institute for Research and Coordination in Acoustics/Music
Back in 1966 Boulez had proposed a total reorganization of French musical life to André Malraux, the Minister of Culture. Malraux rebuffed Boulez when he appointed Marcel Landowski, who was much more conservative in his tastes and programs, as head of music at the Ministry of Culture. Boulez, who had been known for his tendency to express himself as an epic jerk, was outraged. In an article he wrote for the Nouvel Observateur he announced that he was "going on strike with regard to any aspect of official music in France."
As the author John Michael Greer has noted, the pose of the philosopher, artist, or thinker who dismisses the work of everyone else with a sneer is a familiar one in French intellectual life, and Boulez was accustomed to playing the role in his voluminous writings, talks, artistic rivalries with his contemporaries, and barbed criticisms designed to prick at the flesh of the musicians he worked with. The French knew not to take the game too seriously, whereas Americans tended to be put off by him and have their feelings hurt.
When confronted about this aspect of his reputation later in life Boulez said, "Certainly I was a bully. I'm not ashamed of it at all. The hostility of the establishment to what you were able to do in the Forties and Fifties was very strong. Sometimes you have to fight against your society."
So when Boulez was asked by then French president Georges Pompidou to set up an institute dedicated to researching acoustics, music, and computer technology, he was quick to call off his strike against official music in France, accept the offer, and get busy with work. This was the beginning of the Institut de recherche et coordination acoustique/musique, IRCAM. The space was built next to, and institutionally linked with, the Centre Georges Pompidou; official work started in 1973.
Boulez took inspiration from the Bauhaus and used it as a model for the institute. The Bauhaus had been an interdisciplinary art school that provided a meeting ground for artists and scientists, and this was the aspect he sought to emulate. His vision for the institute was to bring together musicians, composers, scientists, and developers of technology.
In a publicity piece for IRCAM he wrote, “The creator’s intuition alone is powerless to provide a comprehensive translation of musical invention. It is thus necessary for him to collaborate with the scientific research worker… The musician must assimilate a certain scientific knowledge, making it an integral part of his creative imagination… at educational meetings scientists and musicians will become familiar with one another’s point of view and approach. In this way we hope to forge a kind of common language that hardly exists at present.”
To bring his vision into reality he needed the help of those at the forefront of computer music. To that end he brought Max Mathews on board as a scientific advisor to the IRCAM project, a capacity in which he served between 1974 and 1980. Mathews’ old friend Jean-Claude Risset was hired to direct IRCAM’s computer department, which he did between 1975 and 1979. The work that their colleague John Chowning was doing back in California was crucial to the success of the institute, and he was tapped as a further resource.
The Center for Computer Research in Music and Acoustics
Putting IRCAM together was a project that went on for almost a decade before it was fully up and running; from 1970 to 1977 most of the work done was the preliminary planning, organization, and building of the vessel that would house the musical laboratory. It did not have the advantage of being part of an existing institution, such as the BBC or the West German Radio. Everything, including the space, had to be built from scratch. There were several existing templates for electronic music research that IRCAM could have followed, and it chose the American one, modeled on the work done at Bell Labs, when Max Mathews was brought on board. He in turn took the advanced work with computer music being done at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford as his model and resource for state-of-the-art computer music, based in no small part on his own MUSIC programs.
John Chowning officially founded CCRMA at Stanford in 1974, though its basis had already taken shape inside SAIL, the Stanford Artificial Intelligence Laboratory. The other founding members were Leland Smith, John Grey, James (Andy) Moorer, and Loren Rush. The first course in computer composition had already been given at Stanford in 1969, taught by Chowning, Max Mathews, Leland Smith, and George Gucker. Having shared the space and valuable computer time with other researchers at SAIL, it was soon time for those interested in the specifics of composing with computers to have their own department at Stanford.
In 1975 Boulez spent two weeks at CCRMA studying what they were getting up to. The connection continued, and there was a lot of contact between the staff of the two institutions. One result was that the computer systems used at each ended up being compatible with one another. A number of American computer workers ended up in France helping to set up IRCAM’s initial system until the French had enough people trained in the technology themselves. There was also extensive back-and-forth visiting between CCRMA and IRCAM staff. James Moorer did a residency, and Chowning went on to become a guest artist there in 1978, 1981, and 1985.
Chowning composed his piece Phoné at CCRMA, but it had its premiere at IRCAM. In Phoné Chowning expanded upon his previous compositions in FM synthesis to give the work the feeling and texture of the human voice. It grew out of work he had started in 1978 with his student Michael McNabb on using FM synthesis to produce vocal sounds. Chowning went to work at IRCAM in late 1979 and stayed into early 1980. While at IRCAM Chowning was shown the work of Johan Sundberg and his research into vocal formants. This in turn led to the creation of algorithms used for vocal synthesis. The work Sundberg was doing went on to be the seed from which the CHANT program grew.
All of this work led to Chowning seizing on the goal of synthesizing vocal sounds that mimicked the human voice as closely as possible. A number of characteristics particular to speech needed to be implemented to deliver the goods, and these presented difficult technical hurdles. Some of the people who worked at CCRMA and IRCAM were perceptual scientists, and Chowning also noticed an indeterminate perceptual borderline between the timbres of voice and instrument. One of the sounds he was experimenting with was that of a bell, and he became fascinated with transforming that bell sound into other sounds.
His piece Phoné was written with all of this in mind. The title comes from the ancient Greek word for “voice,” the same root found in one of the main tools of telecommunications, the telephone. Using FM synthesis, Chowning was able to transform the voice of the bell into a number of different timbres, including that of a human voice with simulated formants.
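The principle at work is compact enough to sketch. In Chowning’s FM technique a single modulating oscillator bends the frequency of a carrier, and the ratio between the two decides whether the resulting partials land harmonically (voice-like) or inharmonically (bell-like). Below is a minimal numpy sketch of that contrast; the envelope shapes, ratios, and formant values are illustrative guesses of mine, not Chowning’s actual patch data.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fm_tone(fc, fm, index, dur, env):
    """Basic Chowning FM: env(t) * sin(2*pi*fc*t + index*env(t)*sin(2*pi*fm*t))."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    e = env(t)
    return e * np.sin(2 * np.pi * fc * t + index * e * np.sin(2 * np.pi * fm * t))

# Bell-like: inharmonic carrier-to-modulator ratio (1:1.4) with a long decay.
bell = fm_tone(fc=200.0, fm=280.0, index=8.0, dur=4.0, env=lambda t: np.exp(-3 * t))

# Vowel-like: one FM pair per formant, each carrier parked on the harmonic of
# the fundamental nearest a formant center (rough /a/ values), modulator at f0.
f0 = 110.0
formants = [(660, 0.6), (1120, 0.3), (2750, 0.1)]  # (center Hz, relative amp)
attack = lambda t: np.minimum(t * 20, 1.0)         # fast attack, then sustain
voice = sum(amp * fm_tone(fc=round(f / f0) * f0, fm=f0, index=2.0, dur=2.0, env=attack)
            for f, amp in formants)
```

Morphing the index, ratio, and envelope from the first setting to the second is, in miniature, the bell-into-voice transformation the piece performs.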
Intercontemporary Underground Music
Much of the space for IRCAM was built below ground, beneath the Place Igor-Stravinsky, where the boisterous noise of the city streets above does not penetrate. The underground facility was inaugurated in 1978 and contained eight recording studios, eight laboratories, an anechoic chamber, and various offices and departmental spaces. Though it has since been reorganized with the passing of the years, it was first arranged into five departments, each under its own composer-director, with Boulez as the tutelary head. These departments were Electro-Acoustics, Pedagogy, Computers, and Instruments and Voice, along with a department called Diagonal that coordinated between the others, which for the most part followed their own research and creative interests. Luciano Berio headed up the Electro-Acoustic department at the beginning.
The pièce de résistance at IRCAM is the large Espace de Projection, also known as Espro, a modular concert hall whose acoustics can be changed according to the temperament and designs of the composers and musicians working there. The Espro was created under the direction of Boulez and features a system of “boxes in boxes” to create the variable acoustics. When the space was first opened Boulez said it was “really not a concert hall, but it can project sound, light, audiovisual events, all possible events that are not necessarily related to traditional instruments.” The position of the ceiling can be moved to change the volume of the room. The walls and ceilings have panels made of rotatable prismatic modules, each with three faces: one absorbing, one reflecting, and one diffusing sound. These are called periacts and can be changed on the spot.
Boulez was busy as all get out in the seventies. As if developing IRCAM and conducting the BBC Symphony Orchestra from 1971 to 1975 and the New York Philharmonic from 1971 to 1978 were not enough, he also founded the Ensemble intercontemporain (EIC) in 1976. The EIC was built up with support from Minister of Culture Michel Guy and the British arts administrator Nicholas Snowman. It filled a gap in contemporary music by providing an ensemble available to play chamber music, and Boulez wanted to cultivate a group of musicians dedicated to performing contemporary repertoire. The EIC would have a strong working relationship with IRCAM, so that musicians were available to play compositions made in conjunction with the institute inside the Espro, as well as to tour and make recordings. This of course included Boulez’s own compositions, as he found the energy to return to writing music once his conducting activities slowed down.
Though Boulez had made a piece of musique concrète at the GRM, and had experimented with tape music in Poésie pour pouvoir, these were not his main interests in avantgarde music. What concerned Boulez was the live electronic transformation of acoustic sound. He felt that a recording played in a concert hall was like going to listen to a dead piece of music. The live transformation of live sound was what held promise. While the possibility had been explored by Stockhausen and Cage, their experiments did not have the precision that was now available with the computers and programs created at CCRMA and IRCAM.
Répons was composed in various versions between 1980 and 1984, once IRCAM was up and running and his conducting activity had slowed enough to give him time to compose. The instrumental ensemble is placed in the middle of the hall. Six soloists are placed around the audience at various points. These include two pianos, harp, cimbalom, vibraphone, and glockenspiel or xylophone, and it is these instruments that give Répons much of its color.
The instrumental music gets transformed by computer electronics and projected through the space. The harp, vibraphone, and piano create glittering sparkles that illuminate the space, fulfilling Boulez’s dream of the live electronic transformation of acoustic sound.
Once IRCAM got into a groove it started pushing out a steady stream of compositions, papers, and software from its many scientific and artistic residents and collaborators. Boulez’s vision of a “general school or laboratory” where scientists and sound artists mixed and mingled had come to fruition. One of its most famous outputs is the software suite Max/MSP.
Today the lineage of the MUSIC software Max Mathews wrote through many versions lives on in the software suite Max/MSP. Named in honor of Mathews, the software is a powerful visual programming language for multimedia performance that has grown out of its musical core. The program has been alive, well, and growing for more than thirty years and has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The software is designed and maintained by the company Cycling ’74.
Building off the gains in musical software developed by Mathews, Miller Smith Puckette (MSP) began work at IRCAM in 1985 on a program originally called The Patcher. This first version for the Macintosh had a graphical interface that allowed users to create interactive scores. It wasn’t yet powerful enough to do real-time synthesis; instead it used MIDI and similar protocols to send commands to external sound hardware.
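That division of labor, with the patch computing control decisions while outboard synthesizers made the actual sound, can be suggested with a few lines of modern code. This is only an analogy using the mido MIDI library, not anything derived from Puckette’s Patcher; the output port is whatever your system exposes by default.

```python
import time
import mido

port = mido.open_output()          # open the default MIDI output port
for note in (60, 64, 67, 72):      # arpeggiate a C major chord, one note per beat
    port.send(mido.Message("note_on", note=note, velocity=90))
    time.sleep(0.25)               # control-rate timing lives in the program...
    port.send(mido.Message("note_off", note=note))
# ...while the audio itself is rendered entirely by the external hardware.
```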
Four years later Max/FTS (Faster Than Sound) was developed at IRCAM. This version was ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT computer system. This time around it could do real-time synthesis using internal digital signal processing (DSP) hardware, making it a forerunner to the MSP extensions that would later be added to Max. 1989 was also the year the software was licensed to Opcode, which promptly launched a commercial version at the beginning of the next decade.
Opcode held onto the program until 1997. During those years a talented console jockey named David Zicarelli further extended and developed the promise of Max. Yet Opcode wanted to end its run with the software. Zicarelli knew it had even further potential, so he acquired the rights and started his own company, Cycling ’74. His timing proved fortuitous, as Gibson Guitar ended up buying Opcode and, after owning it for a year, shutting it down. Such is the fabulous world of silicon corporate buyouts.
Puckette had in the meantime released the independent and open-source composition tool Pure Data (Pd). It was a fully redesigned tool that still fell within the same tradition as his earlier program for IRCAM. Zicarelli, sensing that a fruitful fusion could be made manifest, released Max/MSP in 1997, the MSP portion being derived from Puckette’s work on Pure Data. The two have been inseparable ever since.
The achievement meant that Max was now capable of real-time manipulation of digital audio signals sans dedicated DSP hardware. The reworked program could run on a home computer or laptop, and composers could now use this powerful tool in their home studios. The musical composition software that had begun on extensive and expensive mainframes was available to anyone willing to pay the entry fee. You didn’t need the cultural connections it took to work at places like Bell Labs or IRCAM. And if you had a computer but couldn’t afford the commercial Max/MSP, you could still download Pd for free. The same is true today.
Extension packs were now being written by other companies, contributing to the ecology around Max. In 1999 the Netochka Nezvanova collective released a suite of externals that added extensive real-time video control to Max. This made the program a great resource for multimedia artists. Various other groups and companies continued to tinker and add things on.
It got to the point where Max Mathews himself, well into his golden years, was learning how to use the program named after him. Mathews received many accolades and appointments for his work. He was a member of the IEEE, the Audio Engineering Society, the Acoustical Society of America, the National Academy of Sciences, and the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences. He held the Silver Medal in Musical Acoustics from the Acoustical Society of America and was named Chevalier de l’Ordre des Arts et des Lettres by the République Française.
Max Mathews died of complications from pneumonia on April 21, 2011, in San Francisco. He was 84. He was survived by his wife, Marjorie, his three sons, and six grandchildren.
The first digital signal processing (DSP) workstation computer, the 4A, was built at IRCAM. The machine was used by Xavier Rodet, Yves Potard, and Jean-Baptiste Barrière to create the CHANT program, originally made for the analysis and synthesis of the singing voice. They developed an algorithm known as Fonction d’Onde Formantique (FOF) to emulate the human voice. By running four or five FOF generators in parallel, the program is able to model the formants created by the human vocal tract. The flexibility of the program also allows for the synthesis of instrumental sounds and noises, such as those of bells and cymbals, among many others.
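The FOF idea is concrete enough to caricature in a few lines: for each formant, emit one short damped sinusoid (a “grain”) per period of the fundamental, with the grain’s decay rate setting the formant’s bandwidth. The sketch below is a toy reduction of the published technique; the 20 ms grain length, attack time, and vowel values are my own placeholder numbers.

```python
import numpy as np

SR = 44100

def fof_formant(f0, fc, bw, dur, attack=0.003):
    """One FOF generator: a train of damped sinusoids at formant frequency fc,
    one grain per period of the fundamental f0. The exponential decay rate
    (pi * bw) sets the formant bandwidth; a short raised-cosine attack
    smooths the onset and shapes the spectral skirt."""
    out = np.zeros(int(SR * dur))
    grain_len = int(SR * 0.02)                   # 20 ms grains
    t = np.arange(grain_len) / SR
    grain = np.exp(-np.pi * bw * t) * np.sin(2 * np.pi * fc * t)
    ramp = int(SR * attack)
    grain[:ramp] *= 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))
    period = int(SR / f0)
    for start in range(0, len(out) - grain_len, period):
        out[start:start + grain_len] += grain
    return out

# Four generators in parallel approximate a sung /a/; amplitudes are guesses.
vowel = sum(a * fof_formant(110.0, fc, bw, dur=1.5)
            for fc, bw, a in [(650, 80, 1.0), (1080, 90, 0.5),
                              (2650, 120, 0.25), (2900, 130, 0.2)])
```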
CHANT’s creators made subprograms for specific types of singing or utterance, such as a bel canto voice for Western-style soprano singing, and Tibetan chant. For the bel canto subprogram they used a phase vocoder to analyze the same pitch as interpreted by a number of different singers. With this data they were able to obtain the precise frequencies of the first eight formants used by each singer. In writing the code for the algorithm they kept the frequencies of the last six formants as analyzed. For the first two they found a relationship between the frequencies of the formants and the pitch of the note being sung. They then created a rule whereby the first and second formants were placed on the first and second harmonics, except in cases where the frequency obtained fell below a fixed threshold. This allowed them to create uniform vocal color over a range of two octaves. They went on to program other rules for various parameters of singing.
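One plausible reading of that formant-placement rule, with an invented threshold value, looks like this; the published rule is surely subtler, but the shape of the logic is what matters.

```python
def place_low_formants(f0, analyzed_f1, analyzed_f2, threshold=500.0):
    """Snap the first two formants to the 1st and 2nd harmonics of the sung
    pitch f0, keeping the analyzed value whenever the harmonic in question
    falls below the fixed threshold. The threshold here is a placeholder,
    not IRCAM's published value."""
    f1 = f0 if f0 >= threshold else analyzed_f1
    f2 = 2 * f0 if 2 * f0 >= threshold else analyzed_f2
    return f1, f2

# A soprano on A5 (880 Hz): F1 rides up to the fundamental, so the vowel keeps
# a uniform color instead of being swallowed by the high pitch.
print(place_low_formants(880.0, analyzed_f1=700.0, analyzed_f2=1100.0))
# -> (880.0, 1760.0)
```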
For the Tibetan chant subprogram their main concern was to develop a system of voice emulation that accounted for noise and unusual harmonics, in contrast to the typical voice of the trained Western singer, who tries to eliminate randomness and regional accents. When using CHANT’s basic presets, noise is controlled by rules that are dependent on the formant. For the Tibetan subprogram, noise was approached through random microfluctuations within the fundamental and the frequencies of the formants. For timbre they added separate amplitude controls for the even and odd harmonics, with additional envelopes for random variation. They also tooled the articulation, modeling the consonants and constructing them in “the form of transitions from one vowel to another, affecting the amplitude, the fundamental, and the formant trajectories, that is, the frequency of each formant as a function of time.” They used the length of phonemes, fundamental frequency, vibrato, and vocal effort (the way a person adjusts their voice based on their proximity to the listener) to create rules around rhythm and stress. All of this was done in the effort to synthesize a non-Western style of vocal art.
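Two of those controls, the even/odd harmonic balance and the random microfluctuation of the fundamental, are easy to mock up in additive form. This is a loose illustration of the described parameters, not CHANT’s code; every scaling constant here is invented.

```python
import numpy as np

SR = 44100

def chant_like_tone(f0, n_harm, even_amp, odd_amp, dur, flutter=0.004):
    """Additive tone with separate even/odd harmonic amplitudes, per-harmonic
    random variation, and a slow random walk on the fundamental."""
    t = np.arange(int(SR * dur)) / SR
    drift = 1.0 + flutter * np.cumsum(np.random.randn(len(t))) / SR
    phase = 2 * np.pi * f0 * np.cumsum(drift) / SR   # integrate drifting frequency
    out = np.zeros(len(t))
    for k in range(1, n_harm + 1):
        amp = (even_amp if k % 2 == 0 else odd_amp) / k
        amp *= 1.0 + 0.05 * np.random.randn()        # random per-harmonic variation
        out += amp * np.sin(k * phase)
    return out

growl = chant_like_tone(f0=65.0, n_harm=16, even_amp=1.0, odd_amp=0.4, dur=2.0)
```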
CHANT was equipped with a number of basic parameters for relative ease of use, but those who sought total compositional control could use an extended version of the program that allowed different models to be implemented, including non-vocal ones. CHANT began with analyzing and mimicking vocal behavior, but it was capable of going beyond the voice into other areas of sound, including granular textures that opened up a variety of possibilities for spectral exploration.
The CHANT team created a number of different models to encompass the traditional instruments and some non-traditional ones. With the models in place, composers can work with these definitions to create “imaginary hybrid instruments” that give them and listeners a chance to explore new timbral spaces. Some of these possibilities have been explored by composers including Jonathan Harvey, Jukka Tiensuu, and Tod Machover, among others.
Jonathan Harvey’s Ritual Melodies
Jonathan Harvey was a British composer, born in 1939, who liked to jump across the boundaries of genre within contemporary classical music. He began his studies with Benjamin Britten, who advised him to learn also from Erwin Stein and Hans Keller. Like many other composers of his generation, he fell under the spell of Karlheinz Stockhausen and attended the composition courses at Darmstadt in 1966 and 1967. In 1969 he received a Harkness Fellowship at Princeton University, where he was able to study under Milton Babbitt.
The Balearic Islands have a history of being good for music, and Harvey wrote his 1973 piece Inner Light I while staying on Menorca. It is an electroacoustic work for seven instruments and tape, dedicated to Benjamin Britten on the occasion of his 60th birthday. He realized the tape portion while sequestered away in the studios of Swedish Radio in Stockholm and at University College, Cardiff. The electronic portion features ring modulation and varispeed tape.
Unlike many of his fellow composers on the experimental end of the spectrum, some of Harvey’s works are played with frequency, rather than just being concerned with frequency. This is in part due to his early religious affiliation with the Church of England and his own time as a chorister at St. Michael’s, Tenbury. Harvey loved choral music and wrote pieces for the British cathedral choirs. His I Love the Lord (1976) and The Angels (1992) are thus among the most recorded and performed of his works.
Harvey followed the path of many other 20th century composers and went on to teach composition, working at Southampton and Sussex Universities, while doing stints as a guest lecturer in the United States. He was happy to encourage his students, and help them develop in their own ways, rather than demanding anyone adhere to a particular school of musical thought. He hadn’t, so why should they?
Throughout his career he would flit between electroacoustic works, purely electronic pieces, and orchestral pieces that utilized live electronics. A number of works he wrote concerned the nature of speech, whether sung, spoken, or synthesized and its relationship to song.
Mortuos Plango, Vivos Voco is a short work for eight-channel tape. It uses the concrète sounds of his son, then a chorister in the Winchester Cathedral choir, singing, and the recorded sounds of the largest bell of Winchester Cathedral, transformed in various ways by the use of MUSIC V and CHANT. Other synthesized sounds were also used. The piece also uses phonetics, linguistic analysis, proportions from the golden ratio, and the judicious use of spatialization and a sonorous reverb that gels it all together.
The voice of the bell is strong in this work. The title was taken from the Latin words inscribed on the bell, which translate as “I lament the dead, I call the living.” The work is one of ethereal genius and recalls the similar use of concrète voices and electronic techniques in Gesang der Jünglinge.
Like Stockhausen, Harvey was completely open about his mysticism, and his belief in spiritual realities shines through in his music. In spiritual matters he was also as eclectic as he was in his compositions. He had a pronounced interest in Eastern religions which he seemed to be as comfortable writing music about as he was within the Christian milieu.
Bhakti was written in 1982 as a commission from IRCAM and is a piece for 15 instruments and tape. The structure of the nearly hour-long composition is based around texts from the Hindu Rig Veda, which give it a meditative and contemplative aspect. Twelve short movements, each varied three times, give it thirty-six subsections, each defined by a certain grouping of instruments playing a particular pitch cell. Showing his serialist leanings, Bhakti explores the partials of a single pitch, a quarter-tone above the G below A440. The series are made from proportional intervals above and below that frequency, with space for what Harvey calls “glossing,” or allowing for improvisation in devising the pitch cells. The tape part of Bhakti was made using sounds from the instrumental ensemble, mixed and transformed by computer. At the end of each movement a quotation from the Rig Veda is heard. Harvey considered these 4,000-year-old hymns “keys to consciousness.”
Harvey used synthesized voices and instruments again in his 1990 electronic piece Ritual Melodies, realized at IRCAM with the help of Jan Vandenheede and the program Formes, which had originally been designed as a computer-assisted composition environment for the synthesis program CHANT. Vandenheede created a number of sounds using the program. These included voices again, both Western plainchant and Tibetan-style chant. The other instrument sounds were all decidedly Eastern: a Vietnamese koto, an Indian oboe, a Japanese shakuhachi, and a Tibetan bell. Listening, they do not sound at all artificial. Voice synthesis had come a long way since the days of Daisy Bell. All of the instruments used serve ritual or religious purposes in different cultures, but Harvey wanted to bring them together in a way that wouldn’t normally happen in real-world rituals. Harvey composed 16 melodies that move seamlessly between the different synthesized instruments and form an intertwined, circular chain as melodies are introduced and morph into one another. He writes of the piece that, “Each melody uses the same array of pitches, which is a harmonic series omitting the lowest 5 pitches. Each interval, therefore, is different from every other interval. So the piece as a whole reflects the natural acoustic structure of the instruments and voices.” The bell sounds are used to mark different sections of the piece.
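Harvey’s description of the pitch array is precise enough to compute directly. Taking him at his word, the array is a harmonic series with the lowest five partials dropped, so every step between adjacent pitches is a different interval; the fundamental and the number of partials below are arbitrary choices for illustration.

```python
fundamental = 55.0                       # arbitrary illustrative fundamental (Hz)
partials = range(6, 22)                  # harmonic series minus the lowest 5 partials
pitches = [fundamental * n for n in partials]
ratios = [hi / lo for lo, hi in zip(pitches, pitches[1:])]
print([round(r, 3) for r in ratios])     # 7/6, 8/7, 9/8, ... each interval distinct
```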
Harvey was as happy to work with traditional instruments and timbres as he was making purely electronic or purely choral works. He was also happy to mix and match. He liked variety and drew his influences from a diverse grouping of musicians and teachers. All of these influences are present in his own diversity of work. His ability to move back and forth between modes gave him a lot of freedom, even if it made it hard for critics to pigeonhole his music.
Between 2005 and 2008 Jonathan Harvey was composer in residence with the BBC Scottish Symphony Orchestra. Three major works, known together as the “Glasgow Trilogy,” came out of this period. The trilogy begins with Towards a Pure Land… (2005), continues with Body Mandala (2006), and finishes with the masterpiece Speakings (2008). All three pieces combine orchestral instruments with electronics, and all three are inspired by the Buddhist side of his spiritual inclinations, but it is Speakings where Harvey once again looks into the correlations between speech and song. Within that same time span Harvey wrote Sprechgesang (2007) for oboe, cor anglais, and ensemble, and it is these two pieces that we will look at here.
Harvey ties his purely instrumental piece Sprechgesang to the earlier efforts of Schoenberg and Berg by using this word as its title. Harvey’s idea for the piece came from musing on the psychological roots of speech and sound, and how these are so often connected to the cooing, talking, and singing of a mother to her child, who experiences them first in the womb and then as a newborn in the very early process of learning to speak and sing. Halfway through the piece Harvey inserts a Wagner reference. He says this is “a moment when Parsifal 'hears' the long-forgotten voice of his dead mother call the name, his own name, that he had forgotten - an action of the shamanistic Kundry. From this awakening, this healing, comes the birth of song from the meaningless chatter of endless human discourse. 'Speech' with deep meaning...”
Speakings was commissioned in part by IRCAM and Radio France, who helped with the electronic side of things, again using programs to synthesize speech. He makes use of the orchestral palette to create further voicings that mimic the utterance of phonemes, building on the techniques he had used in Sprechgesang. From the slow beginning, the organic and the digital merge into a gradually towering babble of enunciation by the piece’s second movement. The tracery of vocoded signals is laced into the chaos of linguistic polyphony.
Harvey writes, “The orchestral discourse, itself inflected by speech structures, is electro-acoustically shaped by the envelopes of speech taken from largely random recordings. The vowel and consonant spectra-shapes flicker in the rapid rhythms and colours of speech across the orchestral textures. A process of 'shape vocoding', taking advantage of speech's fascinating complexities, is the main idea of this work.” Different instruments had the “shape vocoding” applied to them through the judicious use of microphones.
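A guess at the core of that idea: track the amplitude envelope of a speech recording band by band, then impose those contours on the matching bands of the orchestral signal. IRCAM’s actual implementation is certainly more sophisticated; the band edges and smoothing constant below are invented, and both inputs are assumed to be equal-length mono arrays.

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
BANDS = [(100, 400), (400, 1000), (1000, 2500), (2500, 6000)]  # Hz, placeholder edges

def envelope(x, smooth_hz=30.0):
    """One-pole follower over the rectified signal."""
    a = np.exp(-2 * np.pi * smooth_hz / SR)
    env = np.zeros_like(x)
    prev = 0.0
    for i, v in enumerate(np.abs(x)):
        prev = a * prev + (1 - a) * v
        env[i] = prev
    return env

def shape_vocode(speech, orchestra):
    """Impose the band-wise amplitude contour of speech onto the orchestra."""
    out = np.zeros_like(orchestra)
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=SR, output="sos")
        out += sosfilt(sos, orchestra) * envelope(sosfilt(sos, speech))
    return out
```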
The third and final movement begins with bell rings and horn blasts weaving between each other in a way that is reminiscent of how mantras are intoned with full vibration. The listener is now in a sacred place, a cathedral or temple, and the voices here chant an incantatory song, along single monodic lines reverberating through space. Here we return to what Harvey calls the “womb of all speech”: the Buddhist mantra OM-AH-HUM, which in the mythology of India is said to be half-song, half-speech. This is pure speech. The original tongue. In Judaeo-Christian terms it might be likened to the original language spoken by Adam and Eve, before the time of Babel when humanity’s tongues were shattered and split into multiplicity.
Read the rest of the Radio Phonics Laboratory: Telecommunications, Speech Synthesis and the Birth of Electronic Music.
IRCAM, CCRMA, Intercontemporary Underground Music
Born, Georgina. Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde. Berkeley, CA: University of California Press, 1995.
IRCAM. “How Well Do You Know Espro?” <https://manifeste.ircam.fr/en/article/detail/connaissez-vous-lespace-de-projection/>
Krämer, Reiner. “X: An Analytical Approach to John Chowning’s Phoné.” <https://ccrma.stanford.edu/sites/default/files/user/jc/phone_kraemer_analysis_0.pdf>
National Public Radio. “IRCAM: The Quiet House of Sound.” <https://www.npr.org/templates/story/story.php?storyId=97002999>
Smith, Richard Langham, and Caroline Potter, eds. French Music Since Berlioz. Burlington, VT: Ashgate Publishing, 2006.
Tingen, Paul. “IRCAM: Institute For Research & Co-ordination in Acoustics & Music.” <https://www.soundonsound.com/people/ircam-institute-research-co-ordination-acoustics-music>
CHANT, Jonathan Harvey’s Ritual Melodies, Speakings
Anderson, Julie. “Jonathan Harvey Dies Aged 73.” <https://www.takte-online.de/en/portrait/article/artikel/ircam-und-kathedralchor-zum-tode-jonathan-harveys/index.htm>
Bresson, Jean, and Carlos Agon. “Temporal Control over Sound Synthesis Processes.” Sound and Music Computing (SMC’06), Marseille, France, 2006.
Bolaños Chamorro, Gabriel José. “An Analysis of Jonathan Harvey’s Speakings for Orchestra and Electronics.” Ricercare No. 13, 2020.
Faber Music. “Jonathan Harvey’s Masterpiece Trilogy at Edinburgh International Festival.” <https://www.fabermusic.com/news/jonathan-harveys-masterpiece-trilogy-at-edinburgh-international-festival-252>
Harvey, Jonathan. “Inner Light 1 (1973).” <https://www.wisemusicclassical.com/work/7644/Inner-Light-1--Jonathan-Harvey/>
Harvey, Jonathan. “Ritual Melodies.” <https://www.fabermusic.com/music/ritual-melodies-1504>
Harvey, Jonathan. “Sprechgesang.” <https://www.fabermusic.com/music/sprechgesang-3850>
Harvey, Jonathan. “Speakings.” <https://www.fabermusic.com/music/speakings-5282>
Harvey, Jonathan, Denis Lorrain, Jean-Baptiste Barrière, and Stanley Haynes. “Notes on the Realization of ‘Bhakti.’” Computer Music Journal, Vol. 8, No. 3 (Autumn 1984), pp. 74-78.
Holmes, Thom. Electronic and Experimental Music: Technology, Music, and Culture. New York, NY: Routledge, 2020.
Manning, Peter. Electronic and Computer Music. Oxford, UK: Clarendon Press, 1993.
Rodet, Xavier, Yves Potard, and Jean-Baptiste Barrière. “The CHANT Project: From the Synthesis of the Singing Voice to Synthesis in General.” Computer Music Journal, Vol. 8, No. 3 (Autumn 1984), pp. 15-31.
Service, Tom. “A Guide to Jonathan Harvey’s Music.” <https://www.theguardian.com/music/tomserviceblog/2012/sep/17/jonathan-harvey-contemporary-music-guide>
The Message Screams Its Purity
Reflections on Skinny Puppy, Bogart’s, April 28th, 2023
I never thought I’d get to see Skinny Puppy live. Back in 1999, when I first became a fan of the band, around the age of twenty, they had fallen into an inactive stasis. At that time their last album had been The Process, an album marked by a number of production and recording issues, a breakdown of the collaborative spirit the members had felt on previous albums, and, finally, the death of Dwayne Goettel. The band dissipated, but cEvin Key kept busy with his Download project. Download had been my first entry point towards Skinny Puppy anyways. The other entry point had been The Tear Garden, a group formed by cEvin Key with Edward Ka-Spel of the Legendary Pink Dots - still one of my favorite experimental-psychedelic-goth bands ever. The Tear Garden featured a lot of other members from both Skinny Puppy and Legendary Pink Dots, and I was obsessed with the Dots and Tear Garden at the time. Listening to Skinny Puppy back then was part of retracing cEvin Key’s first steps, and I fell in love with what I heard.
I lost track of Skinny Puppy’s output until Weapon came out in 2013. Later still I backtracked to listen to the three albums they put out between 2004 and 2011. Of those I think hanDover may be my favorite. I still followed along with Key here and there, and was delighted a few years ago when The Tear Garden put out The Brown Acid Caveat, wonderfully lysergic, if a bit of a bum trip. Yet if it hadn’t been for the darkness and melancholy I don’t know if I would have listened in the first place.
When I first heard Skinny Puppy was going on tour I got terribly excited and made sure to get a ticket as soon as I could. The tour was slated as their 40th anniversary and final tour, so now was the chance to go if I was going to go. By the time the Friday of the show rolled around my anticipation was peaking. I’d done my homework over the past few weeks and gone back and relistened to some of my old favorite songs and several favorite albums, along with some of the others I’d never listened to as often. When I saw the line to get inside Bogart’s going all the way down the short side of Short Vine, down Corry St. towards Jefferson, I realized I was going to be standing in the longest line I’d ever been in for a concert at Bogart’s. The only others I can remember that were maybe as long were for my first punk show ever, Rancid with The Queers, or later, Fugazi.
Being in line for this show was similar to that first concert I went to as a young and aspiring skater punk: I felt a sense of unity being there with a bunch of misfit freaks. I felt at home, and normal for a couple hours. I was also happy to know that: Goths Not Dead.
Goth may not be in the same advanced and glorious state of gloom and decay as when Charles Baudelaire roamed the streets of Paris, or when Edgar Allan Poe ruled America’s imagination, but those who love the imagery of death are very much alive. Also, I’ve never seen so many Bauhaus and Peter Murphy T-shirts in one place.
Granted, a lot of these goths were aging goths and rusting industrial-music rivetheads. A large contingent seemed to be middle-aged like me, in their forties, but others were in their fifties, sixties, and a few beyond. I wasn’t sure how many newer fans Skinny Puppy had generated, but there were a lot of young people too. These included the children of just-now-having-kids Gen Xers and older Millennials, who had brought several 6-to-10-year-olds to join in the fray. I guess that shouldn’t have been surprising, as I had taken my grandson, then about nine, to see Lustmord when he came to Columbus to do a set at the CoSci planetarium. Either way it was good to see that the intertwined gothic and industrial subcultures are not only still alive, but apparently reproducing.
The show had sold out and inside the place was packed. It seems obvious that industrial music still resonates. Even as the industrial system that inspired this type of music has long been in decline, our society still grapples with its after-effects. Machines have wreaked havoc on human relations, and industrial music still wrestles with our troubled relationship to technology.
The opening band Lead Into Gold made some pretty mean electronic cuts. I liked all the instrumental aspects of the band, and they had some chest-thumping bass. I’m glad I don’t need a pacemaker, because if I did I would have worried it might get jarred loose from the vibrations. This was the side project of Paul Barker, aka Hermes Pan, former bassist for Ministry, and as such I can see why it left me a bit in the middle: Ministry had reliably been a band I always felt stuck in the middle on. I didn’t much care for the vocal side of Lead Into Gold’s performance, and they did little to interact with the audience. They weren’t bad; I just would have liked them better sans vocals.
During the intermission the palpable pressure continued to build until the lights dimmed, the members of Skinny Puppy came on stage, and the first strains of “VX Gas Attack” slipped out of the speakers. The song uses the sampled word “Bethlehem” among others, and I knew I was in some kind of embattled holy land for the duration of the concert.
Ogre started off behind a white screen, singing and doing shadow-puppet maneuvers, as his growls and inflections pummeled the gathered masses, the drums assaulted, and the electronics laid everything in a thick bed. By the time the third song, “Rodent,” came on, Ogre was out from behind the veil, wearing a long dark cloak, face covered in shadows.
Then the cloak came down to reveal Ogre as an alien with glowing green eyes that pulsed down on the audience. Another player prowled the stage with him, brandishing a kind of cattle prod or taser. Whatever little sanity was left in the crowd disappeared. The alien hadn’t come to earth to abduct, torture, or experiment on any in the crowd, but had come down and was now subject to being prodded, probed, manipulated, and perhaps even vivisected at the hands of humans. I saw this aspect of the performance as a perfect metaphor for our own state of affairs within the larger culture; this time when we are more alienated from each other than we have ever been before, at least in my memory as a tail-end Gen Xer. We live in a time of massive projection, of what Carl Jung called “the shadow.” These are the blind spots of our psyches, the places where all the things we refuse to acknowledge go to live. In our society, with all the things we repress and suppress, the contents of the shadow are bound to bubble up as a kind of crude oil used to fuel both sides of the forever culture wars. Since we refuse to look in the mirror at our own shadows, at the alien inside of us, we must find that alien other, prod, tase, torture, and beat them down.
I had expected to see a montage of horror movie clips projected behind the band, the kind of stuff that might traumatize me for the rest of my life. That wasn’t part of the show, but it didn’t need to be. They did use abstract imagery behind them, but the interaction between Ogre and the cattle-prodder on stage was more than engaging.
Now I love Ogre’s vocals, but the main draw for me as a Skinny Puppy fan has always been the electronic wizardry of cEvin Key. It was fun to watch Key playing, but as I got jostled about in the slew of people, I at first couldn’t see him so well. I was finally able to get into a spot where I could see all the players doing their thing. This leads me to the drummer. I hadn’t expected to see one there. Previous accounts of their live shows noted the use of drum machines. The live drummer was a real plus to the overall experience of the event. He beat the hell out of those drums and the sound was great, matching up expertly to all the songs.
Eventually, as the music built to its first climax, a giant brain was brought out on stage, and the player with the taser or cattle prod went straight at it, hitting the brain in rhythm, just as the electrical pulses of the music pulsed through my own head. As the concert wound down they took the music all the way back to the beginning with “Dig It,” the last song before the first encore.
When they came back on stage, it was just Key and the guitarist Matthew Setzer, engaging in an electronic jam or “Brap.” This could have gone on a lot longer in my opinion. These extended freakout sessions are some of my favorite material from their archives.
Then Ogre came back out without his costume and ripped into “God’s Gift (Maggot)” before launching into “Assimilate.” A second encore brought them back with another oldie, “Smothered Hope,” and they ended the whole thing with their more recent song “Candle.”
There is something about Skinny Puppy that is tribal. It’s like they were able to draw something out of the terroir of the land they grew up and lived on, and infused that Cascadian vibe into the music. Last year I listened to an interview done by cEvin Key with his fellow Vancouver friends Bill Leeb and Rhys Fulber (of Front Line Assembly, Delerium). They were talking about the scene in the early days, as the Canadian iteration of industrial music developed with the birth of Skinny Puppy. Key talked about being able to walk to anything that was happening, and not needing a car, and how that was an advantage of those early days in the late 70s and early 80s.
It got me thinking, as they bantered back and forth about taking drugs and walking around with big poofed-up hair, kind of like a gothed-out version of glam, how tribalistic that scene must have been for them. The creativity that was coming to them seemed to flow not only from the music that was inspiring them (Nocturnal Emissions, Throbbing Gristle, early electronics, lots of punk) but also from the energy of the land itself. The kind of industrial music that came out of Vancouver had its own particular flavor. It could only have come from there, as the consciousness of the land spilled into the people making the music. This idea about the terroir of music was later borne out by another interview I listened to on Key’s YouTube channel, about the making of the album Too Dark Park - and one of the Vancouver parks the members of the band spent a lot of time hanging out at, and the ancient Native American burial sites within the park.
I was grateful to be able to share this tribal experience with the band on stage, and feel the sense of camaraderie with the others in the crowd. All these weeks later I am still going back and forth over it in my head, assimilating the download of their specific style of industrial sound, born in Vancouver and spread around the world.
Pierre Boulez was of the opinion that music is like a labyrinth, a network of possibilities that can be traversed by many different paths. Music need not have a clearly defined beginning, middle, and end. Like the music he wrote, the life of Boulez did not follow a single track, but shifted according to the choices available. Not all of life is predetermined, even if the path of fate has already been cast. Choices remain open. Boulez held that music is an exploration of these choices. In an avantgarde composition a piece might be tied together by rhythms, tone rows, and timbre. A life might be tied together by relationships, jobs and careers, works made and things done. The choices Boulez made took him through his own labyrinth of life.
As Boulez wrote, “A composition is no longer a consciously directed construction moving from a ‘beginning’ to an ‘end’ and passing from one to another. Frontiers have been deliberately ‘anaesthetized’, listening time is no longer directional but time-bubbles, as it were…A work thought of as a circuit, neither closed nor resolved, needs a corresponding non-homogenous time that can expand or condense”.
Boulez was born in Montbrison, France, on March 26, 1925, to an engineer father. As a child he took piano lessons, played chamber music with local amateurs, and sang in the school choir. Boulez was gifted at mathematics, and his father hoped he would follow him into engineering by way of an education at the École Polytechnique, but opera intervened. He saw Boris Godunov and Die Meistersinger von Nürnberg and had his world rocked. Then he met the celebrity soprano Ninon Vallin; the two hit it off and she asked him to play for her. She saw his inherent talent and helped persuade his father to let him apply to the Conservatoire de Lyon. He didn’t make the cut, but this only strengthened his resolve to pursue a life path in music.
His older sister Jeanne, with whom he remained close for the rest of his life, supported his aspirations and helped him receive private instruction on the piano and lessons in harmony from Lionel de Pachmann. His father remained opposed to these endeavors, but with his sister as his champion he held strong. In October of 1943 he again auditioned for the Conservatoire and was again turned down. Yet a door opened when he was admitted to the preparatory harmony class of Georges Dandelot. From there his ascension in the world of music was swift.
Two of the choices Boulez made were to have a long-lasting impact on his career: his choices of teachers. The first was Olivier Messiaen, whom he approached in June of 1944. Messiaen taught harmony outside the bounds of traditional notions, and embraced the new music of Schoenberg, Webern, Bartók, Debussy, and Stravinsky.
In February of 1945 Boulez attended a private performance of Schoenberg’s Wind Quintet, and the event left him breathless and led him to his second influential teacher. The piece was conducted by René Leibowitz, and Boulez organized a group of students to take lessons from him for a time. Leibowitz had studied with Schoenberg and Anton Webern and was a friend of Jean-Paul Sartre. His performances of music from the Second Viennese School made him something of a rock star in the avant-garde circles of the time. Under the tutelage of Leibowitz, Boulez was able to drink from the font of twelve-tone theory and practice.
Boulez later told Opera News that this music “was a revelation — a music for our time, a language with unlimited possibilities. No other language was possible. It was the most radical revolution since Monteverdi. Suddenly, all our familiar notions were abolished. Music moved out of the world of Newton and into the world of Einstein.”
The work of Leibowitz helped the young composer to make his initial contributions to integral serialism, the total artistic control of all parameters of sound, including duration, pitch, and dynamics according to serial procedures. Messiaen’s ideas about modal rhythms also contributed to his development in this area and his future work.
Milton Babbitt had been first in developing his own system of integral serialism, independently of his French counterpart, having written his thesis on set theory and twelve-tone music in 1946. At this point the two were not aware of each other’s work. Babbitt’s first works to use integral serialism were Three Compositions for Piano (1947) and Composition for Four Instruments (1948).
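The bookkeeping behind integral serialism can be shown schematically: one ordering is used to index not just pitch classes but durations and dynamics as well. The sketch below is a generic illustration of the principle, not a reconstruction of any specific Boulez or Babbitt procedure; the row and the value scales are arbitrary.

```python
row = [3, 5, 2, 1, 10, 11, 9, 0, 8, 4, 7, 6]        # an arbitrary 12-tone row

PITCHES   = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]
DURATIONS = [(i + 1) / 4 for i in range(12)]        # 12 duration values, in beats
DYNAMICS  = ["pppp", "ppp", "pp", "p", "mp", "mf",
             "f", "ff", "fff", "ffff", "sfz", "fp"]  # 12 dynamic levels

# The same series governs every parameter of each event at once.
events = [(PITCHES[i], DURATIONS[i], DYNAMICS[i]) for i in row]
for pitch, dur, dyn in events:
    print(f"{pitch:>2}  {dur:>4} beats  {dyn}")
```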
While studying under Messiaen, Boulez was introduced to non-Western music. He found it deeply inspiring and spent a period of time in the museums studying Japanese and Balinese musical traditions and African drumming. Boulez later commented, "I almost chose the career of an ethnomusicologist because I was so fascinated by that music. It gives a different feeling of time."
In 1946 the first public performances of Boulez’s compositions were given by pianist Yvette Grimaud. He kept himself busy living the art life, tutoring the son of his landlord in math to help make ends meet. He made further money playing the ondes Martenot, an early French electronic instrument designed by Maurice Martenot, who had been inspired by the accidental sound of overlapping oscillators he had heard while working with military radios. Martenot wanted his instrument to mimic a cello, and Messiaen had used it in his famous Turangalîla-Symphonie, written between 1946 and 1948. Boulez got a chance to improvise on the ondes Martenot as an accompanist to radio dramas. He also organized the musicians in the orchestra pit at the Folies Bergère cabaret music hall.
His experience as a conductor was furthered when the actor Jean-Louis Barrault asked him to play the ondes for the production of Hamlet he was mounting with his wife, Madeleine Renaud, for their new company at the Théâtre Marigny. A strong working relationship was formed, and Boulez became the music director of their Compagnie Renaud-Barrault. A lot of the music he had to play for their productions was not to his taste, but it put some francs in his wallet and gave him the opportunity to compose in the evenings. He got to write some of his own incidental music for the productions and to tour South America and North America several times each, in addition to dates with the company around Europe. These experiences stood him in good stead when he embarked on the conductor’s path as part of his musical life.
In 1949 Boulez met John Cage when the latter came to Paris, and helped arrange a private concert of the American’s Sonatas and Interludes for prepared piano. Afterwards the two began an intense correspondence that lasted for six years. In 1951 Pierre Schaeffer hosted the first musique concrète workshop. Boulez, Jean Barraqué, Yvette Grimaud, André Hodeir, and Monique Rollin all attended. Olivier Messiaen, assisted by Pierre Henry, created a rhythmical work, Timbres-durées, made from a collection of percussive sounds and short snippets.
At the end of 1951, while on tour with the Renaud-Barrault company, he visited New York for the first time, staying in Cage’s apartment. He was introduced to Igor Stravinsky and Edgard Varèse. Cage was becoming more and more committed to chance operations in his work, and this was something Boulez could never get behind. Instead of adopting a “compose and let compose” attitude, Boulez withdrew from Cage, and later broke off their friendship completely.
In 1952 Boulez met Stockhausen, who had come to study with Messiaen, and the pair hit it off, even though neither spoke the other’s language. Their friendship continued as both worked on pieces of musique concrète at the GRM, Boulez’s contribution being his Deux Études. In turn, Boulez came to Germany in July of that year for the summer courses at Darmstadt. There he met Luciano Berio, Luigi Nono, and Henri Pousseur, among others, and found himself moving into a role as an acerbic ambassador for the avantgarde.
Sound, Word, Synthesis
As Boulez got his bearings as a young composer, the connections between music and poetry came to capture his attention, as it had Schoenberg. Poetry became integral to Boulez’s orientation towards music, and his teacher Messiaen would say that the work of his student was best understood as that of a poet.
Sprechgesang, or speech-song, a vocal technique halfway between speaking and singing, was first used in formal music by Engelbert Humperdinck in his 1897 melodrama Königskinder. In some ways sprechgesang is a German analogue of the already established practice of recitative in opera, as found in Wagner’s compositions. Arnold Schoenberg used the related term Sprechstimme for a technique in his song cycle Pierrot lunaire (1912), where he employed a special notation to indicate the parts that should be sung-spoken. Schoenberg’s disciple Alban Berg used the technique in his opera Wozzeck (1924), and Schoenberg employed it again in his opera Moses und Aron (1932).
In Boulez’s explorations of the relationship between poetry and music he questioned "whether it is actually possible to speak according to a notation devised for singing. This was the real problem at the root of all the controversies. Schoenberg's own remarks on the subject are not in fact clear."
Pierre Boulez wrote three settings of René Char’s poetry: Le Soleil des eaux, Le Visage nuptial, and Le Marteau sans maître. Char had been involved with the Surrealist movement, was active in the French Resistance, and mixed freely with other Parisian artists and intellectuals. Le Visage nuptial (The Nuptial Face), from 1946, was an early attempt at reuniting poetry and music across the gap that had opened between them so long ago. Boulez took five of Char’s erotic texts and wrote the piece for two voices, two ondes Martenot, piano, and percussion. In the score there are instructions for “Modifications de l’intonation vocale.”
His next attempt in this vein was Le Marteau sans maître (The Hammer Without a Master, 1953-57), and it remains one of Boulez’s most regarded works, a personal artistic breakthrough. He brought his studies of Asian and African music to bear on the serialist vortex that had sucked him in, and he spat out one of the stars of his own universe. The work is made up of interwoven cycles based on settings of three poems from Char’s collection of the same name: four movements with voice and five of purely instrumental music. The wordless sections act as commentaries on the parts employing Sprechstimme. First written in 1953 and 1954, Boulez revised the order of the movements in 1955 while infusing the work with newly composed parts. This version was premiered that year at the Festival of the International Society for Contemporary Music in Baden-Baden. Boulez had a hard time letting his compositions, once finished, simply be, and he tinkered with it some more, creating another version in 1957.
Le Marteau sans maître is often compared with Schoenberg’s Pierrot lunaire. By using Sprechstimme as one of the components of the piece, Boulez was able to emulate his idol Schoenberg while contrasting his own music with that of the originator of the twelve-tone system. As with much music of the era written by his friends Cage and Stockhausen, the work is challenging for the players, and here most of the challenges are directed at the vocalist. Humming, glissandi, and jumps over wide ranges of notes are common in the piece.
The work takes Char’s idea of a “verbal archipelago” where the images conjured by the words are like islands that float in an ocean of relation, but with spaces between them. The islands share similarities and are connected to one another, but each is also distinct and of itself. Boulez took this concept and created his work where the poetic sections act as islands within the musical ocean.
A few years later he worked with material by the symbolist and hermetic poet Stéphane Mallarmé when he wrote Pli selon pli (1962). Mallarmé’s A Throw of the Dice was a particular influence. In that poem the words are placed in various configurations across the page, with changes of size and instances of italics or all-capital letters. Boulez made these correspond to changes in the pitch and volume of the poetic text. The title comes from a different work by Mallarmé and translates as “fold according to fold”: in his poem Remémoration d’amis belges, he describes how a mist gradually lifts, fold by fold, to reveal the city of Bruges.
Subtitled A Portrait of Mallarmé, the work uses five of his poems in chronological order, starting with “Don du poème” (1865) for the first movement and finishing with “Tombeau” (1897) for the last. Some consider the last word of the piece, mort (death), to be the only intelligible word in the work. The voice is used more for its timbral qualities, woven into the course of the music, than as something to be focused on alone.
Later still, Boulez took e. e. cummings’ poems as inspiration for his 1970 work cummings ist der dichter. Boulez worked hard to relate poetry and music in his work. It is no surprise, then, that the institute he founded would go far in giving machines the ability to sing, and would foster the work of other artists interested in the relationships between speech and song.
Ambassador of the Avantgarde
At the end of the 1950s Boulez left Paris for Baden-Baden, where he had scored a gig as composer in residence with the South-West German Radio Orchestra. Part of his work consisted of conducting smaller concerts. He also had access to an electronic studio, where he set to work on a new piece, Poésie pour pouvoir, for tape and three orchestras. Baden-Baden would become his home, and he eventually bought a villa there, a place of refuge to return to after the various engagements that took him around the world and on extended stays in London and New York. His experience conducting for the Théâtre Marigny had sharpened his skills in this area, making it all possible.
Boulez had gained some experience as a conductor in his early days as a pit boss at the Folies Bergère. He gained further experience conducting the Venezuela Symphony Orchestra while on tour with his friend Jean-Louis Barrault. In 1959 he was able to get further out of the mold of conducting incidental music for theater and get down to the business he was about: the promotion of avantgarde music.
The break came when he replaced the conductor Hans Rosbaud, who was sick, and a replacement was needed at short notice for a program of contemporary music at the Aix-en-Provence and Donaueschingen Festivals. Four years later he had the opportunity to conduct the Orchestre National de France in the fiftieth-anniversary performance of Stravinsky's The Rite of Spring at the Théâtre des Champs-Élysées in Paris, where the piece had first premiered to the shock of the audience. Conducting suited Boulez as an outlet for his energies, and he went on to lead performances of Alban Berg’s opera Wozzeck, followed by Wagner’s Parsifal and Tristan and Isolde.
In the 1970s Boulez had a triple coup in his career. The first part of his tripartite attack for avantgarde domination involved becoming conductor and musical director of the BBC Symphony Orchestra. The second part came after Leonard Bernstein’s tenure as conductor of the New York Philharmonic was over, and Boulez was offered the opportunity to replace him. He felt that through innovative programming he would be able to remold the minds of concertgoers in both London and New York.
Boulez was also fond of getting people out of stuffy concert halls to experience classical and contemporary music in unusual places. In London he gave a concert at the Roundhouse, a former railway turntable shed, and in Greenwich Village he gave more informal performances during a series called “Prospective Encounters.” When getting out of the hall wasn’t possible he did what he could to transform the experience inside the established venue. At Avery Fisher Hall in New York he started a series of “Rug Concerts” where the seats were removed and the audience was allowed to sprawl out on the floor. Boulez wanted "to create a feeling that we are all, audience, players and myself, taking part in an act of exploration".
The third prong came when the President of France asked him to come back to his home country and set up a musical research center.
Read the rest of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis and the Birth of Electronic Music.
Benjamin, George. “George Benjamin on Pierre Boulez: 'He was simply a poet.'” <https://www.theguardian.com/music/2015/mar/20/george-benjamin-in-praise-of-pierre-boulez-at-90>
Boulez, Pierre. Orientations: Collected Writings. Cambridge, MA.: Harvard University Press, 1986.
Glock, William. Notes in Advance: An Autobiography in Music. Oxford, UK.: Oxford University Press, 1991.
Greer, John Michael. “The Reign of Quantity.” <https://www.ecosophia.net/the-reign-of-quantity/>
Griffiths, Paul. “Pierre Boulez, Composer and Conductor Who Pushed Modernism’s Boundaries, Dies at 90.” <https://www.nytimes.com/2016/01/07/arts/music/pierre-boulez-french-composer-dies-90.html>
Jameux, Dominique. Pierre Boulez. London, UK.: Faber & Faber, 1991.
Peyser, Joan. To Boulez and Beyond: Music in Europe Since the Rite of Spring. Lanham, MD.: Scarecrow Press, 2008.
Ross, Alex. “The Godfather.” <https://www.newyorker.com/magazine/2000/04/10/the-godfather>
Sitsky, Larry, ed. Music of the 20th Century Avant-Garde: A Biocritical Sourcebook. Westport, CT.: Greenwood Press, 2002.
Electric Oscillations: The Studio for Electronic Music of the West German Radio: Part II
Karlheinz Stockhausen’s Studies in Electronics
Stockhausen was born on August 22nd, 1928 in a large manor house, called by locals the “castle,” in the village of Mödrath, Germany. His father Simon was a school teacher, and his mother Gertrud had been born into a family of prosperous farmers. His sister Katherina was born the following year, and a brother, Hermann-Josef, the year after that. He experienced music in the house growing up, with his mother playing piano and singing, but she suffered a mental breakdown and was institutionalized in 1932. His brother died the following year, and his mother was murdered in a gas chamber by the Nazi regime in 1941. She had been deemed what the fascists called a “useless eater,” and her killing was part of the mass murder they carried out on those they deemed socially or physically defective.
A version of this episode was later dramatized in his first opera Donnerstag aus LICHT.
In 1935 Stockhausen began the early stages of his musical training with piano lessons from the organist at the Altenberger Dom, or Abbey Church of Altenberg, where the family now lived. Around the time he was ten his father married the family housekeeper. After his two half-sisters were born, he left home and became a boarder in 1942, continuing to learn music and adding oboe and violin to his studies. In 1944 Stockhausen was forced to join the armed forces as a stretcher bearer, working for the hospital in Bedburg. During this time he played piano for the wounded on both sides. In February of 1945 he saw for the last time his father, who was sent to fight on the Eastern Front and is thought to have been killed in action in Hungary.
His father had been a Nazi fanatic, and the death of his mother at the hands of those whom his father adored, along with all the horrors and carnage he had seen during the war, left Stockhausen with a strong aversion to war and its atrocities. When they had lived together, his father had liked to blast the militaristic marches and patriotic music of the fascist regime on the radio. Stockhausen hated these sounds thereafter, and felt that such strict rhythms had been used to goad people into complacence and compliance. He sought solace in the rituals and music of the Catholic Church. As he matured his sense of spirituality expanded to encompass the teachings of other world traditions, but his native Christianity was always a touchstone, albeit one that he took to as a mystic rather than a fundamentalist. In a similar way, he left behind the comforts of traditional music to explore the fringes of the avantgarde.
After the war, between 1947 and 1951, Stockhausen studied music at the Hochschule für Musik Köln (Cologne Conservatory of Music) and musicology, philosophy, and German studies at the University of Cologne. It was also in this period that he traveled with the stage magician Alexander Adrion, accompanying his performances on piano. Towards the end of this period of study he met Herbert Eimert and Werner Meyer-Eppler.
Stockhausen had often thought of becoming a writer. He had a passion for the novels of Hermann Hesse and Thomas Mann. The Glass Bead Game by Hesse and Doctor Faustus by Mann, both of which deal with music, touched him on many levels. Yet it was the mystical philosophy of music in Hesse’s novel, and the way music could be related to other bodies of knowledge, that became a model for the work he would go on to produce, providing a lasting influence.
In 1951 Stockhausen went to the avant-garde version of summer school, the annual courses held in that season at Darmstadt. It was here that he first encountered the music of Olivier Messiaen. Inspired, he began studying and composing serial music, and wrote his early pieces Kreuzspiel and Formel. In January of 1952 he went to Paris to study under Messiaen, where he had the chance to meet his contemporary Pierre Boulez and see firsthand what Pierre Schaeffer was getting up to with musique concrète.
While hanging about with Boulez in Paris he also met the composers Jean Barraqué and Michel Philippot, all of whom were investing their time and efforts in creating works of musique concrète at the GRM. As his year in France progressed Stockhausen was finally given permission to work in the studio, but on the limited basis of recording natural sounds and percussion instruments for their tape library. In December Stockhausen was given the go-ahead to make a piece of his own, becoming the first non-French composer to use their resources. The source sounds came from a prepared piano; the recordings were cut into fragments, spliced back together, and transposed using the phonogène. It took him twelve days to make something the length of a pop song, at three minutes and ten seconds, though there is nothing pop about the result. The process caused him to become disenchanted with musique concrète. The piece was only released, with his approval, in 1992 as part of a collection of his early work, the rest of which had been realized in the WDR Electronic Music Studio.
As 1953 rolled around, Eimert invited Stockhausen to become his assistant in the WDR studio. Soon after his arrival in March of 1953 he determined that the Monochord and Melochord were useless for his ambition to totally organize all aspects of sound, including timbre. Only the humble sine-wave generator, or beat-frequency oscillator, would be able to do with sound what he envisioned. He asked for these from Fritz Enkel, the head of the calibration and testing department. Enkel brought him the gear, but was beside himself: the station had spent a pretty penny, 120,000 Marks, on its two showpiece instruments. Enkel was also skeptical of Stockhausen’s ability to accomplish his task with such a limited kit, saying, “it will never work!” This was to become a refrain throughout his career, whenever people didn’t think he’d be able to finish his ambitious projects. His reply served him well for the rest of his career: "Maybe you're right, but I want to try it all the same".
When it came time for Stockhausen to create his first piece of pure electronic music in the studio in 1953, he did not go in for the use of the Monochord or the Melochord, but went straight for the sine tone oscillators. His idea was to build a piece totally from scratch, following a plan of the serial organization of sounds, with added reverb to give a sense of spatialized sound. The devices he used to create what became Studie I were all originally used for the calibration of radio equipment. Here they were put into the service of art.
These pieces were as much an exploration of musical mathematics and acoustic science as they were novel pieces of new music made on tape with lab equipment. Behind these works stands the work of Hermann Helmholtz, and behind him that of Georg Simon Ohm, and behind him Joseph Fourier, all of whom provided the intellectual additives necessary to synthesize Stockhausen’s new music.
Studie I can be heard as a musical-scientific exploration of Joseph Fourier’s ideas about sine waves and how they correspond to the harmonics of a common fundamental. It can also be heard as a further exploration of Ohm’s acoustic law, which states that a musical sound is perceived by the ear as a set of constituent pure harmonic tones.
He began his musical study with a question. "The wave-constitution of instrumental notes and the most diverse noises are amenable to analysis with the aid of electro-acoustic apparatus: is it then possible to reverse the process and thus to synthesize wave-forms according to analytic data? To do so one would ... have to take and combine simple waves into various forms..."
A sine tone made with electronics contains no overtones, since it consists of just a single frequency. In this respect, the sine tone can be considered the prima materia, or first matter, of the radiophonic laboratory, the basic building block required to create the magnum opus. Using the tape machines he recorded sine waves of different frequencies at different volumes, and mixed them together to build up new synthesized timbres, in a process of manual additive synthesis. Studie I became the first composed piece of music to use this laborious additive synthesis method.
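To make the method concrete, here is a minimal sketch in Python of manual additive synthesis: pure sine tones at chosen frequencies and volumes are summed into a composite timbre. The sample rate, frequencies and amplitudes are illustrative assumptions, not values from Stockhausen's score, and arrays stand in for what was, in 1953, a stack of tape recordings.

```python
import numpy as np

SR = 44100  # sample rate in samples per second (illustrative)

def sine_tone(freq_hz, dur_s, amp):
    """A pure sine tone: a single frequency with no overtones."""
    t = np.arange(int(SR * dur_s)) / SR
    return amp * np.sin(2 * np.pi * freq_hz * t)

# Additive synthesis in miniature: sum sine tones of different
# frequencies and volumes to build up a new composite timbre.
partials = [(200.0, 0.5), (340.0, 0.3), (578.0, 0.2), (983.0, 0.1)]
timbre = sum(sine_tone(f, 2.0, a) for f, a in partials)
```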
Stockhausen said the piece was “the first composition with sine tones.” In this respect this first piece of pure electronic music showed his devotion to the electron as a kind of musical unit unto itself. Looking at it another way, he chose this method to differentiate himself from what Schaeffer and Henry were doing with recorded sounds, what Cage was doing with prepared pianos, and what others were doing with the proto-synthesizers.
Stockhausen had cut his teeth splicing tape at the RTF studios when he created his Konkrete Etüde, and now got to use the toolkit of musique concrète, running tapes backwards, speeding them up, slowing them down, fading them in and out. The idea behind the piece was to start at the center of the human auditory range and move outwards in both directions to the limits of perceptible pitch. It was further organized around justly intoned ratios taken from the partials of the overtone series.
In Studie II, Stockhausen explored the serial treatment of timbre. He again used sine tones, choosing combinations of five, whose frequencies are all related to each other as the 25th root of different powers of 5. This amounts to a close approximation of the Golden Section or Proportion, and it is hard to think he came to those numbers and powers just by chance. (He later used the Fibonacci sequence as a time signature in his piece Klavierstück IX, and his use of other mathematics and magic squares in his compositions shows his familiarity with these subjects.) The method of combining these tones differs from Studie I: here he played them back-to-back in a reverb chamber and recorded the result.
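Assuming the figures usually cited for Studie II (a 100 Hz starting point and a scale of 81 steps), the frequency material implied by "the 25th root of different powers of 5" can be sketched as follows; both assumed constants are flagged in the comments.

```python
# Sketch of a Studie II-style frequency scale. Each scale step multiplies
# the frequency by the 25th root of 5 (about 1.0665, roughly 111.5 cents),
# so 25 steps span a ratio of exactly 5, wider than two octaves.
BASE_HZ = 100.0        # assumed starting frequency
STEP = 5 ** (1 / 25)   # interval ratio between adjacent degrees

scale = [BASE_HZ * STEP ** n for n in range(81)]  # 81 degrees, an assumption
# scale[0] == 100.0, scale[25] == 500.0, scale[50] == 2500.0 (Hz)
```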
The Konkrete Etüde and the Studies comprise a masterful warm-up act as Stockhausen got comfortable working in the studio.
Gesang der Jünglinge
There is a mystery in the sounds of the vowels. There is a mystery in the sound of the human voice as it is uttered from the mouth and born into the air. And there is a mystery in the way electrons, interacting inside an oscillating circuit, can be synthesized and made to sing. Karlheinz Stockhausen set out to investigate these mysteries of human speech and circuitry as a scientist of sound, using the newly available radiophonic equipment at the WDR’s Studio for Electronic Music. The end result of his research was channeled into the vessel of music, giving the ideas behind his inquiries an aesthetic and spiritual form. In doing so he unleashed his electroacoustic masterpiece Gesang der Jünglinge (Song of the Youths) into the world.
Part of his inspiration for Gesang der Jünglinge came from his studies of linguistics, phonetics and information theory with Meyer-Eppler at the University of Bonn between 1954 and 1956. The other part came from his spiritual inclinations. At the time of its composition Stockhausen was a devout Catholic. His original conception for the piece was for it to be a sacred electronic Mass born from his personal conviction. According to the official biography, he had asked Eimert, his other mentor, to write to the Diocesan office of the Archbishop for permission to have the proposed work performed in the Cologne Cathedral, the largest Gothic church in northern Europe. The request was refused on the grounds that loudspeakers had no place inside a church. No records of this request have been uncovered, so the story is now considered apocryphal. There are doubts that Eimert, who was a Protestant, ever actually brought up the subject with Johannes Overath, the man at the Archdiocese responsible for granting or denying such requests. In March of 1955 Overath had become a member of the Broadcasting Council, and it is likely he was an associate of Eimert's. What we can substantiate is that Stockhausen did have ambitions to create an electronic Mass, and that he experienced frustrations and setbacks in his search for a suitable sacred venue for its performance, one that would be sanctioned by the authorities at the church.
These frustrations did not stop Stockhausen from realizing his sound-vision. The lectures given by Meyer-Eppler had seeded inspiration in his mind, and those seeds took the form of syllables, vowels, phonemes, and fricatives. Stockhausen set to work creating music where voices merged in a sublime continuum with synthetic tones that he built from scratch in the studio. To achieve the desired effect of mixing human voice with electronics he needed pure speech timbres. He decided to use the talents of Josef Protschka, a twelve-year-old boy chorister, who sang fragments derived and permutated from the “Song of the Three Youths in the Fiery Furnace” in the third chapter of the Book of Daniel. In the story three youths are tossed into the furnace by King Nebuchadnezzar. They are rescued from the devouring flames by an angel who hears them singing a song of their faith. The story resonated strongly with Stockhausen, who at the time considered himself to be a fiery youth. Still in his twenties, he was full of energy, but was under verbal fire and critical attack from the classical music establishment, who lambasted him for his earlier works. Gesang der Jünglinge showed his devotion to the divine through song despite this persecution.
The electronic bedrock of the piece was made from generated sine tones, pulses, and filtered white noise. The recordings of the boy soprano’s voice were made to mimic the electronic sounds: vowels are harmonic spectra which may be conceived as based on sine tones; fricatives and sibilants are like filtered white noise; and the plosives resemble the pulses. Each part of the score was composed along a scale that ran from discrete events to statistically structured massed "complexes" of sound. The composition is now over sixty years old, yet the mixture of synthetic and organic textures Stockhausen created is still fresh. It speaks of something new, and angelic.
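As a rough illustration of that three-way mapping, the electronic source families can be sketched in Python; the pitch, pulse rate and noise band below are illustrative guesses, not values taken from the piece.

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100
t = np.arange(SR // 2) / SR  # half a second of time axis

# Vowel-like: a harmonic spectrum summed from sine tones.
vowel_like = sum(np.sin(2 * np.pi * 220.0 * k * t) / k for k in range(1, 6))

# Plosive-like: a sparse train of clicks (impulses).
plosive_like = np.zeros_like(t)
plosive_like[:: SR // 40] = 1.0  # 40 clicks per second

# Fricative-like: white noise confined to a band by a filter.
noise = np.random.default_rng(0).standard_normal(len(t))
b, a = butter(2, [3000 / (SR / 2), 6000 / (SR / 2)], btype="band")
fricative_like = lfilter(b, a, noise)
```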
Stockhausen eventually triumphed over his persecution when he won the prestigious Polar Music Prize (often considered the "Nobel Prize of music") in 2001. At the ceremony he controlled the sound projection of Gesang der Jünglinge through the four loudspeakers surrounding the audience.
These breakthroughs in 20th-century composition practice wouldn’t have been possible without the foresight of the WDR in creating an Electronic Music Studio and promoting new music on its stations.
Making Telemusik at NHK
Following the success of the Studio for Electronic Music in Germany, other countries started to take note. The composer Toshiro Mayuzumi had already had his mind blown in May of 1952 at a musique concrète performance at the Salle de l'Ancien Conservatoire in Paris, commenting that “the concert was such a shock that it fundamentally altered my musical life.” He had visited Schaeffer’s studio while on the trip, and when he returned to Japan he began to apply its techniques to a film soundtrack. Working at the JOQR (NCB) studios in Tokyo he produced his first explicitly musique concrète piece, “Œuvre pour musique concrète x, y, z”. The x portion was made up of metallic sounds, the y of human, animal and water sounds, and the z portion was taken from the sounds of musical instruments. When it was finished it premiered over the JOQR radio network and lit Japan on fire. In 1954 the station invited Mayuzumi to create more music in this vein. The end product of this next effort was “Boxing,” a radio play with a script by the celebrated Japanese novelist Yukio Mishima. For the work, Mayuzumi employed over 300 types of sounds, and it became a sensation across the island country.
That same year a group of technicians and program producers were sent some materials by their German colleagues at the WDR. This was the aptly named Technische Hausmitteilungen des NWDR, 1954; Sonderheft über Elektronische Musik (Technical In-House Communications from the NWDR, 1954; Special Issue about Electronic Music). The paper explored some of the gear and techniques being used in Cologne, and the theories behind their use.
Enter Makoto Moroi, a prolific composer who had studied everything from Gregorian chant to renaissance and baroque music, on to twelve-tone composition and serialism. Alongside his love of traditional Japanese instruments was a growing interest in what could be done musically with electronics. Music was an ocean he swam in, and many different rivers contributed to his flow. This led him on a pilgrimage to Cologne in 1955 to hang out with Stockhausen and take in the state of the art at the WDR Studio over a three-week visit.
In the fall of 1955 the NHK followed the course charted by the WDR and began to set up its own studio in Tokyo. It acquired its own Monochord and Melochord alongside a collection of other oscillators, bandpass filters, tape machines, and the other gear that enabled Japan to start charting its own course in the world of avant-garde and electronic music.
Mayuzumi was quick to get to work and produced the first completely electronic music in Japan with his trilogy Music for Sine Waves by Proportion of Prime Number, Music for Modulated Waves by Proportion of Prime Number, and Invention for Square Waves and Sawtooth Waves. These investigations were directly influenced by Stockhausen’s Studie I and II. A year later, in 1956, the laboratory at NHK distilled its second piece of pure electronic music, Variations on the Numerical Principle of 7, by Mayuzumi and Moroi. For this piece the influence of Studie II was closely copied, though with a different numerical basis: here it was based on a 49/7 scale, 49 tones divided up to the seventh overtone.
After these initial inquiries and treatments in the studio, where the composers followed the lead of their European counterparts, things started to move off in directions more thoroughly Japanese. Mayuzumi created the thirty-minute Aoi-no-Ue, based on a traditional Noh play from the Muromachi period (15th century). Noh singing is combined with electronics, in place of the usual instruments and drums, to create a unique 20th-century version of the material.
In 1959 Mayuzumi started to explore the sonorities of traditional Japanese bells in his compositions. This resulted in a series of pieces with Campanology in the title. He started this work by recording the sounds of the huge bells found at Buddhist temples all over Japan. He analyzed the sound of these bells acoustically and then made his first Campanology, a ten-minute piece synthesized from the data retrieved from his recordings. In his Nirvana Symphony he gave the first, third and fifth movements this name. Later, in 1967, when the NHK equipped an 88-string piano with magnets and pickups that could be electronically modulated, he wrote the first piece for it, Campanology for Multipiano.
The NHK continued to produce a variety of works by a number of composers throughout the 1950s and into the next decade. Wataru Uenami had been the chief of the studio from its beginning, and he had always wanted to invite Stockhausen over and commission him to create works for their airwaves. He finally succeeded in this endeavor and brought him over in January of 1966, four years after Stockhausen had himself taken over as director of the WDR studio from Herbert Eimert.
When he arrived in Japan Karlheinz was severely jet-lagged and disoriented. For several days he couldn’t sleep. That’s when the strange hallucinatory visions set in. Lying awake in bed one night, his mind was flooded with ideas of "technical processes, formal relationships, pictures of the notation, of human relationships, etc.—all at once and in a network too tangled up to be unraveled into one process.” These musings of the night took on a life of their own, and from them he created Telemusik.
Of Stockhausen’s many ambitions, one was to make a unified music for the whole planet. He achieved that in this piece, though the results sound nothing like the “world music” or “world beat” genre often found playing in coffee houses and gift shops. Into the twenty minutes of the piece he mixed found sounds, folk songs and ritual music from Hungary, Spain, China, Japan, Bali and Vietnam, as well as the Amazon and the Sahara. He also used new electronic sounds and traditional Japanese instruments to create what he called "a higher unity…a universality of past, present, and future, of different places and spaces: TELE-MUSIK." This practice of taking and combining sound sources from all over is now widely practiced across all genres of music in the form of sampling. But for Karlheinz it wasn’t simply a matter of making audio collage or taking one sample to build a song around. Even though he used samples from existing recordings to make something different, he also developed a new audio process that he termed intermodulation.
In his own words he explains the difference between collage and intermodulation: “I didn’t want a collage, I wanted to find out if I could influence the traits of an existing kind of music, a piece of characteristic music, using the traits of other music. Then I found a new modulation technique, with which I could modulate the melody curve of a singing priest with electronic timbres, for example. In any case, the abstract sound material must dominate, otherwise the result is really mishmash, and the music becomes arbitrary. I don’t like that.” For example, he described combining "the chant of monks in a Japanese temple with Shipibo music from the Amazon, and then further imposing a rhythm of Hungarian music on the melody of the monks. In this way, symbiotic things can be generated, which have never before been heard."
Stockhausen kept the pitch range of Telemusik deliberately high, between 6 and 12 kHz, so that the intermodulation could occasionally project sounds downwards. He wanted some of the sections to seem “far away because the ear cannot analyse it” before they abruptly entered “the normal audible range and suddenly became understandable". The title of the piece comes from the Greek tele, "afar, far off", as in "telephone" or "television". The music works consistently to bring what was “distant” close up. Cultures which were once far away from each other can now be seen up close, brought together by the power of telecommunications systems, new media formats, and new music. By using recordings of traditional folk and ritual music from around the world Stockhausen brought the past into the future and mixed it with electronics.
To accomplish all this at the NHK studio he used a six-track tape machine and a number of signal processors, including high- and low-pass filters, amplitude modulators and other existing equipment. Stockhausen also designed a few new circuits for use in the composition. One of these was the Gagaku Circuit, named after the Japanese gagaku orchestra music it was designed to modulate. It used two ring modulators in series to create double ring-modulation mixes of the sampled sounds. A 12 kHz carrier was used in both the first and second ring modulation, with a glissando in the second ring-modulation stage. The music was then frequency-filtered in different stages at 6 kHz and 5.5 kHz.
Writer Ed Chang explains the effect of the Gagaku Circuit: “For example, in one scenario the 1st ring modulation A used a very high 12 kHz sine-wave base frequency, resulting in a very high-pitched buzzing texture (for example, a piano note of A, or 0.440 kHz, would become a high 12.440 kHz and 11.560 kHz). The 2nd ring-mod B base frequency (in this case with a slight glissando variation on the same 12 kHz base frequency) has the effect of ‘demodulating’ the signal (bringing it back down to near A). This demodulated signal is also frequency filtered to accentuate low frequencies (dark sound). These 2 elements (high buzzing from the 1st signal and low distorted sounds from the 2nd) are intermittently mixed together with faders. By varying the 2 ring-mod base frequencies and the 3 frequency filters, different effects could be achieved. This process of modulation and demodulation is what Stockhausen means when he says he was able to ‘reflect a few parts downwards’.”
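A hedged Python sketch of that double ring-modulation scheme may help. Ring modulation is simply multiplication by a sine carrier, which replaces a frequency f with the pair carrier ± f; a second, slightly detuned carrier then largely demodulates the signal. The 440 Hz stand-in source and the exact glissando range are assumptions for illustration.

```python
import numpy as np

SR = 48000  # sample rate (illustrative)

def sine(freq_hz, n):
    return np.sin(2 * np.pi * freq_hz * np.arange(n) / SR)

def gliss(f_start, f_end, n):
    """A sine carrier whose frequency glides from f_start to f_end."""
    freqs = np.linspace(f_start, f_end, n)
    return np.sin(2 * np.pi * np.cumsum(freqs) / SR)

n = SR  # one second of audio
voice = sine(440.0, n)             # stand-in for a sampled sound
stage1 = voice * sine(12000.0, n)  # ring mod A: components at 12440 / 11560 Hz
stage2 = stage1 * gliss(12000.0, 11950.0, n)  # ring mod B: near-demodulation

# stage2 holds components near 440 Hz (difference frequencies) plus very
# high ones (sums around 24 kHz); filtering stage2 toward the lows and
# crossfading it against the buzzing stage1 gives the alternation of
# high and dark textures Chang describes.
```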
The first public performance of Telemusik took place at the NHK studios in Tokyo on April 25th, 1966. He dedicated the score to the spirit of the Japanese people.
After Stockhausen’s visit the experimental music germ continued to spread, and the composers who were already in on the game challenged themselves with bolder, more technical and ambitious pieces.
Telemusik prepared Stockhausen for his next monumental undertaking, Hymnen (Anthems), made at the WDR studio. The piece had already been started before Telemusik, but he had to set it aside while in Japan. Hymnen is a mesmerizing elaboration of the studio technique of intermodulation first mastered at NHK. It is also a continuation of his quest to make a form of world music at a time when the people of the planet were becoming increasingly connected in McLuhan’s global village. To achieve this goal, he incorporated forty national anthems from around the globe into one composition. To start, he collected 137 national anthems by writing to radio stations in those countries and asking them to send recordings to the WDR in Germany.
The piece has four sections, though it was first slated for six; the last two never materialized. The anthems from around the world are intermodulated into an intricate web of sound lasting around two hours. Thrown into the kaleidoscopic mix are all manner of other sounds produced with the entire toolkit of the WDR studio, alongside sounds from shortwave radio. These radio sounds make the entire recording sound as if you are tuning across the bands of a world-receiver radio, hearing the anthems of different countries as interval signals, colliding with each other and causing transformations as the two signals meet. In the audio spectrum and in the radio spectrum, borders and boundaries are porous, permeable.
The point of all this is, in Stockhausen's words, “to imagine the conception of modulating an African style with a Japanese style, in the process of which the styles would not be eliminated in order to arrive at a supra-style or a uniform international style - which, in my opinion, would be absurd. Rather, during this process, the original, the unique, would actually be strengthened and in addition, transformations of the one into the other, and above all two given factors in relation to a third would be composed. The point is to find compositional processes of confrontations and mixtures of style - of intermodulations - in which styles are not simply mixed together into a hodge podge, but rather in which different characters modulate each other and through this elevate each other and sharpen their originality."
As with Telemusik, his aim was to go beyond what he thought of as mere collage, or what in the early 2000s might have been called a mash-up. The combination of the different materials is only the first step. When each of the elements interacts with another, it ends up being transformed, changed by the association, and something new is distilled from the alembic of creativity.
Just as Hymnen mixes different anthems together, it also fuses musique concrète with electronic music. Hymnen can be heard as just this recorded tape piece, but he also wrote a version in which the tape, played by a sound projector (or diffusionist), is accompanied by an orchestra with its own score. This shows his tenacity in using all manner of music-making tools, and intermodulating them with one another.
Hymnen ends with a new anthem for a utopian realm called "Hymunion," a mixture of the words Hymn and Union. Perhaps Hymunion can be reached through the shared communion that comes from truly listening to each other.
György Ligeti’s Artikulation
György Ligeti was born in Transylvania, Romania in 1923 into a Hungarian Jewish family. His parents were both doctors. He was a great-grand-nephew of the violinist Leopold Auer and a second cousin of the philosopher Ágnes Heller. In 1940 Kolozsvár (Cluj), the northern Transylvanian town his family lived in, became a part of Hungary, and the next year he began his formal musical training at the local conservatory.
The events of WWII would not leave his family untouched for long. At the time Hungary was part of the Axis powers, relying on fascist Italy and Germany for help pulling itself out of the economic plight caused by the bank failures that had rippled through the world during the Great Depression. In 1944 Ligeti was sent to a forced labor brigade by the Horthy regime. His parents and sixteen-year-old brother suffered a worse fate: all three were sent to the death camps, his parents to Auschwitz and his brother to Mauthausen-Gusen. His mother was the only one to survive.
After the war was over, Ligeti returned to Budapest, took what solace he could in his musical pursuits, and graduated from the Franz Liszt Academy of Music in 1949. He also spent some time doing ethnomusicological research into the folk music of Hungarians in Transylvania, but eventually took a job at his alma mater teaching harmony, counterpoint and musical analysis.
Communication with those outside the Eastern Bloc had been effectively stifled in the first half of the fifties, when Ligeti was teaching, and Communist Hungary was already putting restrictions on what was acceptable for his creativity. In 1956 there was an uprising against the People’s Republic, but it was quickly crushed by the Soviets. In the aftermath Ligeti fled with his wife to Vienna, Austria, and then made his way to Cologne, where he met Karlheinz Stockhausen and Gottfried Michael Koenig. In the summer he attended the Darmstadt courses and started working in the WDR electronic music studio.
Ligeti, like the others in the Cologne milieu, came under the influence of Werner Meyer-Eppler’s ideas and decided to write a work that would address “the age-old question of the relationship between music and speech.” The piece was composed as an imaginary conversation: multiple ongoing monologues, dialogues, many voices in argument and chatter.
He first chose different types of noise from which to create artificial phonemes, made recordings, and grouped them into a number of categories. Then he devised a formula to determine the tape length allotted to each type. After this he used aleatoric methods, taking the different phonemes at random and combining them into what would become the sonic articulation of words. The work, Artikulation, was realized in 1958 with the help of Cornelius Cardew (himself an assistant of Karlheinz Stockhausen). In it Ligeti created a kind of artificial polyglot language full of strange whispers, enunciations and utterances.
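A toy version of that procedure is easy to sketch; everything here, the category names, the snippet inventory and the length formula, is invented for illustration, standing in for Ligeti's recorded noise types and his tape-length calculations.

```python
import random

# Invented inventory of categorized "phoneme" snippets.
CATEGORIES = {
    "plosive":  ["k", "t", "b", "g"],
    "sibilant": ["s", "sch", "z", "f"],
    "vocalic":  ["a", "e", "i", "o", "u"],
}

# Stand-in for the formula fixing tape length per type: sharper
# sounds get shorter durations (in seconds).
LENGTHS = {"plosive": 0.05, "sibilant": 0.2, "vocalic": 0.4}

def artificial_word(rng, n_phonemes):
    """Draw phonemes at random and splice them into an artificial word."""
    word, tape_seconds = [], 0.0
    for _ in range(n_phonemes):
        cat = rng.choice(list(CATEGORIES))
        word.append(rng.choice(CATEGORIES[cat]))
        tape_seconds += LENGTHS[cat]
    return "".join(word), tape_seconds

rng = random.Random(1958)  # the year Artikulation was realized
print(artificial_word(rng, 5))  # one aleatoric "word" and its tape length
```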
Artikulation was just one of many notable works produced at the WDR, which became a kind of ground zero for the subsequent explosion of electronic music and of studios modeled in its image. Gottfried Michael Koenig was one of the technicians at the studio and a composer who created many key pieces there, such as Klangfiguren II (1955), Essay (1957) and Terminus I (1962). Nam June Paik moved from Korea to Cologne in 1958 to work at the studio. While there he became interested in the use of televisions as a medium for making art, and he would go on to become a pioneer of video art. Cornelius Cardew and Holger Czukay also made use of the studio, among many others.
As the 1960s rolled into the 1970s new electronic music equipment became available and the place received a bit of an overhaul under Stockhausen’s direction. It was in this era that they obtained an EMS Synthi 100 as part of their laboratory set-up.
Read the rest of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis and the Birth of Electronic Music.
Maconie, Robin. Other Planets: The Music of Karlheinz Stockhausen.
Maconie, Robin. Stockhausen’s Electronic Studies I and II. 2015.
Maconie, Robin. The Works of Karlheinz Stockhausen. 2nd edition.
A video of the 2001 performance of Gesang der Jünglinge can be seen here: https://www.youtube.com/watch?v=UmGIiBfWI0E
Shibata, Minao. “Music and Technology in Japan” (article).
Telemusik CD liner notes, Stockhausen Verlag edition.
Hymnen, liner notes from the Stockhausen edition.
Holmes, Thom. Electronic and Experimental Music. Sixth edition.
Sitsky, Larry, ed. Music of the 20th Century Avant-Garde: A Biocritical Sourcebook.
Electric Oscillations: The Studio for Electronic Music of the West German Radio: Part I
Dr. Friedrich Trautwein and the Radio Experimental Laboratory
The story of the Studio for Electronic Music at the WDR is linked to the earlier work of two German instrument makers, Dr. Friedrich Trautwein and Harald Bode. Two institutions were also critical precursors for the development of the technology around electronic music: the Heinrich Hertz Institute for Research on Oscillations and the Staatlich-akademische Hochschule für Musik. For the latter in particular, the opening of its Rundfunkversuchstelle, or Radio Experimental Lab, will be briefly explored, as it was important in the history of radio and electronic music. The philosophical and aesthetic milieu surrounding what was called “electrical music” in Germany at the time became one of the intellectual cornerstones from which the studio in Cologne was created.
Dr. Friedrich Trautwein was born on August 11, 1888 in Würzburg, Germany and became an engineer with strong musical leanings. After beginning an education in physics, he quit and turned his attention to law, so that he could work for the post office in the capacity of a patent lawyer and protect intellectual property around developments in radio technology. When WWI broke out he became the head of a military radio squadron. The experience cemented his love for communications technology. After the war ended he went on to receive a PhD in electrical engineering. Between 1922 and 1924 he got two patents under his belt, one of them for generating musical notes with electrical circuits. Trautwein then went to Berlin in 1923, where he worked at the first German radio station, the Funk-Stunde AG Berlin.
On May 3, 1928 the Staatlich-akademische Hochschule für Musik (State-Academic University of Music) opened its new department, the Rundfunkversuchstelle (RVS), or Radio Experimental Lab. One of its goals was researching new directions and possibilities associated with the development of radio broadcasting. At the time in Germany, much thought was going into the way music was played and heard over the radio. There were many issues of noise and fidelity on early broadcasting equipment and receiver sets that made symphonies, opera singers and other music less pleasant to listen to over the air. Some people thought this was because listening to a radio broadcast was simply different from the way music was perceived in a concert hall or music venue. These minds thought a new form of music should be created specifically for the medium. This idea for a new musical aesthetic came to be known as Rundfunkmusik, or radio-music, and was tied to the Neue Sachlichkeit, or New Objectivity. The RVS was in part established to explore the possibilities of radio-music.
In 1930 Trautwein was hired as a lecturer on electrical acoustics at the RVS. One of the other goals of the institution was to create new musical instruments that specifically catered to the needs of radio. An overarching aim was to create new tonalities that would electrify the airwaves and sing out in greater fidelity inside people’s homes on their receiving sets. It was at the RVS that Trautwein collaborated with the composers Paul Hindemith and Georg Schünemann and the musician Oskar Sala to create his instrument, the trautonium. Another objective Trautwein had during his time at the RVS was to analyze problems around the electronic reproduction and transmission of sound, as Harvey Fletcher and others had at Bell Labs. Unlike Bell Labs, though, the RVS was part of a music conservatory, and while it also had the goal of clarifying speech, it was deeply interested in electronic music. It took Bell Labs until the 1950s to get in on that game.
One of the aims of the trautonium was to be an instrument that could be used in the home among family members for what the Germans called Hausmusik. They wanted it to be able to mimic the sounds of many other instruments, in a way similar to an organ. To achieve this they worked with various resistors and capacitors and employed a glow-lamp circuit to create the fundamental frequencies. Changes in resistance and capacitance in the circuit altered the frequency. Trautwein also added additional resonance circuits to his design, tuned to different frequencies, and connected these to high- and low-pass filters that could then create formants within the sound. All this control over the sound led to the ability to create very unusual tonalities alongside the familiar and traditional.
Changes in tone color were available at the turn of a dial. A new sound could be dialed in just as a new station could be found by turning the knob of a radio. Tone color isn’t static either, but changes as the sound moves through time. This is the acoustical envelope of a sound, and Trautwein took it into consideration when designing his instrument.
In their search for rich tonalities Trautwein and his colleagues stumbled across the mystery of the vowels. Preceding Homer Dudley’s vocoder by eight years, it became the first instrument able to reproduce the sounds of the vowels. This led Trautwein and Sala to discover the many similarities that exist between vowel sounds and the timbre of a variety of instruments.
Trautwein compared the oscillograms of spoken vowel formants with those played by the trautonium and found that they conformed to each other. “The trautonium is an electrical analogy of the sound creation of the human speech organs,” he wrote in his 1930 paper Elektrische Musik. “The scientific significance lies in the physico-physiological impression of the synthetically generated sounds compared with the timbre of numerous musical instruments and speech sounds. This suggests that the physical processes are related in many cases.”
For the first iteration of the instrument there were knobs for changing the formants and timbre, and a pedal for changing the volume. The process used to change the tone color was an early form of subtractive synthesis, which simply filtered down an already complex waveform rather than building one up by adding sine waves together.
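A minimal sketch of that subtractive, formant-shaping idea, assuming a sawtooth wave as the overtone-rich source and band-pass filters standing in for the tuned resonance circuits; the formant centers and bandwidths below are illustrative, not Trautwein's values.

```python
import numpy as np
from scipy.signal import butter, lfilter, sawtooth

SR = 44100

def resonance(signal, center_hz, bandwidth_hz):
    """A band-pass filter standing in for one tuned resonance circuit."""
    nyq = SR / 2
    band = [(center_hz - bandwidth_hz / 2) / nyq,
            (center_hz + bandwidth_hz / 2) / nyq]
    b, a = butter(2, band, btype="band")
    return lfilter(b, a, signal)

# Subtractive synthesis: start from a waveform already rich in
# overtones and filter it down, instead of adding sine waves up.
t = np.arange(SR) / SR
rich = sawtooth(2 * np.pi * 110.0 * t)  # complex source wave

# Two illustrative formant regions lend the tone a rough vowel color.
voweled = resonance(rich, 700.0, 150.0) + resonance(rich, 1200.0, 200.0)
```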
On June 20th, 1930 a demonstration of the trautonium was given at the New Music in Berlin festival. This was to be an “Electric Concert,” and one of the main attractions was the premiere of Paul Hindemith’s Trio-Pieces written for the instrument. Hindemith himself played the top part on one of the three instruments, with Trautwein’s collaborator Oskar Sala playing the middle voice and a piano teacher named Rudolph Schmidt playing the bass.
A commercial version of the instrument, dubbed the Volkstrautonium, was manufactured and distributed by the German radio equipment company Telefunken starting in 1932, but it was expensive and difficult to learn to play, and so remained unpopular. The company managed to sell only about two a year, and by 1938 the product was discontinued. Composers remained somewhat interested in its abilities, and Hindemith, who had acted as an advisor to Trautwein, wrote the Concertino for Trautonium and Orchestra in 1940.
Oskar Sala became a virtuoso on the instrument and would play compositions by Niccolò Paganini on it. In time he took over the further development of the trautonium, creating his own variations: the Mixtur-Trautonium, the Concert-Trautonium and the Radio-Trautonium. He continued to champion it until his death in 2002. Famously, the sounds of the birds in Alfred Hitchcock’s The Birds were not sourced from real birds, but came from the Mixtur-Trautonium as played by Sala.
In 1935 the RVS was shut down by Joseph Goebbels, but it did not disappear entirely, as its various elements were diffused into different parts of the music school. After WWII, Trautwein had a hard time getting a job because he had been a card-carrying Nazi. He did build a few more instruments, including the Amplified Harpsichord in 1936 and the Electronic Bells in 1947. A modified version of the original trautonium called the Monochord (not to be confused with the stringed instrument and learning tool of the same name) was purchased by the Electronic Music Studio at the WDR in 1951, as detailed below. His later legacy was to create the first sound engineering program, in Düsseldorf in 1952.
Harald Bode and the Heinrich Hertz Institute for Research on Oscillations
Harald Bode was the next instrument maker to place his stamp upon the Electronic Music Studio at the WDR, and he later added a few flourishes to the work done at the Columbia-Princeton Electronic Music Center. Born the son of a pipe organ player, he became in his own time an inventor of musical instruments. He had studied mathematics, physics and natural philosophy at Hamburg University. His first instrument was the Warbo-Formant Organ of 1937, a completely electronic polyphonic formant organ. New sounds could be created on it by simply adjusting its half-rotary and stop knobs.
Bode’s next step in his education was the Heinrich-Hertz-Institut für Schwingungsforschung, or Heinrich Hertz Institute for Research on Oscillations (HHI), located in Berlin, where he went for his postgraduate studies. At the time the HHI focused on high-frequency radio technology, telephony and telegraphy, acoustics and mechanics, and its research extended to radio, television, sound-film technology, architectural acoustics and the new field of electronic music. The HHI, like the RVS, was interested in developing and promoting the idea of electronic music and radio-music.
It was in this phase that Bode developed his Melodium, alongside his collaborators Oskar Vierling and Fekko von Ompteda. The Melodium was a touch-sensitive, monophonic yet multi-timbral instrument that became popular with film score composers of the era. Since it was monophonic, it presented fewer tuning problems than his wobbly Warbo-Formant Organ had. Feeling inspired by his achievement, Bode decided that creating electronic musical instruments would be “the task of my life time.”
His dream was put on hold when WWII broke out in 1939. Despite the dire conflict, and the spiritual sickness at work in his country, Bode counted himself lucky to be able to go into the electronics industry; the only other choice was active military duty. He still made things for the German war effort, but he wasn’t a foot soldier, working instead on submarine sound and wireless communications projects.
In the aftermath of WWII he was newly married and moved from Berlin to a small village in southern Germany, where he tinkered on his next invention in the attic lab of the home where he had started a family. The result was the first iteration of the Melochord, in 1947.
The Melochord was a two-tone melody keyboard instrument. Its most interesting features were the controls for shaping formants that included various filters to attenuate the sound, ring modulation for harmonics, and the ability to generate white noise and apply attack and decay envelopes. The Melochord was promoted on the radio and in the newspapers, where it was praised for its clear and resonant tones.
Werner Meyer-Eppler got wind of the Melochord and started to use it in his experiments at the University of Bonn. A lot of skill went into playing the Melochord, and while Meyer-Eppler experimented, Bode set his sights on making a more user-friendly version called the Polychord, which became the first in a series of synthesis-type organs that Bode took with him on his path of continued electronic creation.
Genesis of the Studio for Electronic Music
Just as the GRM had been built around a philosophy of the transformation of sound, so too was the Studio for Electronic Music of the West German Radio (WDR) built around a philosophy of the synthesis of sound. Werner Meyer-Eppler was the architect of the strategies to be employed in this laboratory, and the blueprint was his book, Elektronische Klangerzeugung: Elektronische Musik und Synthetische Sprache (Electronic Sound Generation: Electronic Music and Synthetic Speech). This philosophy placed the emphasis on building up sounds from scratch, out of oscillators and lab equipment. It stood in contrast to the metamorphic, transformational approach pursued by Schaeffer and Henry with musique concrète. Tape, however, remained an essential lifeblood for both studios.
Meyer-Eppler was still lecturing at the Institute for Phonetics and Communication Research of Bonn University while he wrote his book, in which he made an inventory of the electronic musical instruments developed up to that point. Meyer-Eppler then experimented at Bonn with what became a basic electronic music process: composing music directly onto tape. One of the instruments he used in his experiments was Harald Bode’s Melochord, and he also used vocoders. He encouraged his students to hear the sounds of the vocoder mixed with the sounds of the Melochord as a new kind of music.
The genesis of the Studio for Electronic Music came in part from the transmission and recording of a late-night radio program about electronic music on October 18, 1951, broadcast on the Nordwestdeutscher Rundfunk, and from a meeting of minds held in regard to the program. At the meeting were Meyer-Eppler and his colleagues Herbert Eimert and Robert Beyer, among others. Beyer had long been a proponent of a music oriented more towards timbre than other considerations. Eimert was a composer and musicologist who had published a book on atonal music in the 1920s while still a student at the Cologne University of Music. He had also written a twelve-tone string quartet as part of his composition examination; for these troubles, his teacher Franz Bölsche had him expelled from the class. Eimert was devout when it came to noise, twelve-tone music and serialism, and he became a relentless advocate who organized concerts, events and radio shows and wrote numerous articles on the subject of his passion. He eventually did graduate with a doctorate in musicology in 1931, despite Bölsche's attempts to thwart his will. Fritz Enkel, who had also been at the meeting, was a skilled technician, and he designed a framework around which a studio for electronic music could be built. The station manager, Hans Hartmann, heard a report of the meeting and gave the go-ahead to establish an electronic music studio.
Creating such a studio would bring national prestige to West Germany. After the war West Germany took great pains to be seen as culturally progressive, and having a place where the latest musical developments could be explored and created by its artists was part of showing the world that it was moving forward. Another reason to develop the studio was to use its output for broadcasting. At the time the WDR was the largest and wealthiest broadcaster in West Germany, and it could use its pool of funds to create something that would have been cost-prohibitive for most private individuals and companies.
Before they even got the equipment, when they felt the studio might not even get off the ground and become a reality, they made a demonstration piece to broadcast and to show what might be achieved. Studio technician Heinz Schütz was tapped to make this happen, even though he didn’t consider himself a composer or musician. The fact that a non-musician was the first to demonstrate the potential of making music in an electronic studio is apropos of the later development of the field, when people like Joe Meek and Brian Eno, who also didn’t call themselves musicians, nonetheless made amazing music with the studio as their instrument. The piece by Schütz was titled Morgenröte (The Red of Dawn) to signify the beginning of their collective efforts. The piece was made with limited means, using just what they had available, and its producer considered its creation to be, at most, accidental.
The piece by Schütz was typical of what came out of the studio before funding was secured. They didn’t have much to work with except tape, test equipment, and recordings of Meyer-Eppler’s previous work with the Melochord and vocoders. Eimert and Beyer “remixed” these experiments while they got their set-up established. The process of working with the tapes and test equipment gave them the experience and confidence they needed for further work in their laboratory of sound creation.
Eimert and Beyer eventually put together some other sound studies as the studio came together piece by piece. These largely followed “pure audio criteria” and were premiered at the Neues Musikfest (New Music Festival) presentation on May 26, 1953 at the broadcasting studio of the Cologne Radio Centre, the event that marked the official opening of the WDR studio. Put together quickly, the pieces played did not live up to the standards Eimert had set for the studio, and this caused a falling out between him and Beyer, who thought they were adequate. The next year Beyer resigned.
Eventually Bode’s Melochords and Trautwein’s Monochord were acquired, and each was modified specifically for use in the studio. Once these were in place the studio really got cooking. Next to them the composers used electronic laboratory equipment such as noise and signal generators, sine wave oscillators, band-pass filters, octave filters, and pulse and ring modulators, among others. Oscilloscopes were used to look at sounds. Mixers were used to blend them together. There was a four-track tape recorder used to synchronize sounds that had been recorded separately and join them in musical union. It could be used to overdub sounds on top of each other as one tape was being copied to another, a then-new technique developed from Meyer-Eppler’s ideas. The mixer had a total of sixteen channels divided into two groups of eight. There was a remote control to operate the four-track and the attached octave filter. A cross-plug busbar panel served as a central locus where all the other inputs and outputs met. Connections could be switched with ease between instruments and sound sources, as if one were transferring a call at a telephone switchboard.
Soon one of the early pieces of electronic music was transmuted from the raw electrons forged within its crucible of equipment into an enduring classic that showcased Karlheinz Stockhausen’s burgeoning genius.
Read the rest of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis and the Birth of Electronic Music.
Schütz, Heinz, Gottfried Michael Koenig, Konrad Boehmer, Karlheinz Stockhausen, György Ligeti, Mauricio Kagel, and Rolf Gehlhaar. 2002. "Erinnerungen 2: Studio für Elektronische Musik". In Musik der Zeit, 1951–2001: 50 Jahre Neue Musik im WDR—Essays, Erinnerungen, Dokumentation, edited by Frank Hilberg and Harry Vogt, 147–54. Hofheim: Wolke.
[Read Part I]
Milton Babbitt: The Musical Mathematician
Though Milton Babbitt was late to join the party started by Luening and Ussachevsky, his influence was deep. Born in 1916 in Philadelphia to a father who was a mathematician, he became one of the leading proponents of total serialism. He had started playing music as a young child, first violin and then piano, and later clarinet and saxophone. He was devoted to jazz and other popular forms of music, which he had started writing before he was even a teenager. One summer, on a trip to Philadelphia with his mother to visit her family, he met his uncle, a pianist studying music at Curtis. His uncle played him one of Schoenberg’s piano compositions, and the young man's mind was blown.
Babbitt continued to live and breathe music, but by the time he graduated from high school he felt discouraged from pursuing it as his calling, thinking there would be no way to make a living as a musician or composer. He also felt torn between his love of writing popular songs and the desire to write serious music that had come to him from his initial encounter with Schoenberg. He did not think the two pursuits could coexist. Unable or unwilling to decide, he went into college specializing in math. After two years of this, his father helped convince him to do what he loved and go to school for music.
At New York University he became further enamored with the work of Schoenberg, who became his absolute hero, and with the Second Viennese School in general. In this period he also got to know Edgard Varèse, who lived in a nearby apartment building. Following his degree at NYU, at the age of nineteen, he started studying privately with the composer Roger Sessions at Princeton University. Sessions had started off as a neoclassicist; through his friendship with Schoenberg he did explore twelve-tone techniques, but just as another tool he could use and modify to suit his own ends. From Sessions Babbitt learned the technique of Schenkerian analysis, a method which uses harmony, counterpoint and tonality to arrive at a broader sense and deeper understanding of a piece of music. One of the other methods Sessions used to teach his students was to have them choose a piece, and then write one in a different style using all the same structural building blocks.
Sessions was engaged by Princeton University to form a graduate program in music, and it was through his teacher that Babbitt eventually got his Masters from the institution, joining the faculty in 1938. During the war years he was pressed into service as a mathematician doing classified work, dividing his time between Washington D.C. and Princeton, where he taught math to those who would need it for work such as that of radar technicians. During this time he took a break from composing, but music never left his mind, and he started focusing on musical thought experiments, with a focus on aspects of rhythm. It was during this period of deep thought on music that he thoroughly internalized Schoenberg’s system. After the war was over he went back to his hometown of Jackson and wrote a systematic study of the Schoenberg system, “The Function of Set Structure in the Twelve Tone System.” He submitted the completed work to Princeton as his doctoral thesis. Princeton didn’t give out doctorates in music, only in musicology, and his complex thesis wasn’t accepted until 1992, eight years after his retirement from the school.
His thesis and his other extensive writings on music theory expanded upon Schoenberg’s methods and formalized the twelve tone, “dodecaphonic”, system. The basic serialist approach was to take the twelve notes of the western scale and put them into an order called a series, hence the name of the style; such an ordering was also called a tone row. Babbitt saw that a series could be used to order not only pitch, but dynamics, timbre, duration and other elements. This led him to pioneer “total serialism”, which was later taken up in Europe by composers such as Pierre Boulez and Olivier Messiaen, among others.
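To make the mechanics concrete, here is a minimal Python sketch of the basic row operations serial composers worked with: transposition, retrograde, and inversion. The example row is arbitrary, chosen only for illustration, and is not one of Babbitt’s.

```python
# Minimal sketch of twelve-tone row operations, with pitch classes
# numbered 0-11 (C=0 ... B=11). The example row is arbitrary, chosen
# only for illustration; it is not one of Babbitt's rows.

def transpose(row, n):
    # Shift every pitch class up n semitones, wrapping at the octave.
    return [(p + n) % 12 for p in row]

def invert(row):
    # Mirror each interval around the row's first pitch class.
    return [(2 * row[0] - p) % 12 for p in row]

def retrograde(row):
    # Play the row backwards.
    return list(reversed(row))

row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]

print(transpose(row, 5))            # P5: the row up a fourth
print(retrograde(row))              # R: the row backwards
print(invert(row))                  # I: the row inverted
print(retrograde(invert(row)))      # RI: retrograde of the inversion

# Total serialism extends the same ordering idea beyond pitch: a
# parallel series can rank dynamics, durations, or timbres.
```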
Babbitt treated music as a field for specialist research and wasn’t very concerned with what the average listener thought of his compositions. This had its pluses and minuses. On the plus side it allowed him to explore his mathematical and musical creativity in an open-ended way and see where it took him, without worrying about having to please an audience. On the minus side, not keeping his listeners in mind, and his ivory tower mindset, kept him from reaching people beyond the most serious devotees of abstract art music. This tendency was an interesting counterpoint to his years as a teenager, when he was an avid writer of pop songs and played in every jazz ensemble he could. Babbitt had thought of Schoenberg’s work as being “hermetically sealed music by a hermetically sealed man.” He followed suit in his own career. In this respect Babbitt can be considered a true Castalian intellectual and Glass Bead Game player. Within the Second Viennese School there was an idea, a thread taken from 19th century romanticism and adapted from the philosophy of Arthur Schopenhauer, that music provides access to spiritual truth. Influenced by this milieu, Babbitt’s own music can be read and heard as connecting players and listeners to a platonic realm of pure number.
Modernist art had already moved into areas that many people did not care about. And while Babbitt was under no illusion that his work would ever be widely celebrated or popular, as an employee of the university he had to make the case that music was in itself a scientific discipline: that music could be explored with the rigors of science, and that it could be made using formal mathematical structures. Performances of this kind of new music were aimed at other researchers in the field, not at a public who would not understand what they were listening to without education. Babbitt’s approach rejected a common practice in favor of what would become the new common practice: many different ways of investigating, playing, working with and composing music, each going off in its own direction.
During WWII Babbitt had met John von Neumann at the Institute for Advanced Study. His association with von Neumann caused Babbitt to realize that the time wasn’t far off when humans would be using computers to assist them with their compositional work. Unlike some of the other composers who became interested in electronic music, Babbitt wasn’t interested in new timbres. He thought their novelty was quick to wear off. He was interested in how electronic technology might enhance human capability with regards to rhythm.
In 1957 Luening and Ussachevsky wrote up a long report for the Rockefeller Foundation detailing all that they had learned and gathered so far as pioneers in the field. They included in the report another idea: the creation of the Columbia-Princeton Electronic Music Center. There was no place like it within the United States. In a spirit of synergy the Mark I was given a new home at the CPEMC by RCA. This made it easier for Babbitt, Luening, Ussachevsky and the others to work with the machine. It would however soon have a younger, more capable brother nicknamed Victor, the RCA Mark II, built with additional specifications as requested by Ussachevsky and Babbitt.
There were a number of improvements that came with Victor. The number of oscillators had been doubled, for starters. Since tape was the main medium of the new music, it also made sense that Victor should be able to output to tape instead of lathe-cut discs. Babbitt was able to convince the engineers to fit it out with multi-track tape recording on four tracks. Victor also received a second tape punch input, a new bank of vacuum tube oscillators, noise generating capabilities, additional effect processes, and a range of other controls.
Conlon Nancarrow, who was also interested in rhythm as an aspect of his composition, bypassed the issue of getting players up to speed with complex and fast rhythms by writing works for player-piano, punching the compositions literally on the roll. Nancarrow had also studied under Roger Sessions, and he and Babbitt knew each other in the 1930s. Though Nancarrow worked mostly in isolation during the 1940s and 1950s in Mexico City, only gaining critical recognition from the 1970s onwards, it is almost certain that Babbitt would have at least been tangentially aware of his work composing on punched player piano rolls. Nancarrow did use player pianos that he had altered slightly to increase their dynamic range, but they still had all the acoustic limitations of the instrument.
Babbitt, on the other hand, found himself with a unique instrument capable of realizing his vision for a complex, maximalist twelve-tone music, made available to him through the punched paper reader on the RCA Mark II and its ability to do multitrack recording. This gave him the complete compositional control he had long sought. For Babbitt, it wasn’t so much the new timbres that could be created with the synth that interested him as being able to execute a score exactly in all parameters. His Composition for Synthesizer (1961-1963) became a showcase piece, not only for Babbitt, but for Victor as well. His masterpiece Philomel (1963-1964) saw the material realized on the synth accompanied by soprano Bethany Beardslee, and it subsequently became his most famous work. In 1964 he also completed Ensembles for Synthesizer. All of these are unique in the respect that none of them featured the added effects that many of the other composers using the CPEMC availed themselves of; these were outside the ambit of his vision.
Phonemena for voice and synthesizer from 1975 is a work whose text is made up entirely of phonemes. Here he explores a central preoccupation of electronic music, the nature of speech. It features twenty-four consonants and twelve vowel sounds. As ever with Babbitt, these are sung in a number of different combinations, with musical explorations focusing on pitch and dynamics.
A teletype keyboard was attached directly to the long wall of electronics that made up the synth. It was here the composer programmed her or his inventions by punching the tape onto a roll of perforated paper that was taken into Victor and made into music. The code for Victor was binary and controlled settings for frequency, octave, envelope, volume and timbre in the two channels. A worksheet had been devised that transposed musical notation to code. In a sense, creating this kind of music was akin to working in encryption, or playing a glass bead game where on kind of knowledge or form of art, was connected to another via punches in a matrix grid.
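As a purely hypothetical illustration of that worksheet idea (the field names and bit widths below are invented, not RCA’s actual format), packing a note’s parameter codes into one binary row might look like this:

```python
# A purely hypothetical sketch of punched-row note encoding, in the
# spirit of Victor's binary control tape. Field names and bit widths
# are invented for illustration and do not reproduce RCA's format.

def encode_note(frequency_code, octave, volume, timbre, envelope):
    """Pack parameter codes into one binary control row; the 1s stand
    in for holes punched across the width of the paper tape."""
    fields = [(frequency_code, 4), (octave, 3), (volume, 3),
              (timbre, 3), (envelope, 3)]
    bits = ""
    for value, width in fields:
        bits += format(value, f"0{width}b")  # fixed-width binary field
    return bits

# One row: pitch code 9, octave 4, volume 4, timbre 2, envelope 1
print(encode_note(9, 4, 4, 2, 1))  # -> "1001100100010001"
```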
Wired for Wireless
Babbitt’s works were just a few of the many distilled from the CPEMC. Not all of the composers there were as obsessed with complete compositional control as Babbitt; many utilized the full suite of processes available at the studio, effects units included, to create their works, and their works were plentiful. The CPEMC released more recorded electronic music out into the world than anywhere else in North America.
During the first few years of its operation, from 1959 to 1961, the capabilities of the studio were explored by Egyptian-American composer and ethnomusicologist Halim El-Dabh, who had been the first to remix recorded sounds using the effects then available to him at Middle East Radio in Cairo. He had come to the United States with his family on a Fulbright fellowship in 1948 and proceeded to study music under such composers as Ernst Krenek and Aaron Copland, among a number of others. In time he settled in Demarest, New Jersey. El-Dabh quickly became a fixture in the new music scene in New York, running in the same circles as Henry Cowell, John Cage, and Edgard Varèse.
By 1955 El-Dabh had gotten acquainted with Luening and Ussachevsky. At this point his first composition for wire recorder was eleven years behind him, and he had kept up his experimentation in the meantime. Though he had been assimilated into the American new music milieu, he came from outside the scenes of both his adopted land and the European avant-garde. As he had with the Elements of Zaar, El-Dabh brought his love of folk music into the fold. His work at the CPEMC showcased his unique combinations that involved his extensive use of percussion and string sounds, singing and spoken word, alongside the electronics. He also availed himself of Victor and made extensive use of the synthesizer. In 1959 alone he produced eight works at CPEMC. These included his realization of Leiyla and the Poet, an electronic drama.
El-Dabh said of his process that it "comes from interacting with the material. When you are open to ideas and thoughts the music will come to you." His less abstract, non-mathematical creations remain an enjoyable counterpoint to the cerebral exertions of his colleagues. A few of the other pieces he composed while working at the studio include Meditation in White Sound, Alcibiadis' Monologue to Socrates, Electronics and the World and Venice.
El-Dabh influenced such musical luminaries as Frank Zappa and the West Coast Pop Art Experimental Band, his fellow CPEMC composer Alice Shields, and west-coast sound-text poet and KPFA broadcaster and music director Charles Amirkhanian.
In 1960 Ussachevsky received a commission from a group of amateur radio enthusiasts, the De Forest Pioneers, to create a piece in tribute to their namesake. In the studio Vladimir composed something evocative of the early days of radio and titled it "Wireless Fantasy". He recorded morse code signals tapped out by early radio guru Ed G. Raser on an old spark generator in the W2ZL Historical Wireless Museum in Trenton, New Jersey. Among the signals used were: QST; DF, the station ID of Manhattan Beach Radio, a well known early broadcaster with a range from Nova Scotia to the Caribbean; WA NY, for the Waldorf-Astoria station that started transmitting in 1910; and DOC DF, De Forest's own code nickname. The piece ends suitably with AR, for end of message, and GN for good night. Woven into the various wireless sounds used in this piece are strains of Wagner's Parsifal, treated with the studio equipment to sound as if it were a shortwave transmission. In his first musical broadcast Lee De Forest had played a recording of Parsifal, then heard for the first time outside of Germany.
From 1960 to 1961 Edgard Varèse utilized the studio to create a new realization of the tape parts for his masterpiece Déserts. He was assisted in this task by Max Mathews from the nearby Bell Laboratories, and by the Turkish-born Bülent Arel, who had come to the United States on a grant from the Rockefeller Foundation to work at CPEMC. Arel composed his Stereo Electronic Music No. 1 and 2 with the aid of the CPEMC facilities. Daria Semegen was a student of Arel’s who composed her work Electronic Composition No. 1 at the studio. There were numerous other composers, some visiting, others there as part of their formal education, who came and went through the halls and walls of the CPEMC. Luciano Berio worked there, as did Mario Davidovsky, Charles Dodge, and Wendy Carlos, just to name a few.
Modulation in the Key of Bode
Engineer and instrument inventor Harald Bode made contributions to CPEMC just as he had at WDR. He had come to the United States in 1954, setting up camp in Brattleboro, Vermont, where he worked on the lead development team at the Estey Organ Corporation, eventually climbing to the position of Vice President. In 1958 he set up his own company, the Bode Electronics Corporation, as a side project in addition to his work at Estey.
Meanwhile Peter Mauzey had become the first director of engineering at CPEMC. Mauzey was able to customize a lot of the equipment and set up the operations so it became a comfortable place for composers. When he wasn’t busy tweaking the systems in the studio, Mauzey taught as an adjunct professor at Columbia University, all while also working as an engineer at Bell Labs in New Jersey. Robert Moog happened to be one of Mauzey’s students at Columbia, and under him Moog continued to develop his considerable electrical chops, even though he never set foot in the studio his teacher had helped build.
When Estey hit rough waters and ran aground around 1960, Bode left to join the Wurlitzer Organ Co. in Buffalo, New York. It was while working for Wurlitzer that Bode realized the power the new transistors represented for making music. Bode got the idea that a modular instrument could be built, whose different components would then be connected together as needed. The instrument born from his idea was the Audio System Synthesiser. Using it, he could connect a number of different devices, or modules, in different ways to create or modify sounds. These included the basic electronic music components then in production: ring modulators, filters, reverb generators and other effects. All of this could then be recorded to tape for further processing.
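The modular idea translates naturally into code: think of each module as a function on a stream of samples, and patching as function composition. The following Python sketch is only an illustration of that concept; the module names and parameters are invented, not Bode’s designs.

```python
import math

# Sketch of the modular idea: each module is a function on a stream of
# samples, and "patching" is function composition. Module names and
# parameters are invented for illustration; they are not Bode's designs.

RATE = 44100.0

def oscillator(freq):
    # A sine oscillator module.
    return lambda n: math.sin(2 * math.pi * freq * n / RATE)

def mixer(*sources):
    # Sum the outputs of any number of input modules.
    return lambda n: sum(s(n) for s in sources)

def gain(source, amount):
    # Attenuate or boost a module's output.
    return lambda n: amount * source(n)

# One possible patch: two detuned oscillators into a mixer, attenuated.
patch = gain(mixer(oscillator(440.0), oscillator(443.0)), 0.5)
buffer = [patch(n) for n in range(4410)]  # render 0.1 s of beating tones
```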
Bode gave a demonstration of his instrument at the Audio Engineering Society in New York in 1960. Robert Moog was there to take in the knowledge and the scene. He became inspired by Bode’s ideas, and this led to his own work in creating the Moog.
In 1962 Bode started to collaborate with Vladimir Ussachevsky at the CPEMC. Working with Ussachevsky he developed the Bode Ring Modulator and the Bode Frequency Shifter. These became staples at the CPEMC and were produced under the Bode Sound Co. as well as licensed to Moog for inclusion in his modular systems. All of these effects became widely used in electronic music studios, and in the popular music of those experimenting with the Moog in the 1960s.
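Mathematically, ring modulation is simple: multiplying a program signal by a carrier yields their sum and difference frequencies while suppressing both originals. A small numpy sketch of the effect (illustrative frequencies, not Bode’s circuit):

```python
import numpy as np

# Ring modulation: multiply a program signal by a carrier. The product
# of two sinusoids contains only their sum and difference frequencies:
# sin(a) * sin(b) = 0.5*cos(a - b) - 0.5*cos(a + b)

rate = 44100
t = np.arange(rate) / rate                  # one second of samples
program = np.sin(2 * np.pi * 440 * t)       # an A440 input tone
carrier = np.sin(2 * np.pi * 100 * t)       # a 100 Hz carrier
ring = program * carrier                    # energy at 340 Hz and 540 Hz

# A frequency shifter goes one step further: by splitting the signal
# into sine and cosine parts (a Hilbert transform), it keeps only one
# sideband, shifting every component by the same number of hertz.
```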
In 1974 Bode retired, but kept on tinkering on his own. In 1977 he created the Bode Vocoder, which he also licensed to Moog, and in 1981 he invented his last instrument, the Bode Barberpole Phaser.
.:. .:. .:.
Read part I.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
Holmes, Thom. Electronic and Experimental Music. Sixth Edition.
Music of the 20th Century Avant-Garde: A Biocritical Sourcebook
Columbia-Princeton Electronic Music Center 10th Anniversary, New World Records, liner notes, NWCRL268, original release date 1971.
Babbitt, Milton. Words About Music. University of Wisconsin Press. 1987
Otto Luening and Vladimir Ussachevsky
In America the laboratories for electronic sound took a different path of development, first emerging out of the universities and the private research facility of Bell Labs. It was a group of composers at Columbia and Princeton who banded together to build the Columbia-Princeton Electronic Music Center (CPEMC), the oldest dedicated place for making electronic music in the United States. Otto Luening, Vladimir Ussachevsky, Milton Babbitt and Roger Sessions all had their fingers on the switches in creating the studio.
Otto Luening was born in 1900 in Milwaukee, Wisconsin, to parents who had emigrated from Germany. His father was a conductor and composer and his mother a singer, though not in a professional capacity. His family moved back to Europe when he was twelve, and he ended up studying music in Munich. At age seventeen he went to Switzerland, and it was at the Zurich Conservatory that he came into contact with futurist composer Ferruccio Busoni. Busoni was himself a devotee of Bernhard Ziehn and his “enharmonic law,” which stated that “every chord tone may become the fundamental.” Luening picked this up and added it to his toolkit.
Luening eventually went back to America, worked at a slew of different colleges, and began to advocate on behalf of the American avant-garde. This led him to assist Henry Cowell with the publication of the quarterly New Music. He also took over from Cowell the New Music Quarterly Recordings, which put out seminal recordings from those inside the new music scene. It was 1949 when he went to Columbia for a position on the staff in the philosophy department, and it was there he met Vladimir Ussachevsky.
Ussachevsky had been born in Manchuria in 1911 to Russian parents. In his early years he was exposed to the music of the Russian Orthodox Church and a variety of piano music, as well as the sounds of the land where he was born. He gravitated to the piano and gained experience as a player in restaurants and as an improviser providing the live soundtrack to silent films. In 1930 he emigrated to the United States, went to various schools, served in the army during WWII, and eventually came under the wing of Otto Luening as a postdoctoral student at Columbia University, where he in turn became a professor.
In 1951 Ussachevsky convinced the music department to buy a professional Ampex tape recorder. When it arrived it sat in its box for a time, and he was apprehensive about opening it up and putting it to use. “A tape-recorder was, after all, a device to reproduce music, and not to assist in creating it,” he later said in recollection of the experience. When he finally did start to play with the tape recorder, the experiments began as he figured out what it was capable of doing, first using it to transpose piano pitches.
Peter Mauzey was an electrical engineering student who worked at the university radio station WKCR, and he and Ussachevsky got to talking one day. Mauzey was able to give him some technical pointers for using the tape recorder. In particular he showed him how to create feedback by making a tape loop that ran over two playback heads, and helped him get it set up. The possibilities inherent in tape opened up a door for Ussachevsky, and he became enamored of the medium, well before he had ever heard of what Pierre Schaeffer and his crew were doing in France, or what Stockhausen and company were doing in Germany.
Some of the first pieces that Ussachevsky created were presented at a Composers Forum concert in the McMillin Theatre on May 9, 1952. The following summer Ussachevsky presented some of his tape music at another composers conference in Bennington, Vermont. He was joined by Luening in these efforts. Luening was a flute player, and they used tape to transpose his playing into pitches impossible for an unaided human, adding further effects such as echo and reverb.
After these demonstrations Luening got busy working with the tape machine himself and started composing a series of new works at Henry Cowell’s cottage in Woodstock, New York, where he had brought up the tape recorders, microphones, and a couple of Mauzey’s devices. These included his Fantasy in Space, Low Speed, and Invention in Twelve Tones. Luening also recorded parts for Ussachevsky to use in his tape composition, Sonic Contours.
In November of 1952 Leopold Stokowski premiered these pieces, along with ones by Ussachevsky, in a concert at the Museum of Modern Art, placing them squarely in the experimental tradition and helping the tape techniques be seen as a new medium for music composition. Thereafter, the rudimentary equipment that was the seed material from which the CPEMC would grow moved around from place to place. Sometimes it was in New York City, at other times Bennington, or at the MacDowell Colony in New Hampshire. There was no dedicated home for the equipment.
The Louisville Orchestra wanted to get in on the new music game and commissioned Luening to write a piece for them to play. He agreed and brought Ussachevsky along to collaborate with him on the work, which became the first composition for tape-recorder and orchestra. To fully realize it they needed additional equipment, two more tape-recorders and a filter, none of which came cheap in the 1950s, so they secured funding through the Rockefeller Foundation. After their work was done in Louisville, all of the gear they had so far acquired was assembled in Ussachevsky’s apartment, where it remained for three years. It was at this time, in 1955, that they sought a permanent home for the studio, enlisting the help of Grayson Kirk, president of Columbia, to secure a dedicated space at the university. He was able to help and put them in a small two-story house that had once been part of the Bloomingdale Asylum for the Insane and was slated for demolition.
Here they produced works for an Orson Welles production of King Lear, and the compositions Metamorphoses and Piece for Tape Recorder. These efforts paid off when they garnered the enthusiasm of historian and professor Jacques Barzun, who championed their cause and gained them further support. With additional aid from Kirk, Luening and Ussachevsky were eventually given a stable home for their studio inside the McMillin Theatre.
Having heard about what was going on in the studios of Paris and Germany, the pair wanted to check them out in person, to see what they could learn and possibly put to use in their own fledgling studio. They were able to do this on the Rockefeller Foundation’s dime. When they came back, they would soon be introduced to a machine that, in its second iteration, would go by the name of Victor.
The Microphonics of Harry F. Olson
One of Victor’s fathers was a man named Harry Olson (1901-1982), a native of Iowa who had the knack. He became interested in electronics and all things technical at an early age. He was encouraged by his parents, who provided the materials necessary to build a small shop and lab. For a young boy he made remarkable progress exploring where his inclinations led him. In grade school he built and flew model airplanes at a time when aviation itself was still getting off the ground. When he got into high school he built a steam engine and a wood-fired boiler whose power he used to drive a DC generator he had repurposed from automobile parts. His next adventure was to tackle ham radio. He constructed his own station, demonstrated his skill in Morse code and station operation, and obtained his amateur license. All of this curiosity, hands-on experience, and diligence served him well when he went on to earn a bachelor’s in electrical engineering, followed by a master’s with a thesis on acoustic wave filters, topped off with a Ph.D in physics, all from the University of Iowa in his home state.
While working on his degrees Olson had come under the tutelage of Dean Carl E. Seashore, a psychologist who specialized in the fields of speech and stuttering, audiology, music, and aesthetics. Seashore was interested in how different people perceived the various dimensions of music and how ability differed between students. In 1919 he developed the Seashore Test of Music Ability, which set out to measure how well a person could discriminate between timbre, rhythm, tempo, loudness and pitch. A related interest was in how people judged visual artwork, and this led him to work with Dr. Norman Charles Meier to develop another test on art judgment. All of this work led Seashore to eventually receive financial backing from Bell Laboratories.
Another one of Olson’s mentors was the head of the physics department, G. W. Stewart, under whom he did his work on acoustic wave filters. Between Seashore’s and Stewart’s influence, Olson developed a keen interest in the areas of acoustics, sound reproduction, and music. With his advanced degree and long history of experimentation in tow, Olson headed to the Radio Corporation of America (RCA), where he became a part of the research department in 1928. After putting in some years in various capacities, he was put in charge of the Acoustical Research Laboratory in 1934. Eight years later, in 1942, the lab was moved from Camden to Princeton, New Jersey. The facilities at the lab included an anechoic chamber that was, at the time, the largest in the world. A reverberation chamber and ideal listening room were also available to him. It was in these settings that Olson went on to develop a number of different types and styles of microphone. He developed microphones for use in radio broadcast and motion pictures, directional microphones, and noise-cancelling microphones. Alongside the mics, he created new designs for loudspeakers.
During WWII Olson was put to work on a number of military projects. He specialized in the area of underwater sound and antisubmarine warfare, but after the war he got back to his main focus of sound reproduction. Taking a cue from Seashore, he set out to determine what a listener’s preferred bandwidth of sound actually was when sound had been recorded and reproduced. To figure this out he designed an experiment where he put an orchestra behind a screen fitted with a low-pass acoustic filter that cut off the frequency range above 5000 Hz. This filter could be opened or closed, the bandwidth full or restricted. Audiences who listened, not knowing when the concealed filter was opened or closed, leaned strongly towards the open, full-bandwidth listening experience. They did not like the sound when the filter was activated. For the next phase of his experiment Olson switched out the orchestra, whom the audience couldn’t see anyway, for a sound-reproduction system with loudspeakers located in the position of the orchestra. Listeners still preferred the full-bandwidth sound, but only when it was free of distortion. When small amounts of non-linear distortion were introduced, they preferred the restricted bandwidth. These efforts showed the extreme care that needed to go into developing high-fidelity audio systems.
In the 1950s Olson stayed extremely busy working on many projects for RCA. One was the development of magnetic tape capable of recording and playing back color television for broadcast. This led to a collaboration between RCA and the 3M company, which reached success in 1956.
The RCA Mark I Synthesizer
Claude Shannon’s 1948 paper “A Mathematical Theory of Communication” was putting the idea of information theory into the heads of everyone involved in the business of telephone and radio. RCA had put large sums of money into their recorded and broadcast music, and the company was quick to grasp the importance and implications of Shannon’s work. In his own work at the company, Olson was a frequent collaborator with fellow senior engineer Herbert E. Belar (1901-1997). They worked together on theoretical papers and on practical projects. On May 11, 1950 they issued their first internal research report on information theory, "Preliminary Investigation of Modern Communication Theories Applied to Records and Music." Their idea was to consider music as math. This in itself was not new, and can be traced back to the Pythagorean tradition of music. To this ancient pedigree they added a contemporary twist in treating music mathematically as information. They realized that, with the right tools, they would be able to generate music from math itself, instead of from traditional instruments. On February 26, 1952 they demonstrated their first experiment towards this goal to David Sarnoff, head of RCA, and others in the upper echelons of the company. They made the machine they had built perform the songs “Home Sweet Home” and “Blue Skies”.
The officials gave them the green light, and this led to further work and the development of the RCA Mark I Synthesizer. The RCA Mark I was in part a computer, as it had simple programmable controls, yet the part of it that generated sound was completely analog. The Mark I had an array of twelve oscillator circuits, one for each of the twelve tones of the musical scale. These could be modified by the synth’s other circuits to create an astonishing variety of timbre and sound.
The RCA Mark I was not a machine that could make automatic music. It had to be completely programmed by a composer. The flexibility of the machine and the range of possibilities gave composers a new kind of freedom, a new kind of autocracy: total compositional control. This had long been the dream of those who were bent towards serialism. The programming aspect of the RCA Mark I hearkened back to the player pianos that had first appeared in the 19th century, using a roll of punched tape to instruct the machine what to do. Olson and Belar had been meticulous about the aspects of sound that could be programmed with their creation. These included pitch, timbre, amplitude, envelope, vibrato, and portamento. It even included controls for frequency filtering and reverb. All of this could be output to two channels and played on loudspeakers, or sent to a disc lathe where the resulting music could be cut straight to wax.
It was introduced to the public by Sarnoff on January 31, 1955. The timing was great as far as Ussachevsky and Luening were concerned, as they first heard about it just after returning from their trip to Europe, where they had visited the GRM, WDR, and some other emerging electronic music studios. The trip had left them eager to establish their own studio and work on electronic music in their own way. When they met Schaeffer, he had been eager to impose his own aesthetic values on the pair, and when they met Stockhausen, he remained secretive about his working methods and aloof about their presence. Despite this, they were excited about getting to work on their own, even if exhausted from the rigors of travel. They made an appointment with the folks at RCA for a demonstration of the Mark I Synthesizer.
The RCA Mark I far surpassed what Luening and Ussachevsky had witnessed in France, Germany and the other countries they visited. With its twelve separate audio frequency sources the synth was a complete and complex unit, and while programming it could be laborious, it was a different kind of labor than the kind of heavy tape manipulation they had been doing in their studio, and the accustomed ways of working at the other studios they got to see in operation.
The pair soon found another ally in Milton Babbitt, who was then at Princeton University. He too had a keen interest in the synth, and the three of them began to collaborate and share time on the machine, which they had to request from RCA. For three years the trio made frequent trips to Sarnoff Laboratories in Princeton, where they worked on new music.
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
Holmes, Thom. Electronic and Experimental Music. Sixth Edition.
Music of the 20th Century Avant-Garde: A Biocritical Sourcebook
Columbia-Princeton Electronic Music Center 10th Anniversary, New World Records, liner notes, NWCRL268, original release date 1971.
Babbitt, Milton. Words About Music. University of Wisconsin Press. 1987
Linear Predictive Coding
The elements of Linear Predictive Coding (LPC) were built on the basis of some of Norbert Wiener’s work from the 1940s, when he developed a mathematical theory for calculating the optimal filters for finding signals in noise. Claude Shannon quickly followed Wiener with his breakthrough work A Mathematical Theory of Communication, which included a general theory of coding. [For more on Wiener and Shannon see Chapter 3.] With new mathematical tools in hand, researchers started exploring predictive coding. Linear prediction is a form of signal estimation, and it was soon applied to speech analysis.
In signal processing, communications and related fields, the term “coding” generally means putting a signal into a format that makes it easier to handle for a given task. In a coding scheme, like Morse code for instance, an encoder takes the signal and puts it into a new format. The decoder takes it out of its new format and puts it back into the old one.
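A toy illustration of such an encoder/decoder pair, using a three-letter fragment of International Morse code as the scheme:

```python
# Toy encoder/decoder pair: the "new format" is Morse-style symbols.

CODE = {"S": "...", "O": "---", "E": "."}      # a tiny code book
DECODE = {v: k for k, v in CODE.items()}       # the reverse mapping

def encode(text):
    return " ".join(CODE[ch] for ch in text)

def decode(signal):
    return "".join(DECODE[sym] for sym in signal.split(" "))

message = encode("SOS")        # "... --- ..."
assert decode(message) == "SOS"
```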
The “predictive” aspect of coding has been used in numerous scientific theories and engineering techniques. What they have in common is that they predict future observations based on past observations. Joined together, the term “predictive coding” was coined by information theorist Peter Elias in 1955 in his two papers on the subject.
In LPC, samples of a signal are predicted using a linear function of previous samples: each sample is estimated as a weighted sum of the samples that came just before it, the weights being the predictor coefficients. (In math, a linear function is one whose variables carry no exponents, one that graphs to a straight line.) The error between a predicted sample and the actual sample, called the residual, is transmitted along with the coefficients. This works with speech because nearby samples correspond to each other to a high degree, and if the prediction is good the error will be small and take up less bandwidth. In this sense, LPC becomes a type of compression based on source coding.
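A minimal numpy sketch of the idea, using the textbook autocorrelation method (not any one lab’s implementation): estimate the predictor coefficients for a frame by solving the autocorrelation normal equations, then form the prediction error.

```python
import numpy as np

def lpc_coefficients(x, order):
    """Textbook autocorrelation method: find weights a[1..p] so that
    x[n] is approximated by a[1]*x[n-1] + ... + a[p]*x[n-p]."""
    # Autocorrelation of the frame at lags 0..order
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    # Solve the Toeplitz normal equations R a = r (Yule-Walker)
    R = np.array([[r[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def prediction_error(x, a):
    """Residual: actual sample minus predicted sample."""
    p = len(a)
    pred = np.zeros_like(x)
    for k in range(1, p + 1):
        pred[p:] += a[k - 1] * x[p - k:len(x) - k]
    return x - pred

# Example: a decaying sinusoid plus a little noise is well predicted
rng = np.random.default_rng(0)
n = np.arange(400)
x = np.exp(-n / 200.0) * np.sin(2 * np.pi * 0.05 * n)
x = x + 0.01 * rng.standard_normal(len(n))
a = lpc_coefficients(x, order=8)
e = prediction_error(x, a)
print(np.std(e[len(a):]) / np.std(x))  # small: the signal is mostly predictable
```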
Towards the end of the 1960s, Fumitada Itakura on one side, and Bishnu S. Atal and Manfred Schroeder on the other, independently discovered the elements of LPC, much as had happened with the telegraph and telephone. Later, Paul Lansky applied it to making delightful music exploring the spectrum between music and speech.
Fumitada Itakura was interested in math and radio from an early age, and he had been an amateur radio operator in his youth. His elementary school happened to be just a mile from the radio laboratory at Nagoya University where his father knew some of the professors, so he had occasion to visit it and ask questions.
As an undergraduate he became interested in the theoretical side of math and started to learn about stochastic processes. As he extended his ability ever further, he eventually became involved in the mathematical aspects of signal processing. His research paper for his bachelor’s in electrical communication was on the statistical analysis of whistlers, very low frequency electromagnetic radio waves produced by lightning, capable of being heard as audio on radio receivers. To study them he built a bank of analog filters to do the signal processing, and made digital circuits to try to find patterns in the time-frequency behavior of the whistlers. It wasn’t easy work, but he persevered. In analyzing the whistler signal he had to filter out a lot of the other noisy material that comes in from the magneto-ionosphere. The work required him to use band-pass filters and the sound spectrograph that had originally been designed for speech analysis.
This eventually led to further work with statistics and audio. When he went to graduate school he studied applied mathematics under Professor Kanehisa Udagawa. At Udagawa’s lab he became part of a group studying pattern recognition, and in 1963 he started a project to recognize handwritten characters. When Professor Udagawa died of a heart attack he had to find someone else to study under to continue his course. This led him to work at NTT.
Dr. Shuzo Saito, a graduate of Nagoya University, was looking for someone to work with in speech research, and Saito’s friend Professor Teruo Fukumura suggested Itakura. Saito had an interest in speech recognition and encouraged Itakura to get involved. Fukumura began teaching him the basic principles of speech using Gunnar Fant's Acoustic Theory of Speech Production. Itakura started making sound spectrograms of his voice speaking vowels. His voice was high and husky, so it didn’t make as clean a spectrogram as a more regular voice would have. In this there was a hidden gift. He realized that if they could do good analysis on a signal with more random characteristics, they could do even better when analyzing regular speech. From this point, he applied statistics to speech classification, based on a paper he had read by J. Hajek. Reading math papers had been a hobby of his, and it led to his work on Linear Predictive Coding.
Dr. Saito suggested that Itakura look for practical results based on his theory, so he started working with a vocoder, got some initial results on his idea, and wanted to go further. Dr. Saito suggested he look at pitch detection, as vocoders often had trouble recognizing voices because of their poor ability in this area. He conceived of a new method of pitch detection that used an inverse filter and oscillation. From this he proposed integrating the linear predictive analysis with his new pitch detection method to create a new vocoder system. In late 1967 he succeeded in synthesizing speech from the vocoder and brought the results to Dr. Saito. From then on Itakura worked on vocoding.
Of the many modes in which speech is produced, the way vowels sound is very important, as it relies on the periodic opening and closing of the vocal cords. Air from the lungs gets converted by them into a wideband signal filled with harmonics. This signal resonates in the vocal cavities before leaving the mouth, where the final sounds are shaped.
In LPC analysis, the speech signal is examined and the contribution of the formants is estimated and removed in a process called inverse filtering. The remaining sound, called the buzz, is then estimated. The signal that remains after the buzz is subtracted is called the residue. Numbers which represent the formants, the buzz and the residue can be stored or transmitted elsewhere. The speech is then synthesized through a reversal of the original stripping process: the parameters of the buzz and residue are used to create an excitation signal, and the information stripped from the formants is recreated as a new filter. The process is done in short chunks of time, frame by frame.
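In signal terms, the strip-and-restore loop can be sketched as a pair of filters, a simplified all-pole model assuming zero initial state, with framing and quantization omitted:

```python
import numpy as np

def inverse_filter(x, a):
    # Analysis: subtract the predicted part of each sample, stripping
    # the resonances the coefficients describe and leaving the residual.
    p = len(a)
    e = x.copy()
    for n in range(len(x)):
        for k in range(1, min(n, p) + 1):
            e[n] -= a[k - 1] * x[n - k]
    return e

def synthesis_filter(e, a):
    # Synthesis: the reversal. Driving the all-pole filter with an
    # excitation (the true residue, or an artificial buzz) restores
    # the resonances that were stripped out.
    p = len(a)
    y = np.zeros(len(e))
    for n in range(len(e)):
        acc = e[n]
        for k in range(1, min(n, p) + 1):
            acc += a[k - 1] * y[n - k]
        y[n] = acc
    return y

# Round trip: resynthesizing from the true residual is exact.
x = np.sin(0.3 * np.arange(64)) * np.hanning(64)
a = np.array([1.2, -0.5])  # an illustrative, stable predictor
assert np.allclose(synthesis_filter(inverse_filter(x, a), a), x)
```

Transmitting the coefficients and a compact description of the excitation, rather than the waveform itself, is where the bandwidth savings comes from.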
Taking speech apart and putting it back together on the other end was a huge technical feat that saved enormous bandwidth. This kind of speech coding could fit five calls into the same channel that a single regular voice call took up.
Manfred Schroeder and Bishnu S. Atal
At Bell Labs Itakura met up with Manfred Schroeder, who had come from Germany. Schroeder was born in 1926 and came of age during WWII. During the war Schroeder had built a secret radio transmitter that spooked his parents. Transmitting radio was risky business, because it was the province of spies and people who wanted to communicate outside the country. When Schroeder saw members of the army or SS outside his house with radio direction finding equipment, he shut off the transmitter for a month. He also listened to the BBC for news, and to the American Forces Network transmitting from England, then illegal to listen to. Many people had been sent to concentration camps just for listening to foreign stations and spreading the news to others. The Nazi powers attempted to keep tight control on all information going in and out of the country. A special radio was even manufactured by the state, the People's Radio or Volksempfänger, built in such a way that it could only receive approved German stations, whose programs were under the directorship of Joseph Goebbels.
Schroeder excelled at school and was often ahead of even the teachers. During the war he was drafted into a radar team to track incoming aircraft, work through which he gained extensive experience with the technology.
Schroeder was also a math fanatic, like Itakura, and when he went to university he always took extra math classes alongside his physics work. He had been fascinated by the mathematics of cryptography, and he loaded up on function theory and probability classes. Eventually Schroeder got a job offer from Bell Labs in 1954, based on previous work he had done experimenting with microwaves, and he emigrated to the United States.
Bell Labs wanted him to continue his research with microwaves, but he thought he’d switch gears and get into the study of speech instead. For two years he worked on speech synthesizers, and didn’t have much luck in getting them to sound good, so he then turned his attention to speakers and room acoustics. At Bell Labs, researchers following the dictates of their own curiosity and inclination were often left alone to pursue their studies, to see what came out of them and where they led.
John Pierce at Bell Labs wanted Schroeder to use Dudley’s vocoding principles to send high fidelity voice calls over the phone system. This caused Schroeder to hit up against the same issue Itakura had: the problem of pitch. Part of the issue was extracting the fundamental frequencies from telephone lines not known for superb sound quality. As Schroeder investigated, he realized he could take the baseband signal, those frequencies that have not been modulated, and distort it non-linearly to generate frequencies to which the vocoder would then give the right amplitudes. This ended up being a success, and became known as voice-excited vocoding; the speech that came out of the other end was the most human sounding of any speech synthesis up to that point.
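The trick can be sketched in a few lines: a memoryless nonlinearity applied to the baseband regenerates energy at multiples of the voice’s fundamental, which the vocoder’s channel filters can then weight to the right amplitudes. An illustrative numpy fragment, with half-wave rectification standing in for whatever distortion the actual hardware used:

```python
import numpy as np

rate = 8000
t = np.arange(rate) / rate                # one second of samples
baseband = np.sin(2 * np.pi * 120 * t)    # voiced baseband at 120 Hz

# A memoryless nonlinearity regenerates harmonics above the band that
# was transmitted; here, half-wave rectification.
excitation = np.maximum(baseband, 0.0)

spectrum = np.abs(np.fft.rfft(excitation))
peaks = np.argsort(spectrum)[-4:]         # the four strongest bins
print(np.sort(peaks).tolist())            # [0, 120, 240, 480]: DC, the
                                          # fundamental, and new harmonics
```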
In 1961 Schroeder hired Dr. Bishnu S. Atal to work with him at Bell Labs. Atal was born in 1933 in Kanpur, Uttar Pradesh, India. He studied physics at the University of Lucknow and received his degree in electrical communications engineering from the Indian Institute of Science in Bangalore in 1955, before coming to America to study for his Ph.D at the Brooklyn Polytechnic Institute. He returned to his home country to lecture on acoustics from 1957 to 1960, before he was lured back to the U.S. by Schroeder to join him in his investigations in speech and acoustics.
In 1967 Schroeder was pacing around the Lab with Atal, conversing about the need to do more with vocoder speech quality. Schroeder’s work on pitch had improved the quality of vocoding, but it wasn’t yet what it could be. What they needed to do, they realized as they talked, was to code speech so that no errors were present, and the idea of predictive coding came up.
They realized that as speech was encoded they could predict the next samples based on what had just come before. The prediction would be compared with the actual speech, and alongside it the errors, or residuals, would be transmitted. In decoding, the same algorithm was used to reconstruct the speech on the other end of the transmission. Schroeder and Atal called this adaptive predictive coding, with the name later changed to linear predictive coding. The quality of speech was as good as that which came out of Schroeder’s voice-excited vocoder. They wrote a paper on the subject for the Bell System Technical Journal and presented it at a conference in 1967, the same year Itakura succeeded with his technique.
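Stripped to a skeleton, the predict-compare-transmit loop they described works like this. The sketch below is a toy: it uses a fixed first-order predictor and a crude uniform quantizer, where real adaptive predictive coding re-estimates a higher-order predictor as it goes.

```python
# Toy skeleton of predictive coding: the encoder sends only quantized
# prediction errors; the decoder runs the same predictor and adds each
# error back in.

STEP = 0.05  # quantizer step: coarser means fewer bits but more noise
A = 0.9      # fixed first-order predictor weight (real APC adapts this)

def encode(samples):
    prev, codes = 0.0, []
    for x in samples:
        e = x - A * prev            # prediction error
        q = round(e / STEP)         # quantized error: this is what's sent
        codes.append(q)
        prev = A * prev + q * STEP  # track the decoder's state, not x
    return codes

def decode(codes):
    prev, out = 0.0, []
    for q in codes:
        x = A * prev + q * STEP     # same predictor, error added back
        out.append(x)
        prev = x
    return out

signal = [0.0, 0.3, 0.55, 0.7, 0.72, 0.6, 0.4, 0.15]
restored = decode(encode(signal))
# Each restored sample is within half a quantizer step of the original.
```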
Since the 1970s most of the technology around speech synthesis and coding has been focused on LPC, and it is now the most widely used form. When it first came out, the NSA was among the first to get its paws on it, because LPC can be used for secure wireless, with a digitized and encrypted voice sent over a narrow channel. An early example of this is the Navajo I, a telephone built into a briefcase to be used by government agents. About 110 of these were produced in the early 1980s. Several other vocoder systems were used by the NSA for the purpose of encryption.
LPC has become essential for cellphones, and is part of the Global System for Mobile Communications (GSM) standard protocol for cellular networks. GSM uses a variety of voice codecs that implement the technology to squeeze 3.1 kHz of audio into 6.5 or 13 kbit/s of transmission. LPC is also used in Voice over IP, or VoIP, such as is used for Skype and Zoom calls and meetings.
A 10th-order form of LPC was used in the popular 1980s Speak & Spell educational toy. These became popular for experimental musicians to hack in a process known as circuit bending, where the toy is taken apart and the connections re-soldered to make sounds not originally intended by the manufacturers. [For more on Ghazala and circuit bending, see chapter 7.]
Vocoding technology is also utilized in the Digital Mobile Radio (DMR) units currently gaining popularity among hams around the world. DMR is an open digital mobile radio standard. DMR radios use a proprietary AMBE+2 vocoder that works with multi-band excitation for its speech coding and compression to achieve a 6.2 kHz bandwidth. Again, the compression and the digital codecs often result in sound artifacts and glitching while talking. Besides its use in DMR, the AMBE+2 is also used in D-Star, Iridium satellite telephone systems, and OpenSky trunked radio systems.
Paul Lansky: notjustmoreidlechatter
Since LPC allows for the separation of pitch and speed, the pitch contours of speech can be altered independently of its pace, and the technique can also be used by the creative thinker for musical composition. Paul Lansky was one such thinker, and he used LPC to great effect in a series of compositions exploring synthesis and the qualities of speech.
Paul Lansky was born in 1944 in New York and counted George Perle and Milton Babbitt among his teachers. Lansky got his Ph.D in music from Princeton in 1973. Like many others of his generation, Lansky started off schooled in serialism. His teacher Perle had developed an iconoclastic twelve tone modal system, and Lansky used this to write a piece. For his dissertation he continued to explore Perle’s methodology, using linear algebra to create a model of his teacher’s system. His interest then extended to take in electronics and computers as a way of exploring the mathematical possibilities inherent within serialism.
His first foray into electronic composition was Mild und Leise from 1973. Proper old school, it was composed using a series of punch cards. Learning the mechanics of the system to achieve his desired outcome was as much a part of the procedure as the composition. For it he used the Music360 computer language written by Barry Vercoe on an IBM 360/91. The output from the computer went to a 1600 BPI digital tape, which had to be carried over to a basement lab in the engineering quadrangle at Princeton to be heard. It used FM synthesis, which had just been worked out at Stanford [for FM synthesis see Chapter 4], while the harmonic language came from Perle’s system. The result is very emotionally resonant pure electronic music. Lansky has ever been keen to foreground the music over the technology used to make it, and that is true here. The piece was later sampled by Radiohead in the song Idioteque on their Kid A album.
1979 saw Lansky beginning to work with LPC as a part of his computer music programming practice, and he put it to use in a series of compositions starting with Six Fantasies on a Poem by Thomas Campion. The LPC-based techniques he drew on had been pioneered by James Moorer at Stanford University in the 1970s. Lansky’s wife Hannah McKay reads the poem, and LPC techniques and a variety of processing and filtering methods are used to alter and transform the reading in fabulous ways.
In his notes to the recording of Six Fantasies, he writes about how it has become common to view speech and song as distinct categories. Lansky thought that “they are more usefully thought of as occupying opposite ends of a spectrum, encompassing a wealth of musical potential. This fact has certainly not been lost on musicians: sprechstimme, melodrama, recitative, rap, blues, etc., are all evidence that it is a lively domain.”
Thomas Campion, as composer and poet, became an archetype emblematic of the “musical spectrum spanned by speech and song.” The poem Lansky used was Campion’s Rose cheekt Lawra, which was embedded within his 1602 treatise Observations in the Art of English Poesie. There Campion offered his attempt at a quantitative model for English poetry, where meter is determined by the quantity of vowels rather than by rhythm, as was done in ancient Latin and Greek poetry. Lansky describes the poem as a “wonderful, free-wheeling spin about the vowel box. It is almost as if he is playing vowels the way one would play a musical instrument, jumping here and there, dancing around with dazzling invention and brilliance, carefully balancing repetition and variation. The poem itself is about Petrarch's beloved Laura, whose beauty expresses an implicit and heavenly music, in contrast to the imperfect, all too explicit earthly music we must resign ourselves to make. This seemed to be an appropriate metaphor for the piece.”
Lansky continued to explore the continuum between speech and song with his pieces Idle Chatter, just_more_idle_chatter, and Notjustmoreidlechatter. Though clearly connected by theme, they are not a suite, but independent works. Idle Chatter, from 1985, again uses his wife as vocalist and the IBM 3081 as the means of transforming her voice, with a mix of LPC, stochastic mixing, and granular synthesis, and a bit of help from the computer music language Cmix. If you like glossolalia, or ever wanted to hear what it sounded like at the Tower of Babel, these recordings are an opportunity.
Of Idle Chatter, Lansky wrote, “The incoherent babble of Idle Chatter is really a pretext to create a complicated piece in which you think you can `parse the data’, but are constantly surprised and confused. The texture is designed to make it seem as if the words, rhythms and harmonies are understandable, but what results, I think, is a musical surface with a lot of places around which your ear can dance while you vainly try to figure out what is going on. In the end I hope a good time is had by all (and that your ears learn to enjoy dancing).”
People had a strong reaction to the piece, and in response Lansky wrote just_more_idle_chatter in 1987. He gave the digital background singers more of a role in the piece, but the words still only approach intelligibility, never reaching a stage where the listener can comprehend what is being said, only that something is being spoken. Next came his “stubborn refusal to let a good idea alone” with the realization of Notjustmoreidlechatter. Here again the chatter almost coalesces into something that can be discerned as a word before slipping back down into the primordial soup of linguistic babble. The last two of these pieces were made using the DEC MicroVAX II computer.
Over time, though Lansky wrote many more computer music pieces, and settings for traditional instrumentation, he couldn’t let the words just be. For the pieces on his Alphabet Book album he conducted further investigations in a magisterial reflection on the building blocks of thought: the alphanumerics, the letters and numbers, that allow for communication, the building up of knowledge, and contemplation.
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
Fumitada Itakura, an oral history conducted in 1997 by Frederik Nebeker, IEEE History Center, Piscataway, NJ, USA.
Manfred Schroeder, an oral history conducted in 1994 by Frederik Nebeker, IEEE History Center, Piscataway, NJ, USA.
Charles Dodge: Speech Songs
Charles Dodge was another early computer musician who got in on the speech synthesis game. Born in Iowa in 1942, he was in his early twenties when he first became interested in the possibilities of computer music. As a graduate student at Columbia University he studied composition under Richard Hervig, Chou Wen-chung, and the electronic musician Otto Luening. When he met Godfrey Winham of Princeton University, he began to think seriously about composing his own works with computers. Winham was an influential music theorist whose wife, Bethany Beardslee, was the singing voice for much new music, including Milton Babbitt’s Philomel.
In the sixties Bell Labs was one of the very few places computer music was being made, and it was one of the few places to go to hear how it sounded. Max Mathews encouraged musicians who were making music on university computers to come to Bell Labs to convert it into sound, in the evenings after the primary work at the Labs was finished. Charles Dodge was one of these composers, and when he came to listen to his work he became mesmerized by the fascinating sounds of the speech research going on down the hall, often thinking it more interesting than the sounds he’d created using the computer.
In the early 70s he had the opportunity to create some new works at Bell Labs with access to programs written by Dr. Joseph Olive for speech synthesis. Olive was a leading researcher in the area of text-to-speech, one of those people with an intense mathematical mind. He had received a physics Ph.D from the University of Chicago, but he was also interested in music.
With help from Olive and some poems written and given to him by his friend Mark Strand, Dodge went about creating Speech Songs. He writes, “I'd never been able to write very effective vocal music and here was an opportunity to make music with words. I was really attracted to that. It wasn't singing in the usual sense. It was making music out of the nature of speech itself. With the early speech-synthesis computers, you could do two things: you could make the voice go faster or slower than the speed in which it was recorded at the same pitch or you could shift the pitch independent of the speech rhythm. That was a kind of transformation that you couldn't make in the usual way of making tape music. It was fascinating to put my hands on two ways of modifying sound that were completely, newly available.”
To synthesize the electronic voices for the poems he used a technique called speech-by-analysis. Only words that had been put into the computer beforehand, through an analog-to-digital converter, could be synthesized. The recorded speech is analyzed by the computer to pull out the various parameters from the spoken word in short segments. Speech can then be recreated by the artificial voice using the same parameters as had been analyzed. For musical purposes, though, those parameters can be altered to change aspects of the sound, such as shifting the pitch contour of a phrase or word into a melodic line. Changing the speed without altering the pitch is another possibility. Formants and resonance are other aspects that can be changed by the programmer-composer.
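Those two transformations fall straight out of frame-based analysis: each frame stores the filter (formant) parameters separately from the excitation’s pitch period, so resynthesis can re-run the frames at a different rate, or with a different pitch, independently. A hedged sketch of the bookkeeping, with illustrative stand-in fields rather than real LPC parameters:

```python
# Sketch of why analysis/synthesis decouples speed from pitch. Each
# frame stores formant-filter parameters separately from a pitch
# period; the fields here are illustrative stand-ins, not real LPC data.

def stretch(frames, factor):
    """Change speed without changing pitch: re-use each frame's
    parameters for more (or fewer) synthesis frames."""
    return [frames[int(i / factor)]
            for i in range(int(len(frames) * factor))]

def shift_pitch(frames, ratio):
    """Change pitch without changing speed: scale only the excitation's
    pitch period, leaving the formant filter alone."""
    return [{**f, "pitch_period": f["pitch_period"] / ratio}
            for f in frames]

frames = [{"formant_coeffs": [0.8, -0.3], "pitch_period": 80},
          {"formant_coeffs": [0.7, -0.2], "pitch_period": 78}]
slower = stretch(frames, 2.0)       # twice as long, same melody
higher = shift_pitch(frames, 1.5)   # up a fifth, same duration
```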
The poems themselves are humorous and surrealistic, and the way the artificial voice reads them adds to the effect. Dodge was specifically interested in humor because, as he wrote in the liner notes, “Laughter at new music concerts, especially in New York, is rare these days.” He was delighted when audience members laughed at his creation. For a type of music that is so often cerebral and conceptual, it’s good when some belly laughs can be had.
Another piece on the album, The Story of Our Lives, also used techniques of speech synthesis. In this case, instead of replacing the recorded human with an artificial voice, they changed the program so that it drew from a bank of 64 sine tones that glissandoed at different rates. To create the effect of more than one voice being heard at a time, the different voices were mixed together on the digital computer.
Speech Songs came out in 1972, and in 1978 he made a recording of the radio play Cascando by Samuel Beckett, where the musical aspect was two computer-synthesized audio channels. This was also when he founded the center for computer music at CUNY’s Brooklyn College and began teaching in their graduate program. His 1970 composition Earth’s Magnetic Field will be explored in chapter 8 of this book.
.:. .:. .:.
Read the rest of the Radiophonic Laboratory: Telecommunications, Electronic Music, and the Voice of the Ether.
1. In industrial culture, children want to know about stuff their parents often don't want to talk to them about, namely sex and death, two of the most natural things in the world. While Halloween has long had an association with death, the association with sex has come about in its later decades, as the holiday has grown in popularity as a party night for adults. Risqué costumes became just as common as the ghastly, and the two elements combined in a lurid display of those powers that are still repressed in our so-called "enlightened and open" society. Halloween allows death to enter the cultural conversation, where it would otherwise be shuttered up in a hospital or old folks' home.
2. Even in darkness there is something to see. Our society has been cut off from the dark. Electric lightbulbs, one of the first forms of electronic media, have cast their glow onto corners and streets that once contained mysteries after the sun went down. In the darkness there is music. In the darkness there is magic. In the darkness our imagination begins to see. Halloween marks a deepening point in the progression of the dark half of the year. That darkness needs expression and finds it in the popular custom.
3. Tales of ghosts have an ancient pedigree in the traditions of human storytelling. In the twentieth century, film was one of the main mediums of storytelling in industrial nations, and horror films were among the first moving pictures ever made. In 1896 Georges Méliès made “Le Manoir du Diable,” sometimes called “The Haunted Castle” or “The House of the Devil” in English. The tradition of the horror film has been kept up ever since, and horror remains among the most popular of all film genres. As industrial culture dies its own death, horror will still have an outlet in other forms of popular storytelling, the short story and the novel, where the genre had long since made its home.
4. Witchcraft is real. However much rationally minded progressive people have wanted to cast magic out, it has remained. Even in a world full of (cue sarcasm) wondrous iPhones, magic, both benefic and malefic, is practiced, explored, studied, spelled. Halloween is a time when the black cat that is the reality of magic can be let out of the bag. Because many people fear magic, the malefic aspect of the art and science is what gets projected by the collective into the public celebration of Halloween.
5. Magic involves and cultivates the imagination. The imagination involves and cultivates a sense of wonder. For children especially, the sense of wonder and imagination has not yet been squashed. In the liminal time of Halloween those children who are allowed to play and wonder in the dark, to dress in a costume, and see others in costume, become filled with the sense of wonder that is already easy for them.
6. The sense of wonder has diminished as corporate media imagery has been inculcated in children. Once they dressed up as folkloric spooks, devils, and witches, in costumes they made at home. Now, as often as not, they dress as characters from cartoons, comic books, or other media being sold to them, in costumes bought at stores.
7. There are no treats without tricks. There is something in the quality of the American soil, something deep in the consciousness and the bedrock of the land, that lends itself to tricks and trickery. Some might call it the trickster spirit. Now, the trickster spirit isn't all fun and games, though to the trickster it might all be fun and games. But without the trickster, there is no change. As Halloween evolved on this continent, the trickster used it as a lively vehicle for the transmission of trickery and tricksterism. Children playing tricks on children. Adults playing tricks on children. Children playing tricks on adults. All the kinds of fun if mischievous shenanigans that can ensue have a way of taking a lot of pressure off the industrialized human. Old Man Coyote strikes back at those who have been at war with the wild. Sometimes Coyote plays dress-up to disguise who he really is.
8. A little sugar maketh the heart merry. In times when sugar was scarce, it was a real treat, and the Halloween stash was meted out little by little over the coming weeks. In times when it has become hard to avoid, the sugary Halloween stash becomes another opportunity to binge, just like the adults do at their Halloween parties. Bingeing itself can be seen as a way to blow off steam. Cutting loose in a society where the girders of mind control, in the form of the spectacle, have been arrayed against everyday people is one way to shake the chains and rattle the cage. The unfortunate side effect, however, is sickness in the morning.
9. These days, adults seem to love Halloween almost more than kids do. The Halloween party has become a staple of the calendar year. Though drinking a few pumpkin ales, or a few too many, is part of it, the adults who still love Halloween are searching for that sense of wonder, that sense of magic and phantasy, they've missed out on since childhood. Dressing up, believing in ghosts, ghouls, and goblins, even if only for a night, is a way to recapture that sense, even if the needs behind the activity remain unconscious.
10. Haunted houses exist. Belief or disbelief is not required. The experience of the haunted house is commensurate with the experience of urban decay. And everyone has heard terrible stories of dysfunctional families, of wife beaters and child abusers. Those who live in this unfortunate reality abide in an everyday haunted house, and there are many of them all across America. Sometimes they leave behind ghosts.
11. We are surrounded by the Walking Dead. This may sound harsh, but it's true. A softer term would be sleepwalkers: those who are only barely awake to their potential, subsisting on base appetites, wanting to eat everyone else's brains. At least on Halloween, if you aren't one of the zombies, you can pretend to be a mad scientist searching for the antidote that will cure this abysmal condition.
12. Things aren’t always what they seem. What is on the outer does not always show the truth of what is on the inner. The old scary witch may hide decades of wisdom behind her wrinkled pockmarked face. The monster pieced together from disparate body parts may be kinder and gentler than the soul who aimed to give him life.
13. In its current American incarnation, Halloween gives people the chance to “choose their own adventure,” to role-play, and to see who they yet might be. This life that we don is temporary, worn like a mask over that which is eternal. While here in this costume of flesh and bone, we each have a unique part to play. We may belong to families, communities, tribes, and societies, but if life were a costume contest, surely one of the top prizes would be the one for “most original.”
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.