Institute for Research and Coordination in Acoustics/Music
Back in 1966 Boulez had proposed a total reorganization of French musical life to André Malraux, the Minister of Culture. Malraux rebuffed Boulez when he appointed Marcel Landowski, who was much more conservative in his tastes and programs, as head of music at the Ministry of Culture. Boulez, who had been known for his tendency to express himself as an epic jerk, was outraged. In an article he wrote for the Nouvel Observateur he announced that he was "going on strike with regard to any aspect of official music in France."
As author John Michael Greer has noted, in French intellectual life the pose of the philosopher, artist, or thinker who dismisses the work of everyone else with a sneer is a familiar one, and Boulez was accustomed to playing this role in his voluminous writings, talks, artistic rivalries with his contemporaries, and the barbed criticisms designed to prick at the flesh of the musicians he worked with. The French knew not to take this game too seriously, whereas Americans tended to be put off by him and have their feelings hurt.
When confronted about this aspect of his reputation later in life Boulez said, "Certainly I was a bully. I'm not ashamed of it at all. The hostility of the establishment to what you were able to do in the Forties and Fifties was very strong. Sometimes you have to fight against your society."
So when Boulez was asked by French president Georges Pompidou to set up an institute dedicated to researching acoustics, music, and computer technology, he was quick to call off his strike against official music in France, accept the offer, and get busy with work. This was the beginning of the Institut de recherche et coordination acoustique/musique, or IRCAM. The space was built next to, and linked institutionally with, the Centre Georges Pompidou; official work started in 1973.
Boulez took inspiration from the Bauhaus and used it as a model for the institute. The Bauhaus had been an interdisciplinary art school that provided a meeting ground for artists and scientists, and this was the aspect he sought to emulate. His vision for the institute was to bring together musicians, composers, scientists, and developers of technology.
In a publicity piece for IRCAM he wrote, “The creator’s intuition alone is powerless to provide a comprehensive translation of musical invention. It is thus necessary for him to collaborate with the scientific research worker… The musician must assimilate a certain scientific knowledge, making it an integral part of his creative imagination... at educational meetings scientists and musicians will become familiar with one another’s point of view and approach. In this way we hope to forge a kind of common language that hardly exists at present.”
To bring his vision into reality he needed the help of those at the forefront of computer music. To that end he brought Max Mathews on board as a scientific advisor to the IRCAM project, a capacity in which he served for six years, from 1974 to 1980. Mathews’ old friend Jean-Claude Risset was hired to direct IRCAM’s computer department, which he did between 1975 and 1979. The work that their colleague John Chowning was doing back in California was crucial to the success of the institute, and he was tapped as a further resource.
The Center for Computer Research in Music and Acoustics
Putting together IRCAM was a project that went on for almost a decade before it was fully up and running, and from 1970 to 1977 most of the work done was the preliminary planning, organization, and building of the vessel that would house the musical laboratory. It did not have the advantage of being part of an existing institution, such as the BBC or the West German Radio. Everything, including the space, had to be built from scratch. There were several existing templates for electronic music and research that IRCAM could have followed, and it chose the American template, modeled on the work done at Bell Labs, when Max Mathews was asked to be the scientific director of IRCAM in 1975. He in turn took the advanced computer music work being done at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford as his model and resource for state-of-the-art computer music, based in no small part on his own MUSIC programs.
John Chowning had officially founded CCRMA at Stanford in 1974, though its basis had already taken shape inside SAIL, the Stanford Artificial Intelligence Laboratory. The other founding members were Leland Smith, John Grey, Andy Moorer, and Loren Rush. The first course in computer composition had already been given at Stanford in 1969, taught by Chowning, Max Mathews, Leland Smith, and George Gucker. Having shared space and valuable computer time with other researchers at SAIL, those interested in the specifics of composing with computers soon needed their own department at Stanford.
In 1975 Boulez spent two weeks at CCRMA studying what they were getting up to. The connection continued, and there was a lot of contact between the staff at the two institutions. One of the results was that the computer systems used at each ended up being compatible with each other. A number of American computer workers went to France to help set up IRCAM’s initial system until the French had enough people trained in the technology themselves. There was also extensive back-and-forth visiting between CCRMA and IRCAM staff. James Moorer did a residency, and Chowning went on to become a guest artist there in 1978, 1981, and 1985.
Chowning composed his piece Phoné at CCRMA, but the piece later had its premiere at IRCAM. In Phoné Chowning expanded upon his previous compositions in FM synthesis to give the work the feeling and texture of the human voice. It came together from work he started doing with his student Michael McNabb in 1978 on using FM synthesis to produce vocal sounds. Chowning went to work at IRCAM in late 1979 and stayed into early 1980. While at IRCAM Chowning was shown the work of Johan Sundberg and his research into vocal formants. This in turn led to the creation of algorithms used for vocal synthesis. The work Sundberg was doing went on to be the seed from which the CHANT program grew.
All of this work led to Chowning seizing on the goal of synthesizing vocal sounds from computers that mimicked the human voice as closely as possible. A number of characteristics particular to speech needed to be implemented to deliver the goods, and these marked difficult technical hurdles. Some of the people who worked at CCRMA and IRCAM were perceptual scientists, and Chowning also noticed that there was an indeterminate perceptual aspect to the timbres of voice and instrument. One of the sounds he was experimenting with was that of a bell, and he became fascinated with transforming that bell sound into other sounds.
His piece Phoné was written with all of this in mind. The title comes from the ancient Greek word for “voice,” the same word used to denote one of the main tools in telecommunications. Using FM synthesis Chowning was able to transform the voice of the bell into a number of different timbres, including that of a human voice with simulated formants.
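The basic mechanics behind this can be illustrated with a few lines of code. The sketch below is a minimal, hypothetical Python rendering of simple FM synthesis, not Chowning’s actual Phoné patches: all frequencies, ratios, and envelope values here are invented for demonstration. The key idea it shows is that an inharmonic carrier-to-modulator ratio yields a clangorous, bell-like spectrum, while a harmonic 1:1 ratio yields something closer to a sung tone.

```python
import math

SR = 44100  # sample rate in Hz

def fm_tone(fc, fm, index_start, index_end, dur):
    """Simple Chowning-style FM: a carrier at fc modulated by fm,
    with the modulation index moving linearly from index_start to
    index_end over the note, and a bell-like exponential amplitude
    decay. All envelope shapes are illustrative simplifications."""
    n = int(SR * dur)
    out = []
    for i in range(n):
        t = i / SR
        # Richer spectrum at the attack, purer tone as the note decays.
        index = index_start + (index_end - index_start) * (i / n)
        amp = math.exp(-3.0 * t / dur)
        out.append(amp * math.sin(2 * math.pi * fc * t
                                  + index * math.sin(2 * math.pi * fm * t)))
    return out

# Bell-like: an inharmonic carrier/modulator ratio (here 1:1.4)
# scatters the sidebands into a clangorous spectrum.
bell = fm_tone(fc=200.0, fm=280.0, index_start=10.0, index_end=2.0, dur=2.0)

# Voice-like: a harmonic 1:1 ratio stacks the sidebands on exact
# harmonics of the carrier, closer to a sung vowel.
voice = fm_tone(fc=200.0, fm=200.0, index_start=3.0, index_end=1.0, dur=2.0)
```

In Chowning’s published work the time-varying modulation index is what gives FM tones their lifelike evolution; the linear index envelope here is a crude stand-in for those far more refined controls.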
Intercontemporary Underground Music
Much of the space for IRCAM was built below ground, beneath the Place Igor-Stravinsky, where the boisterous noise of the city streets above does not penetrate. The underground laboratories were first inaugurated in 1978 and contain eight recording studios, eight laboratories, and an anechoic chamber, plus various offices and department spaces. Though it has since been reorganized over the years, it was first arranged into five departments, each under its own composer-director, with Boulez as the tutelary head. These departments were Electro-Acoustics, Pedagogy, Computers, and Instruments and Voice, along with a department called Diagonal that coordinated between the other departments, which for the most part followed their own research and creative interests. Luciano Berio headed the Electro-Acoustic department at the beginning.
The pièce de résistance at IRCAM is the large Espace de Projection, also known as Espro, a modular concert hall whose acoustics can be changed according to the temperament and design of the composers and musicians working there. The Espro space was created under the direction of Boulez and features a system of “boxes in boxes” to create the variable acoustics. When the space first opened Boulez said it was “really not a concert hall, but it can project sound, light, audiovisual events, all possible events that are not necessarily related to traditional instruments.” The position of the ceilings can be moved to change the volume of the room. The walls and ceilings have panels made of rotatable prismatic modules, each with three faces: one for absorbing, another for reflecting, and one for diffusing sound. These are called periacts and can be changed on the spot.
Boulez was busy as all get out in the seventies. As if developing IRCAM and conducting the BBC Symphony Orchestra from 1971 to 1975, and the New York Philharmonic from 1971 to 1977, were not enough, he also founded the Ensemble intercontemporain (EIC) in 1976. The EIC was built up with support from Minister of Culture Michel Guy and the British arts administrator Nicholas Snowman. EIC filled a gap in contemporary music by providing an ensemble available to play chamber music. Boulez also wanted to cultivate a group of musicians dedicated to performing contemporary music. EIC would have a strong working relationship with IRCAM, so that musicians were available to play compositions made in conjunction with the institute inside the Espro, as well as tour and make recordings. This of course included Boulez’s own compositions, as he found the energy to return to writing music when his conducting activities slowed down.
Though Boulez had made a piece of musique concrète at GRM, and had experimented with tape music with Poésie pour pouvoir, these were not his main interests in avant-garde music. What concerned Boulez was the live transformation of acoustic sound electronically. He felt that recordings, played in a concert hall, were like going to listen to a dead piece of music. The live transformation of live sound was what held promise. While the possibility for the live transformation of acoustic sounds had been explored by Stockhausen and Cage, these efforts did not have the same precision that was now available with the computers and programs created at CCRMA and IRCAM.
Répons was composed in various versions between 1980 and 1984, once IRCAM was up and running and Boulez’s conducting activity had slowed enough to give him time to compose. The instrumental ensemble is placed in the middle of the hall. Six soloists are placed around the audience at various points: two pianos, harp, cimbalom, vibraphone, and glockenspiel or xylophone, and it is these instruments that give Répons much of its color.
The instrumental music is transformed by computer electronics and projected through the space. The harp, vibraphone, and pianos create glittering sparkles that illuminate the space, fulfilling Boulez’s dream of the live electronic transformation of acoustic sound.
Once IRCAM got into a groove it started pushing out a steady stream of compositions, papers, and software from its many scientific and artistic residents and collaborators. Boulez’s vision of a “general school or laboratory” where scientists and sound artists mixed and mingled had come to fruition. One of its most famous outputs is the software suite Max/MSP.
Today the MUSIC software that Max Mathews wrote through many versions lives on in the software suite Max/MSP. Named in honor of Max Mathews, Max is a powerful visual programming language that has grown out of its musical core into a tool for multimedia performance. The program has been alive, well, and growing for more than thirty years, and has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The software is designed and maintained by the company Cycling ’74.
Building off the gains in musical software developed by Mathews, Miller Smith Puckette (MSP) started to work on a program originally called The Patcher at IRCAM in 1985. This first version for Macintosh had a graphical interface that allowed users to create interactive scores. It wasn’t yet powerful enough to do real time synthesis. Instead it used MIDI and similar protocols to send commands to external sound hardware.
Four years later, Max/FTS (Faster Than Sound) was developed at IRCAM. This version was ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT computer. This time around it could do real-time synthesis using dedicated digital signal processing (DSP) hardware, making it a forerunner to the MSP extensions that would later be added to Max. 1989 was also the year the software was licensed to Opcode, who promptly launched a commercial version at the beginning of the next decade.
Opcode held onto the program until 1997. During those years a talented console jockey named David Zicarelli further extended and developed the promise of Max. Yet Opcode wanted to end its run with the software. Zicarelli knew it had even further potential, so he acquired the rights and started his own company, Cycling ’74. Zicarelli’s timing proved fortuitous, as Gibson Guitar ended up buying Opcode and then, after a year of ownership, shutting it down. Such is the fabulous world of silicon corporate buyouts.
Miller Smith Puckette had in the meantime released the independent and open-source composition tool Pure Data (Pd). It was a fully redesigned tool that still fell within the same tradition as his earlier program for IRCAM. Zicarelli, sensing that a fruitful fusion could be made manifest, released Max/MSP in 1997, the MSP portion being derived from Puckette’s work on Pure Data. The two have been inseparable ever since.
The achievement meant that Max was now capable of real time manipulation of digital audio signals sans dedicated DSP hardware. The reworked version of the program was also something that could work on a home computer or laptop. Now composers could use this powerful tool to work in their home studios. The musical composition software that had begun on extensive and expensive mainframes was now available to those who were willing to pay the entry fee. You didn’t need the cultural connections it took to work at places like Bell Labs or IRCAM. And if you had a computer but couldn’t afford the commercial Max/MSP you could still download Pd for free. The same is true today.
Extension packs were now being written by other companies, contributing to the ecology around Max. In 1999 the Netochka Nezvanova collective released a suite of externals that added extensive real-time video control to Max. This made the program a great resource for multimedia artists. Various other groups and companies continued to tinker and add things on.
It got to the point where Max Mathews himself, well into his golden years, was learning how to use the program named after him. Mathews received many accolades and appointments for his work. He was a member of the IEEE, the Audio Engineering Society, the Acoustical Society of America, the National Academy of Sciences, and the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences. He held the Silver Medal in Musical Acoustics from the Acoustical Society of America, and was named a Chevalier de l’Ordre des Arts et des Lettres of the République Française.
Max Mathews died of complications from pneumonia on April 21, 2011, in San Francisco. He was 84. He was survived by his wife, Marjorie, his three sons, and six grandchildren.
The first digital signal processing (DSP) workstation computer, the 4A, was built at IRCAM. The computer was used by Xavier Rodet, Yves Potard, and Jean-Baptiste Barrière to create the CHANT program, originally made for the analysis and synthesis of the singing voice. They developed an algorithm known as Fonction d’Onde Formantique (FOF) to emulate the human voice. By using four or five FOF generators in parallel the program is able to model the formants created by the human vocal tract. The flexibility of their program also allows for the synthesis of instrumental sounds and noises, such as those of bells and cymbals among many others.
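The parallel-FOF idea can be sketched in code. What follows is a heavily simplified, illustrative Python version of formant-wave-function synthesis, not the CHANT source: each generator emits a short decaying sinusoid at its formant frequency once per fundamental period, and the grains from several parallel generators are overlap-added. The grain envelope, formant frequencies, bandwidths, and amplitudes are all assumptions chosen for demonstration.

```python
import math

SR = 44100  # sample rate in Hz

def fof_grain(formant_hz, bandwidth_hz, amp, dur):
    """One FOF grain: a sinusoid at the formant frequency with an
    exponential decay set by the bandwidth and a short raised-cosine
    attack (a simplified take on the formant-wave-function shape)."""
    n = int(SR * dur)
    attack = int(SR * 0.003)  # 3 ms attack, an illustrative value
    grain = []
    for i in range(n):
        t = i / SR
        env = math.exp(-math.pi * bandwidth_hz * t)
        if i < attack:
            env *= 0.5 * (1 - math.cos(math.pi * i / attack))
        grain.append(amp * env * math.sin(2 * math.pi * formant_hz * t))
    return grain

def fof_voice(f0, formants, dur):
    """Run one FOF generator per formant in parallel, retriggering
    each grain every fundamental period (1/f0) and overlap-adding
    the results into a single output signal."""
    n = int(SR * dur)
    out = [0.0] * n
    period = int(SR / f0)
    for formant_hz, bandwidth_hz, amp in formants:
        grain = fof_grain(formant_hz, bandwidth_hz, amp, 0.02)
        for start in range(0, n, period):
            for j, s in enumerate(grain):
                if start + j < n:
                    out[start + j] += s
    return out

# Rough formant set for an "ah"-like vowel: (frequency Hz,
# bandwidth Hz, linear amplitude). Illustrative values only.
AH = [(700, 80, 1.0), (1220, 90, 0.5), (2600, 120, 0.25),
      (3300, 130, 0.12), (4000, 140, 0.06)]

tone = fof_voice(f0=110.0, formants=AH, dur=1.0)
```

Because each grain restarts in phase with the fundamental, the summed output has a pitched, vowel-like spectrum whose peaks sit at the formant frequencies, which is exactly what the four or five parallel generators in CHANT are there to produce.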
CHANT’s creators made subprograms to use with CHANT for specific types of singing or utterance, such as a bel canto voice for western-style soprano singing, and Tibetan chant. In the bel canto subprogram they used a phase vocoder to analyze the same pitch as interpreted by a number of different singers. With this data they were able to obtain the precise frequencies of the first eight formants used by the singer. In writing the code for the algorithm they kept the frequencies of the last six formants as measured. For the first two, they found a relationship between the frequencies of the formants and the pitch of the note being sung. They then created a rule placing the first and second formants on the first and second harmonics, except in cases where the frequency obtained fell below a fixed threshold. This allowed them to create a uniform vocal color over a range of two octaves. Next they programmed other rules for various parameters of singing.
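That harmonic-placement rule can be expressed as a small function. This is a hypothetical reconstruction from the description above, not IRCAM’s code; the threshold value and the fallback behavior (keeping the measured formant frequency when the harmonic falls below the threshold) are assumptions made for illustration.

```python
def place_low_formants(pitch_hz, measured_f1, measured_f2, threshold_hz=400.0):
    """Snap the first two formants onto the first and second harmonics
    of the sung pitch, unless the harmonic falls below a fixed
    threshold, in which case keep the formant frequency measured by
    analysis. threshold_hz is an invented value for illustration."""
    h1, h2 = pitch_hz, 2 * pitch_hz  # first two harmonics of the note
    f1 = h1 if h1 >= threshold_hz else measured_f1
    f2 = h2 if h2 >= threshold_hz else measured_f2
    return f1, f2

# High note (A440): both harmonics clear the threshold, so the low
# formants track the sung pitch, keeping the vocal color uniform.
print(place_low_formants(440.0, 700.0, 1200.0))   # (440.0, 880.0)

# Low note: the harmonics fall below the threshold, so the measured
# formant frequencies are kept instead.
print(place_low_formants(180.0, 700.0, 1200.0))   # (700.0, 1200.0)
```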
For the Tibetan chant subprogram their main concern was to develop a system for voice emulation that accounted for noise and strange harmonics, in contrast to the typical voice of the trained western singer, who tries to eliminate randomness and regional accents. When using CHANT’s basic presets, noise is controlled by rules that are dependent on the formant. For the Tibetan subprogram, noise was approached through random microfluctuations within the fundamental and the formant frequencies. For timbre they added separate amplitude controls for the even and odd harmonics, with additional envelopes for random variation. They also tooled the articulation, modeling the consonants and constructing them in “the form of transitions from one vowel to another, affecting the amplitude, the fundamental, and the formant trajectories, that is, the frequency of each formant as a function of time.” They used the length of phonemes, fundamental frequency, vibrato, and vocal effort (or the way a person speaks to another based on their proximity to each other) to create rules around rhythm and stress. All of this was done in the effort to synthesize a non-western style of vocal art.
CHANT was equipped with a number of basic parameters for relative ease of use, but those who sought total compositional control could use an extended version of the program that allowed for different models to be implemented, including the non-vocal models. CHANT began with analyzing and mimicking vocal behavior, but was capable of going beyond vocal behavior into other areas of sound, including that of granular textures that opened up a variety of possibilities for spectral exploration.
The CHANT team created a number of different models to encompass all the traditional instruments and some non-traditional. With the models in place, composers can work with these definitions to create “imaginary hybrid instruments” that give them and listeners a chance to explore new timbral spaces. Some of these possibilities offered by CHANT have been explored by a number of different composers including Jonathan Harvey, Jukka Tiensuu and Tod Machover among a number of others.
Jonathan Harvey’s Ritual Melodies
Jonathan Harvey was a British composer born in 1939 who liked to jump across the boundaries of genre within contemporary classical music. He had begun his studies with Benjamin Britten who advised him to learn also from Erwin Stein and Hans Keller. Like many other composers in his general age group, he fell under the spell of Karlheinz Stockhausen and attended his composition courses at Darmstadt in 1966 and 1967. In 1969 he got a Harkness Fellowship at Princeton University where he was able to study under Milton Babbitt.
The Balearic islands have a history of being good for music, and Harvey wrote his 1973 piece Inner Light I while staying in Menorca. It is an electroacoustic work for seven instruments and tape, dedicated to Benjamin Britten on the occasion of his 60th birthday. He realized the tape portion in the studios of Swedish Radio in Stockholm and at University College, Cardiff, Wales. This electronic portion features ring modulation and varispeed tape.
Unlike many of his fellow composers on the experimental end of the spectrum, some of Harvey’s works are played with frequency, rather than just being concerned with frequency. This is in part due to his early religious affiliation with the Church of England, and his own time as a chorister at St. Michael’s, Tenbury. Harvey loved choral music and wrote pieces for the British cathedral choirs. His I Love the Lord (1976) and The Angels (1992) are thus among the most recorded and performed of his music.
Harvey followed the path of many other 20th century composers and went on to teach composition, working at Southampton and Sussex Universities, while doing stints as a guest lecturer in the United States. He was happy to encourage his students, and help them develop in their own ways, rather than demanding anyone adhere to a particular school of musical thought. He hadn’t, so why should they?
Throughout his career he would flit between electroacoustic works, purely electronic pieces, and orchestral pieces that utilized live electronics. A number of works he wrote concerned the nature of speech, whether sung, spoken, or synthesized and its relationship to song.
Mortuos Plango, Vivos Voco is a short work for eight-channel tape. It uses concrète sounds of his son, then a chorister in the Winchester Cathedral choir, singing, and the recorded sounds of the largest bell of Winchester Cathedral, transformed in various ways by the use of MUSIC V and CHANT. Other synthesized sounds were also used. The piece also uses phonetics, linguistic analysis, proportions from the golden ratio, and the judicious use of spatialization and a sonorous reverb that gels it all together.
The voice of the bell is strong in this work. The title was taken from the Latin words inscribed on the bell, which translate as “I lament the dead, I call the living.” The work is one of ethereal genius and recalls the similar use of concrète voices and electronic techniques in Gesang der Jünglinge.
Like Stockhausen, Harvey was completely open about his mysticism, and his belief in spiritual realities shines through in his music. In spiritual matters he was also as eclectic as he was in his compositions. He had a pronounced interest in Eastern religions which he seemed to be as comfortable writing music about as he was within the Christian milieu.
Bhakti was written in 1982 as a commission from IRCAM and is a piece for 15 instruments and tape. The structure of the nearly hour-long composition is based around texts from the Hindu Rig Veda, which give it a meditative and contemplative aspect. Twelve short movements, each varied three times, give it thirty-six subsections, each defined by a certain grouping of instruments playing a particular pitch cell. Showing his serialist leanings, Bhakti explores the partials of a single pitch, a quarter-tone above G, below A440. The series are made from proportional intervals above and below that frequency, with space for what Harvey calls “glossing,” or allowing for improvisation in devising the pitch cells. The tape part of Bhakti was made using sounds from the instrumental ensemble, mixed and transformed by computer. At the end of each movement a quotation from the Rig Veda is heard. Harvey considered these 4,000-year-old hymns “keys to consciousness.”
Harvey used synthesized voices and instruments again in his 1990 electronic piece Ritual Melodies, realized at IRCAM with the help of Jan Vandenheede and the program Formes, which had originally been designed as a computer-assisted composition environment for the synthesis program CHANT. Vandenheede created a number of sounds using the program. These included voices again, both western plainchant and Tibetan-style chant. The other instrument sounds were all decidedly eastern: a Vietnamese koto, an Indian oboe, a Japanese shakuhachi, and a Tibetan bell. Listening, they do not sound at all artificial. Voice synthesis had come a long way since the days of Daisy Bell. All of the instruments used serve ritual or religious purposes in different cultures, but Harvey wanted to bring them together in a way that wouldn’t normally happen in real-world rituals. Here Harvey composed 16 melodies that seamlessly move between the different synthesized instruments, and form an intertwined circular chain with each other as new melodies are introduced and morph into one another. He writes of the piece that, “Each melody uses the same array of pitches, which is a harmonic series omitting the lowest 5 pitches. Each interval, therefore is different from every other interval. So the piece as a whole reflects the natural acoustic structure of the instruments and voices.” The bell sounds are used to mark different sections of the piece.
Harvey was as happy to work with traditional instruments and timbres as he was making purely electronic works or purely choral works. He was also happy to mix and match. He liked variety and drew his influences from a diverse grouping of musicians and teachers. All of these influences are present in his own diversity of work. His ability to work back and forth between modes gave him a lot of freedom, even if it made it hard for critics to pigeonhole his music.
Between 2005 and 2008 Jonathan Harvey was a composer in residence with the BBC Scottish Symphony Orchestra. Three major works known together as the “Glasgow Trilogy” came out of this period. The trilogy begins with Towards a Pure Land… (2005), continues with Body Mandala (2006), and finishes with the masterpiece Speakings (2008). All three pieces combine orchestral instruments with electronics, and all three are inspired by the Buddhist side of his spiritual inclinations, but it is Speakings where Harvey once again looks into the correlations between speech and song. Within that same time span Harvey wrote Sprechgesang (2007) for oboe, cor anglais, and ensemble, and it is these two pieces that we will look at here.
Harvey ties his purely instrumental piece Sprechgesang to the earlier efforts of Schoenberg and Berg by using this word as its title. Harvey’s idea for Sprechgesang came from musing on the psychological roots of speech and song, and how these are so often connected to the cooing, talking, and singing of a mother to her child, who experiences them first in the womb and then as a newborn in the very early process of learning to speak and sing. Halfway through the piece Harvey inserts a Wagner reference. He says this is “a moment when Parsifal 'hears' the long-forgotten voice of his dead mother call the name, his own name, that he had forgotten - an action of the shamanistic Kundry. From this awakening, this healing, comes the birth of song from the meaningless chatter of endless human discourse. 'Speech' with deep meaning...”
Speakings was commissioned in part by IRCAM and Radio France, who helped with the electronic side of things, again using programs to synthesize speech. Harvey makes use of the orchestral palette to create further voicings that mimic the utterance of phonemes, building on the techniques he had used in Sprechgesang. From the slow beginning, the organic and the digital merge into a gradually towering babble of enunciation by the piece’s second movement. The tracery of vocoded signals is laced into the chaos of linguistic polyphony.
Harvey writes, “The orchestral discourse, itself inflected by speech structures, is electro-acoustically shaped by the envelopes of speech taken from largely random recordings. The vowel and consonant spectra-shapes flicker in the rapid rhythms and colours of speech across the orchestral textures. A process of 'shape vocoding', taking advantage of speech's fascinating complexities, is the main idea of this work.” Different instruments had the “shape vocoding” applied to them through the judicious use of microphones.
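The general idea of shaping one signal with the envelopes of another can be shown with a toy envelope follower. The Python sketch below is a crude stand-in, not the process used in Speakings: Harvey’s “shape vocoding” tracked the spectral shapes of vowels and consonants, not just overall loudness, and the two signals here are synthetic placeholders for the orchestra and the speech recordings.

```python
import math

SR = 44100  # sample rate in Hz

def amplitude_envelope(signal, window=512):
    """Crude envelope follower: RMS amplitude over a sliding window."""
    env = []
    for i in range(len(signal)):
        lo = max(0, i - window // 2)
        hi = min(len(signal), i + window // 2)
        chunk = signal[lo:hi]
        env.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return env

def shape_with_speech(carrier, speech):
    """Impose the speech signal's amplitude envelope on the carrier,
    so the carrier swells and falls with the rhythms of the speech."""
    env = amplitude_envelope(speech)
    return [c * e for c, e in zip(carrier, env)]

n = int(SR * 0.1)  # a tenth of a second keeps the loop fast

# Placeholder "orchestra": a steady 440 Hz tone.
carrier = [math.sin(2 * math.pi * 440 * i / SR) for i in range(n)]

# Placeholder "speech": a tone pulsed at a syllable-like rate.
speech = [math.sin(2 * math.pi * 200 * i / SR)
          * abs(math.sin(2 * math.pi * 8 * i / SR)) for i in range(n)]

shaped = shape_with_speech(carrier, speech)
```

A full vocoder would split both signals into frequency bands and transfer each band’s envelope separately, which is what lets the consonant and vowel “spectra-shapes” Harvey describes flicker across the orchestral texture rather than merely pulsing its loudness.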
The third and final movement begins with bell rings and horn blasts weaving between each other in a way that is reminiscent of how mantras are intoned with full vibration. The listener is now in a sacred place, a cathedral or temple, and the voices here chant an incantatory song along single monodic lines reverberating through space. Here we return to what Harvey says is the “womb of all speech”, the Buddhist mantra OM-AH-HUM, which in the mythology of India is said to be half-song, half-speech. This is pure speech. The original tongue. In Judaeo-Christian terms it might be likened to the original language spoken by Adam and Eve, before the time of Babel when humanity’s tongues were shattered and split into multiplicity.
Read the rest of The Radio Phonics Laboratory: Telecommunications, Speech Synthesis and the Birth of Electronic Music.
IRCAM, CCRMA, Intercontemporary Underground Music
Born, Georgina. Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde. Berkeley, CA: University of California Press, 1995.
IRCAM. “How Well Do You Know Espro?” <https://manifeste.ircam.fr/en/article/detail/connaissez-vous-lespace-de-projection/>
Krämer, Reiner. “X: An Analytical Approach to John Chowning’s Phoné.” <https://ccrma.stanford.edu/sites/default/files/user/jc/phone_kraemer_analysis_0.pdf>
National Public Radio. “IRCAM: The Quiet House Of Sound” <https://www.npr.org/templates/story/story.php?storyId=97002999>
Smith, Richard Langham, and Caroline Potter, eds. French Music Since Berlioz. Burlington, VT: Ashgate Publishing, 2006.
Tingen, Paul. “IRCAM: Institute For Research & Co-ordination in Acoustics & Music.” <https://www.soundonsound.com/people/ircam-institute-research-co-ordination-acoustics-music>
CHANT, Jonathan Harvey’s Ritual Melodies, Speakings
Anderson, Julie. “Jonathan Harvey Dies Aged 73.” <https://www.takte-online.de/en/portrait/article/artikel/ircam-und-kathedralchor-zum-tode-jonathan-harveys/index.htm>
Bresson, Jean, and Carlos Agon. “Temporal Control over Sound Synthesis Processes.” Sound and Music Computing (SMC’06), 2006, Marseille, France.
Bolaños Chamorro, Gabriel José. “An Analysis of Jonathan Harvey’s Speakings for Orchestra and Electronics.” Ricercare No. 13, 2020.
Faber Music. “Jonathan Harvey's masterpiece trilogy at Edinburgh International Festival.” <https://www.fabermusic.com/news/jonathan-harveys-masterpiece-trilogy-at-edinburgh-international-festival-252>
Harvey, Jonathan. “Inner Light 1 (1973)” <https://www.wisemusicclassical.com/work/7644/Inner-Light-1--Jonathan-Harvey/>
Harvey, Jonathan. “Ritual Melodies.” <https://www.fabermusic.com/music/ritual-melodies-1504>
Harvey, Jonathan. “Sprechgesang.” <https://www.fabermusic.com/music/sprechgesang-3850>
Harvey, Jonathan. “Speakings.” <https://www.fabermusic.com/music/speakings-5282>
Harvey, Jonathan, Denis Lorrain, Jean-Baptiste Barrière, and Stanley Haynes. “Notes on the Realization of ‘Bhakti’.” Computer Music Journal, Vol. 8, No. 3 (Autumn 1984), pp. 74-78.
Holmes, Thom. Electronic and Experimental Music: Technology, Music, and Culture. New York, NY.: Routledge, 2020.
Manning, Peter. Electronic and Computer Music. Oxford, UK.: Clarendon Press, 1993.
Rodet, Xavier, Yves Potard, and Jean-Baptiste Barrière. “The CHANT Project: From the Synthesis of the Singing Voice to Synthesis in General.” Computer Music Journal, Vol. 8, No. 3 (Autumn 1984), pp. 15-31.
Service, Tom. “A Guide to Jonathan Harvey’s Music.” <https://www.theguardian.com/music/tomserviceblog/2012/sep/17/jonathan-harvey-contemporary-music-guide>
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.