Sothis Medias

Voice of the Aether

3/10/2021

The ancient philosophers and mystics of this world proposed the theory of the five elements, and this theory is still seen at play, though transformed, in the science of the present day. From air, fire, water and earth we have gases, energy and heat, liquids and matter. The fifth element is the aether, the quintessence crowning the four other elements. And though science seems to have discarded the aether, it is yet everywhere around us.

The early Ionian cosmologists thought there was an infinite and unbegotten divine substance, neither created nor ever to be destroyed, permeating the entire universe. Empedocles used the term elements and roots interchangeably, and the four classical elements had their roots in the divine everlasting substance. Combined in various ratios these four elements make up the physical universe.

Later Plato, writing in the Timaeus, said of the air element that "there is the most translucent kind which is called by the name of aether." His student Aristotle continued to explore the four elements, and introduced the fifth element in his book On the Heavens. Aristotle posited that there was another element located in the heavenly and celestial realm of the stars and planets. Aristotle considered this new element to be the first element, in that the other four elements had their origin and root in it. In his book he did not give it a name, but later writers commenting on his work started referring to this element as the aether, or fifth element.

The heavenly element of the aether was not the same as the four terrestrial elements. Aristotle held that it could not move outside of the natural circles made by the stars in their spheres. He related this idea of aethereal spheres to his observation of the planets and stars in their perfect orbits.  The scholastic philosophers of the medieval era thought that the aether might change and fluctuate in density, as they reasoned the planets and stars were denser than the universal substance permeating the universe.
The theory of the five elements continued to spread throughout medieval times, transmitted and passed in particular among the alchemists who embraced the idea as part of their secret lore. The Latin name for the fifth element was the quintessence and this word can be found throughout the many alchemical treatises penned over the centuries. The idea of the quintessence became especially popular among the medical alchemists for whom aetheric forces became part of healing substances and elixirs.

Robert Fludd, the great 17th century hermetic philosopher, Rosicrucian, natural magician and follower of Paracelsus, claimed that the nature of the aether was “subtler than light”. In this he started to point to later ideas of the aether as a kind of catch all for a variety of electromagnetic phenomena. Fludd cited the view of Plotinus from the 3rd century who thought the aether was non-material and interpenetrated the entire universe of manifest reality and its various forms.

Isaac Newton, himself a devoted alchemist, used the idea of the aether as a way to explain his observations of the strict mechanical rules he was writing about in his works on physics. In turn the physicists of the 18th century developed a number of models for various physical phenomena that came to be known as aether theories, used to explain how gravitational forces worked and how electromagnetic forces propagated.

19th century scientist and successful business magnate Baron Dr. Carl von Reichenbach took up the study of the field of psychology in 1839 after making important discoveries in the fields of geology, chemistry, and metallurgy. If it hadn’t been for Reichenbach’s research in the physical sciences and his study of the properties of coal we wouldn’t have creosote, paraffin, or phenol, for which he developed the processes of extraction. When he set out to tackle the field of psychology after striking it rich from his many patents and factories, he discovered that people he termed “sensitives” were able to pick up on things the rest of us couldn’t. This often led the sensitive person to develop emotional and mental problems. But he also noticed these sensitives could sometimes see a force field around such things as a magnet.

This led Reichenbach to the works of Franz Anton Mesmer who had already been deemed a heretic by people like Benjamin Franklin and other members of the scientific establishment of the time. What Mesmer called Animal Magnetism, Reichenbach called Odic Force. Reichenbach was in turn denounced for his studies of this force which he observed as behaving in ways similar to yet distinct from magnetism, electricity, and heat. He wouldn’t be the last to be called a crank and a catamount for his investigation of the life force.

The two terms of Animal Magnetism and Odic Force would both have been recognized by metaphysicians, occultists and philosophers as the aether.

By the time Albert Einstein had introduced special relativity, the aether theories used by physicists were being discarded among the scientific intelligentsia of the time. Einstein had shown that Maxwell’s equations, which form the mathematical foundation of classical electromagnetism, classical optics, and electric circuits, did not need the idea of the aether for the transmission of these forces. Yet even Einstein admitted that his own theory could be thought of as an aether theory because it seemed to show that there were physical properties in the seemingly empty space between objects.

As the 20th century rolled on the idea of the aether continued to be propagated among theosophists, adherents of the new thought movement, and various other occultists. In 1907 the French philosopher Henri Bergson spoke of the Élan vital in his book Creative Evolution. Bergson used this concept as an explanation for evolution and development of organisms, which he linked closely with consciousness.

Psychologist Wilhelm Reich made his own discovery of the life force in the 1930s, which he called orgone. As a direct student of Freud, his concept of orgone was the result of work on the psycho-physiology of libido, of which he took an increasingly bio-energetic view. After Reich emigrated to the United States his attention increasingly turned to speculation about the nature of the universe, and ideas about biological development and evolution, even the weather. Reich was more at home in the mode of “natural philosopher” or “natural scientist” than in the ideologically strict compartmentalization that had occurred in the field of psychology.

Despite his documentation of the successful effects of orgone therapy, and his devices such as the orgone accumulator and cloud buster, Reich remained a heretic among doctors and scientists.  He lost his teaching position at the New School in 1941 after telling the director he had saved several lives using orgone therapy. Due to his associations as a socialist he was arrested by the FBI after the bombing of Pearl Harbor. He continued to be persecuted throughout the 1950s. It’s an interesting story and too long to tell in detail for the present purposes, but suffice it to say through various injunctions the FDA destroyed his orgone accumulators and later burned six tons of his journals, books, and papers.

Then he was thrown in jail where he died. All because he was audacious enough to believe in, study, and experiment with the life force, what he called orgone, and what the ancients have called aether.


Those who haven’t been afraid to stand on the fringe and hang out in the margins have continued to research and investigate the nature of the aether and various means for utilizing it. There is a lot of work and experimentation to be done, and the relationship between musical healing modalities, electronics and the aether promises to be an area full of vitality.

As a wellspring of creativity the aether continues to inspire musicians and composers. Robert Ashley asked the question “Will something of substance replace the Aether? Not soon. All the parts are in disarray.”

Ashley also said “Aether fills the void, as in not knowing when you might get a chance to hear somebody make music, or where is the nearest town where something might be going on… or whether you got the idea that wakes you up at night from the hard-to-hear part of what comes over the radio, or from something you read about in a magazine about electricity, or from something you just dreamed up.”

Artists, writers and musicians such as him have continued to think of the aether and tap into it as a prime source. The music of the spheres continues to inspire those of us down here on earth who do their best to translate it into new compositions. Musicians continue to look up to the stars as a source of creativity. They take that aetheric light from the stars into themselves to create new works that show our relationship with the rest of the cosmos.
Where do ideas come from? Transmitted over the aether they spill into the head of the artist, who is the vessel. They give voice to the aether. With the tools of radio, telecommunications, images and data from satellites and the sonic possibilities opened up by electricity, they have a lot of rich source material to translate the voice into compositions. This chapter explores some of these works inspired by the celestial realms. ​
Do you like what you have read here? Then consider signing up for Seeds from Sirius, the monthly webzine from Sothis Medias. It rounds up any blog posts here as well as containing much additional material, news of various shortwave and community FM transmissions, music, deindustrial fiction, strange meanderings and more:
http://www.sothismedias.com/seeds-from-sirius.html


Trawling the Megahertz

10/23/2020

Not all the musicians who use radios to make music take the output from the transmission directly into the input of the mixing board or microphone to capture the voice of the aether. And not all of them use it as a source of direct audio sampling either. Some have trawled the megahertz and found inspiration in the voices they heard on radio phone-in shows, in the banter to be heard on the citizens band, and in the back and forth between hams in long distance rag chews over the shortwaves.
Paddy McAloon found so much inspiration listening to the radio that he created an entire album and based its lyrical elements on the various conversations he had heard and taped at his listening post. Paddy had been writing songs since he turned 13, but in 1999, at the age of 42, the ease with which he could write songs suddenly changed. Not at the level of his mastery of melody, hooks, and poetic pop lyricism, but on a physical level, when he suffered the detachment of both his retinas, one right after the other. Suddenly blind, he was bound to the house with nothing but free time.
Not only had Paddy been writing songs since he was 13, but he’d been in the habit of making chart topping albums with his band Prefab Sprout, started with his brother Martin in Witton Gilbert, County Durham, England. The band formed in 1977, playing in a down-at-the-heels gas station owned by their father, and were joined by a friend from down the street, Michael Salmon, on drums. Five years later, after forging some musical chops, they went into the studio to record their first single, Lions in My Own Garden (Exit Someone), with a b-side called Radio Love. The lyrics are seemingly innocuous but hide a tragic undercurrent, and it’s hard not to read an eerie prescience into the tune for Paddy’s later album Trawling the Megahertz. It starts with the static and whine of a shortwave set, and ends with the same, and the voice of a distant announcer.
“Requests for everyone / Love is on / Radio love is strong / Radio love / Shortwave for everyone / It was on the news, someone had drowned / She keeps hearing it over / All night long / All night long”. 
All the years spent listening to everything from David Bowie, to Igor Stravinsky, to T. Rex, put the band in good stead as Paddy continued to refine his craft of songwriting. Having written most of his songs on guitar, Paddy had a crisis around the instrument, thinking he’d exhausted it; he picked up a Roland synth and started using that to write songs just as the band was poised to start making albums. It was around this time that vocalist Wendy Smith was recruited for the band.

In 1984 came Swoon, followed by 1985’s Steve McQueen. This was followed by another string of albums leading to Jordan in 1990. The band then went on hiatus until work began on Andromeda Heights, the last album to feature Wendy as vocalist. It was released in 1997.
Two years later Paddy’s retinas detached, possibly from congenital factors. Repairing his eyes required extensive surgery, and he was left blind and stuck in the house. Composing hunched over the keyboard had become impossible; he was starting to get twitchy, unable to work on new songs, and unable to read. Radio became his solace.

“I found all this frustrating as I've been writing songs since 1971, and am subject to itchy, unpleasant withdrawal symptoms if I cannot work. So, unable even to read, I passed the time by listening to and taping all kinds of T.V and radio programmes, concentrating on phone-ins, chat shows, citizen's band conversations, military encryptions - you name it, I was eavesdropping on it.”

McAloon found a lot of what he taped to be boring and banal, but within all the day to day chit chat of people talking on the air, he caught glimpses of the sublime, and started having moments of inspiration. In his mind he began to edit what he had heard into the spoken word lyrics for what would become his next album.

"Odd words from documentaries would cross-pollinate with melancholy confidences aired on late night phone-ins; phrases that originated in different time zones on different frequencies would team up to make new and oddly affecting sentences. And I would change details to protect the innocent (or guilty), to streamline the story that I could hear emerging, and to make it all more...musical, I suppose."
Using the snippets of radio conversation he had recorded, and further riffing off "mental edits" he’d made of these, he found the poetic moments within the plaintive complaints he heard on the  radio and mixed these with things he had heard on various documentaries. A specific word like "ether" or "anesthetic" would strike him, and he started using these as launch points for his own writing. All the radio transmissions had been like a fertilizer, seeding his imagination.

“After awhile I got enough of these sentences and radio thoughts, and I thought, well, I’m not going to be able to finish the thought by listening to radio to find the words I need, so sometimes I’ll fill them in.”

He started writing musical parts to go with the words on his 1987 era Atari computer. Paddy had developed a philosophy of not wanting to use all the latest gear. “You find a piece of software you can use, you do it well, and then someone will tell you the computer you've got will break down, it's old now, you'll need to go over to a Mac. Let me tell you - I still use an Atari computer from 1987. I didn't like where the software went after that. Even on the Mac. I don't care how sophisticated it got - I knew how to use the old software in my limited way. And, finally, my eyes are not great. So I resent the learning curve with new equipment. I don't have Garage Band. I don't have a Mac. That’s what it is with me and old technology. I can't be bothered. Nor do I have the money to spend in the way I used to have. I don't have a massive guaranteed advance from a record company. I work very slowly by myself. BUT - I have a message on my studio wall that says: ‘Imagine that you crash landed on a desert island, but you've survived, you've walked away, and there's a small town there, with a recording studio, the recording studio is very old-fashioned. How thrilled would you be, having survived your plane crash and how thrilled you'd be for the most basic recording equipment?’ That's me. That's me in my home studio full of this old gear that's out of date that other people can laugh at.”

Working with the Atari computer to compose the title track on I Trawl the Megahertz, the limitations of the software gave the piece a form to materialize within and determined the length of the title track. “I spent a long time working on that just as a computer piece, using the same old rubbishy synth sounds. Do you know why it is as long as it is? This is a terrible thing to tell you! 22 minutes of music is the length you'll get on an Atari! That's a bad reason for it. But in the end when I figured out the structure of it was just gonna fall within what an Atari could do.”
The piece ends up being something of a movie to watch with your eyes closed, a narrative to listen to if you have been left without sight. Culled from the airwaves, it is also a perfect piece to be played on the radio. While Paddy is mostly known for his pop songs, this long player of a track is in a way akin to the kind of storytelling heard in the music of Laurie Anderson and in the operas of Robert Ashley. It is so perfectly suited for transmission itself. While not a radio drama, it can be listened to as one, and these kinds of works could form the basis for a revivification of radio drama, infused with specially composed music, a delight to people near and far who happen to tune in, out of the blue and right on schedule.
And though written on the Atari, the album proper ended up being recorded with a classical crossover ensemble, Mr McFall's Chamber. Co-producer Calum Malcolm and composer David McGuinness helped Paddy take his original MIDI versions and produce scores from them for the final recordings. The final result is a breathtaking excursion into neo-romantic chamber pop. Echoes of Claude Debussy, Maurice Ravel, and Leonard Bernstein swirl and coalesce with the tender reading of his poetic text by vocalist Yvonne Connors.
On the second side there are eight more tracks, mostly instrumental. "I'm 49" is the only one to use samples of the actual recordings he’d made off the air, delivering a melancholic meditation on one man’s post-divorce mid-life crisis. At a time when Paddy had been suffering from the trials and travails of his own life, and the curveballs it had thrown at him, he plumbed the depths of our shared human condition, and found companionship and comfort in the voices that called out to him across the expansive aether.
Special thanks to One Deck Pete for reminding me of this story. 

Read the rest of the RADIOPHONIC LABORATORY series. 

REFERENCES:

Paddy McAloon, I Trawl the Megahertz, Liberty EMI, 2003

Prefab Sprout, I Trawl the Megahertz (Remastered), Sony Music 2019


https://www.theguardian.com/culture/2020/jun/30/paddy-mcaloon-thomas-dolby-how-we-made-steve-mcqueen-album

https://www.hotpress.com/music/interview-prefab-spout-paddy-mcaloon-trawl-megahertz-tales-22809556

https://www.irishtimes.com/culture/music/prefab-sprout-s-paddy-mcaloon-like-gandalf-on-his-way-to-work-in-the-house-of-lords-1.3765658

https://archive.org/details/PaddyMcAloonPaddyMcAloonITrawlTheMegahertzInterview

http://www.hanspeterkuenzler.com/paddy-mcaloon.html




Data Sonification: From Mographs to Codes of Soil

10/14/2020

Another way Information Theory has been used in the making of music is through the sonification of data. It is the audio equivalent of visualizing data as charts, graphs, and connected plot points on maps full of numbers. Audio, here meaning those sounds that fall outside of speech categories, has a variety of advantages over other forms of conveying information. The spatial, tempo, frequency and amplitude aspects of sound can all be used to relay different messages.

            One of the earliest and most successful tools to use sonification has been the Geiger counter, from 1908. Its sharp clicks alert the user to the level of radiation in an area and are familiar to anyone who is a fan of post-apocalyptic sci-fi zombie movies. The faster the tempo and the greater the number of clicks, the higher the amount of radiation detected in an area.
A few years after the Geiger counter was invented, Dr. Edmund Fournier d'Albe came up with the optophone, a system that used photosensors to detect black printed typeface and convert it into sound. Designed to be used by blind people for reading, the optophone played a set group of notes: g c' d' e' g' b' c. The notes corresponded with positions on the reading area of the device, and a note was silenced if black ink was sensed. These missing notes showed the positions where the black ink was, and in this way a user could learn to read a text via sound. Though it was a genius invention, the optophone didn’t catch on.
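As a rough illustration of the masking scheme described above (not a reconstruction of d'Albe's actual circuitry), a few lines of Python can model which notes of the chord keep sounding for a given scan column. The note-to-frequency table uses approximate equal-tempered values and is an assumption made here for illustration.

```python
# Toy model of the optophone's note-masking idea: each row of the scanning
# area is tied to a note, and a note is silenced wherever black ink is sensed.
# Frequencies are approximate equal-tempered values, chosen for illustration.

NOTE_FREQS = {
    "g": 196.00, "c'": 261.63, "d'": 293.66,
    "e'": 329.63, "g'": 392.00, "b'": 493.88, "c''": 523.25,
}

def sounding_notes(ink_column):
    """ink_column maps a note name to True if black ink was sensed at that
    row of the scan. Returns the notes (with frequencies) that stay audible."""
    return {n: f for n, f in NOTE_FREQS.items() if not ink_column.get(n, False)}

# Example: a scan column where ink covers the rows tied to c' and e'.
print(sounding_notes({"c'": True, "e'": True}))
# A trained listener learns letter shapes from which notes drop out
# as the scanner moves across the printed line.
```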
Other areas where sonification did get used include pulse oximeters (a device that measures oxygen saturation in the blood), sonar, and auditory displays inside aircraft cockpits, among others.

In 1974 a trio of experimental researchers at Bell Laboratories conducted the earliest work on auditory graphing: Max Mathews, F.R. Moore, and John M. Chambers wrote a technical memorandum called “Auditory Data Inspection.” They augmented a scatterplot (a mathematical diagram using Cartesian coordinates to display values for two or more variables in a data set) using a variety of sounds that changed frequency, spectral content, and amplitude modulation according to the points on their diagram.

Two years later the technology and science philosopher Don Ihde wrote in his book, Listening and Voice: phenomenologies of sound, "Just as science seems to produce an infinite set of visual images for virtually all of its phenomena--atoms to galaxies are familiar to us from coffee table books to science magazines; so 'musics,' too, could be produced from the same data that produces visualizations." Ihde pointed to using the tool of sonification for creativity, so that we might in effect, be able to listen to the light of the stars, the decomposition of soil, the rhythm of blood pulsing through the veins, or to make a composition out of the statistics from a series of baseball games.

It wasn’t long before musical artists headed out to carve a way through the woods where Ihde had suggested there might be a trail. 
Sonification Techniques
           
There are many techniques for transforming data into audio data. The range of sound, its many variables and a listener’s perception give ample parameters for transmitting information as audio. Increasing or decreasing the tempo, volume, or pitch of a sound is a simple method. For instance, in a weather sonification app temperature could be read as the frequency of one tone that rises in pitch as the temperature climbs and lowers as it falls. The percentage of cloud cover could be connected to another sound that increases or decreases in volume according to coverage, while wind speed could be applied as a resonant filter across another tone. The stereo field could also be used to portray information, with a certain set of data coming in on the left channel, and another set on the right.
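A minimal sketch of this kind of parameter mapping, along the lines of the hypothetical weather app just described: temperature sets the pitch of a tone and cloud cover sets the level of a noise bed. The ranges and scalings here are arbitrary assumptions, not a standard.

```python
import numpy as np

SAMPLE_RATE = 44100

def weather_to_audio(temp_c, cloud_fraction, duration=2.0):
    """Map temperature to the pitch of a sine tone and cloud cover to the
    loudness of a noise bed. The 0-40 C -> 220-880 Hz range is arbitrary."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    freq = np.interp(temp_c, [0, 40], [220, 880])        # warmer -> higher pitch
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    hiss = cloud_fraction * 0.3 * np.random.uniform(-1, 1, t.size)  # cloudier -> louder hiss
    return tone + hiss

# A mild, half-overcast day rendered as two seconds of audio samples,
# ready to be written to a sound file or sent to an audio device.
samples = weather_to_audio(temp_c=18, cloud_fraction=0.5)
```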

            The audio display of data is still in a wild west phase of development. No standard set of techniques has been adopted across the board. Due to the variables of the information presented, and the setting where it is presented, researchers in this field are working towards determining which sets of sounds are best suited for particular applications. Programmers are writing programs or adapting existing ones to be able to parse streams of information and render them according to sets of sonification rules.

            One particular technique is audification. It can be defined as a "direct translation of a data waveform to the audible domain." Data sequences are interpreted and mapped in time to an audio waveform. Various aspects of the data correspond to various sound pressure levels. Signal processing and audio effects are used to further translate the sound as data. Listeners can then hear periodic components as frequencies of sound. Audification thus requires large sets of data containing periodic components.

            Developed by Greg Kramer in 1992, the technique was meant to allow listeners to hear the way scientific measurements sounded. Audification has a number of applications in medicine, seismology, and space physics. In seismology, it is used as an additional method of earthquake prediction alongside visual representations. NASA has applied audification to the field of astrophysics, using sounds to represent various radio and plasma wave measurements. There are many musicians who are finding inspiration in using the sets of data culled from astronomy and astrophysics in the creation of new works. It’s an exciting development in the field of music.
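Staying with the "direct translation" definition above, a rough audification sketch: a long one-dimensional data series (a synthetic stand-in here, in place of a real seismic or plasma-wave trace) is normalized and written straight to a WAV file, so that slow periodic components in the data become audible frequencies. The playback rate and file name are arbitrary choices.

```python
import numpy as np
from scipy.io import wavfile

def audify(data, sample_rate=8000, out_path="audified.wav"):
    """Directly map a 1-D data series onto sample amplitudes. The playback
    rate compresses long stretches of measurement into seconds of sound,
    shifting slow oscillations up into the audible range."""
    x = np.asarray(data, dtype=float)
    x = x - x.mean()                      # remove any DC offset
    x = x / (np.abs(x).max() + 1e-12)     # normalize to the range [-1, 1]
    wavfile.write(out_path, sample_rate, (x * 32767).astype(np.int16))

# Synthetic stand-in for a long, slowly oscillating measurement series.
t = np.linspace(0, 1000, 200_000)
audify(np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size))
```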

            American composer Gordon Mumma had been inspired by seismography and incorporated it into his series of piano works called Mographs. A seismic wave is the energy moving through the Earth's layers caused by earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions. All of these events give out low-frequency acoustic energy that can be picked up by a seismograph. A seismogram has wiggly lines going all across it. These are all the seismic waves that the seismograph has recorded. Most of the waves are small ones that no one felt; little tiny waves called microseisms can even be caused by ocean waves hitting the beach, the heavy traffic of rumbling semi-trucks, and other things that might cause the seismograph to shake. Little dots along the graph show the minutes so the seismic waves can be seen in time. When there is seismic activity the P-wave is the first wave to be bigger than the small normal microseisms. P-waves are the fastest moving seismic waves and these are usually the first to be recorded by a seismograph. The next set of waves on the seismogram are the S-waves. S-waves have a higher frequency than the P-waves and appear bigger on the seismogram.

Mumma based the structure and activity of each Mograph around data derived from seismogram recordings of earthquakes and underground nuclear explosions. The seismograms he was looking at were part of cold-war research that attempted to verify the differences between various seismic disturbances. The government wanted to know if it was a nuke that had hit San Francisco or just another rumbling from the earth. For Mumma, the structural relationships between the way the patterns of P-waves and S-waves traveled in time, and their reflections, had the “compositional characteristics of musical sound-spaces”. One of the strategies he used to sonify the seismograms into music was to limit the pitch-vocabulary and intervals in each work. This gave Mumma the ability to draw attention to the complexity of time and rhythmic events within each Mograph.
With these themes in mind, listening to the Mograph is like hearing tectonic plates being jostled around, here hitting each other abruptly, and there in a slow silence that grinds as two plates meet. It is the sound of very physical waves rumbling through earth and stone and dirt, and beneath concrete, as interpreted by the piano, or pairs of pianos used in some arrangements. In making these pieces from seismograph data Gordon Mumma sketched a process for others to use in future works of sonification. 
By the Code of Soil

Another down to earth sonification project deals with the soil beneath our feet. It started out as a commission for artist Kasia Molga from the GROW Observatory, a citizen science organization working to take action on climate change, build better soil and grow healthier food, while using data provided by the European Space Agency's Copernicus satellites to achieve their goals.

Kasia began her project by analyzing the importance and meaning of soil, and she looked at what is happening to the soil now and how that impacts farmers, urbanites, and, well, everyone. She listened to the concerns of the scientists at GROW and spent a chunk of time parsing the data from the GROW sensors and the Sentinel-1A satellite that is used to assess soil moisture across Europe.

In the course of her background work Kasia wondered how she could get important information about soil health out there to the largest number of people, and she hit upon the idea of using a computer virus. The resulting project, By the Code of Soil, ended up working with people's computers and smartphones. The program didn’t install any malware, self-replicate, or actually infect anyone’s computer, but rather worked as a way to interrupt those people who spend most of their time in front of screens and remind them of the real analog world underneath their feet.

She recruited a few other people to work with her on the project, tech artists Erik Overmeire and Dan Hett, and musician Robin Rimbaud, aka Scanner. Their project turns soil data into digital art that appears on a participant's computer (downloaded as an app) whenever the land-mapping satellite Sentinel-1A passes overhead.

The Sentinel satellite missions include radar and super-spectral imaging for land, ocean and atmospheric monitoring. Each Sentinel mission is based on a constellation of two satellites that fulfill and revisit the coverage requirements for each individual mission. This provides a robust dataset for researchers to access here on Earth. Sentinel-1 provides all-weather, day and night radar imaging for land and ocean services. GROW Observatory has gotten involved by deploying thousands of soil sensors all across Europe to improve the accuracy of the observations from the orbiting birds.
Kasia designed the video art for the piece. Twice a day the Sentinel-1 passes overhead in Europe and the artwork and sounds change in real time as driven by the data.
Kasia writes, “The artwork takes control of user’s computer for a minute or two in full screen mode. It manifests itself in a quite unexpected manner – that is it only will become visible on the computer when the Sentinel-1A satellite passes by the computer’s location – approximately twice within 24 hours but never at the same time of the day.” This is how it reacts like a virus, erupting unexpectedly (unless you happen to be tracking the movement of the satellite).

To portray the soil data visually Kasia started with a pixel and a matrix. She thought of these as single grains of soil, from which something else can be created and emerge. She used visual white noise, like that of a TV tuned to a channel with no broadcast, to show a signal coming out of the noise when the satellite passes, activating the algorithm written for the piece. “Various configurations of the noise – its frequencies, shapes, speed of motion and sizes – reflect the moisture, light, temperature and texture of the land near to the participant’s computer based on its IP address.”

Meanwhile Scanner handled the sound design for the project. He took a similar approach as Kasia and looked at the granular aspects of sound. “Trying to score data was a seemingly impossible task. How to soundtrack something that is ever changing, ever developing, ever in flux, refusing to remain still. Most times when one accompanies image with sound the image is locked, only to repeat again and again on repeated viewing. By the Code of Soil refuses to follow this pattern. Indeed it wasn’t until I watched the work back one evening, having last seen it the previous morning, that I realized how alive data can really be.

The only solution sonically was to consider sound, like soil, as a granular tool. The sound needed to map the tiniest detail of alterations in the data received so I created sounds that frequently last half a second long and map these across hundreds of different possibilities. It was like a game of making mathematics colorful and curiously one can only hear it back by following the App in real time. I had to project into the future what I felt would work most successfully, since I never knew how the data would develop and alter in time either. As such the sound is as alive as the images, as malleable as the numbers which dictate their choices. Data agitates the sound into a restless and constantly mutable soundscape.”

He spent many hours designing a library of sounds with Native Instruments Reaktor and GRM Tools and then mapping them into families. These families of sound were in turn mapped onto various aspects of the data. With the data from the satellite and from the ground sensors feeding into the program, different sets of sounds and visuals were played according to the system.
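The project's own code isn't reproduced here, but the general approach of bucketing incoming readings and drawing a short grain of sound from a matching family can be sketched roughly as follows. The family names, thresholds, and file names are invented for illustration.

```python
import random

# Hypothetical sound families: in the real project each family holds many
# half-second samples designed in Reaktor and GRM Tools; here they are
# just placeholder file names.
SOUND_FAMILIES = {
    "dry":   ["dry_01.wav", "dry_02.wav", "dry_03.wav"],
    "moist": ["moist_01.wav", "moist_02.wav"],
    "wet":   ["wet_01.wav", "wet_02.wav", "wet_03.wav"],
}

def pick_sound(soil_moisture):
    """Bucket a soil-moisture reading (0-1) into a family, then pick one grain."""
    if soil_moisture < 0.2:
        family = "dry"
    elif soil_moisture < 0.6:
        family = "moist"
    else:
        family = "wet"
    return random.choice(SOUND_FAMILIES[family])

# Each new sensor reading agitates the soundscape with another short grain.
for reading in [0.05, 0.33, 0.71]:
    print(pick_sound(reading))
```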
The success of this project for Kasia Molga and Scanner has led to them working together again in creating another multimedia work, Ode to Dirt, using soil data as a source code, for content, and inspiration. In this piece “(de)Compositions bridges the source (input) and the data (output) through inviting viewers to take part in a multi sensory experience observing how the artwork - a fragment of the ‘land’ - changes through time - its form, sound and even smell - determined by the activities of the earthworms.”
READING MUSIC: LISTENING AS INFORMATION EXTRACTION

Many musicians know how to read sheet music. For composers it’s a basic tool. But what if average people learned how to read music, that is, listen to a composition and extract information from it as if it were a couple of paragraphs of text, or for really long works, a whole book? 

It strikes me that this is a distinct possibility as the field of sonification grows. Just as we have learned to signify and interpret letters and words, we may eventually come to have another shared grammar of sound that allows people to listen to the music of data and interpret that text with our ears.
This new way of reading music as information has the possibility of transforming the field of radio as the imagination is opened up to new ways of receiving knowledge. It would be interesting to create radio that included sonified data as a regular part of news stories.
This project of mapping knowledge to sound is implicit in Hesse’s description of the Glass Bead Game. Sonification is another way to bring it about as a reality. Yet to make the most of this listening opportunity, to listen to music in a way analogous to reading a book, we will have to grow new organs of perception. Pauline Oliveros started the work of carving out new pathways for the way we perceive the world in her Deep Listening workshops, concerts and work in general. This work is being continued by her partner Ione, and others trained in the skills of Deep Listening. Kim Cascone has also taught workshops on the subject of what he calls Subtle Listening. Through a variety of meditation and other exercises Kim teaches his students how to “grow new organs of perception”. Perhaps through techniques such as these we may learn to listen to data in a way that engages the imagination and transforms it into knowledge. 

REFERENCES:

Listening and Voice: Phenomenologies of Sound by Don Ihde, State University of New York Press, 2007

David Tudor & Gordon Mumma, Rainforest / 4 Mographs, Sections X and 7 from Gestures, New World Records, 2006

https://archive.growobservatory.org/code-of-soil.html

https://sentinel.esa.int/web/sentinel/missions/sentinel-1

https://vertigo.starts.eu/calls/2017/residencies/ode-from-the-dirt/detail/

Robin Rimbaud (project documentation sent in personal communication, September 29 2020)

http://www.studiomolga.com/codeofsoil/
http://scannerdot.com/
https://vertigo.starts.eu/article/detail/by-the-code-of-soil-in-greece/
https://sonicfield.org/2014/03/subtle-listening-how-artists-can-develop-new-perceptual-circuits/

https://www.deeplistening.rpi.edu/deep-listening/
Read the rest of the RADIOPHONIC LABORATORY series. 

The System of LICHT

9/30/2020

Karlheinz Stockhausen’s opera cycle LICHT is many things and as a great work of art it is subject to multiple, if not endless, interpretations. These interpretations are multiple because the opera is made up of living symbols. As Carl Jung taught, it is possible to distinguish between a symbol and a sign. A symbol is the best possible expression for something that is unknown, whereas a sign is something specific, such as the insignia worn by a military officer showing his specific rank.

 For this work the specific and very rich symbolism of LICHT will be set aside to look at it from a structural and systems point of view. The way Stockhausen gave his work specific limitations shaped the work in unique ways. His adept and intuitive grasp of combinatorial procedures within the limits of the system gave him a wide ranging freedom to play with the materials he had chosen, shaping the raw ingredients into an astonishing and sensual feast of sound, color, and movement.
Opening up the lid of the opera cycle it’s possible to see how its individual components create a musical engine whose individual circuits sync together in a series allowing for a dynamic flow of energies and psychoacoustic forces. Let’s look under the hood of LICHT to see how its various pieces fit together.
Conception of LICHT: Formula & Super Formula   

            Great ideas often come as revelatory seeds into the minds of those who are prepared. By the mid-seventies Stockhausen had been composing for a quarter of a century and he had already explored a vast territory of sound, implementing new ideas for the arrangement of music in time and space. He had played with intuitive music and aleatory processes, and had mastered new electronic music techniques in the studios of WDR, just for starters. The soil of his mind and spirit was fertile, waiting for the next big idea to be planted.

Another tactic basically invented by Stockhausen was formula composition, and it came out of his deep engagement with serialism. It involves the projection, expansion and ausmultiplikation of either a single melody-formula, or a two- or three-voice contrapuntal construction. In serial music the structuring features remain basically abstract, but in formula composition properties such as duration, pitch, tempo, timbre, and dynamics are also specified from the formula. By using a concise and specific tone succession based on the single melody formula, both the macro structure and the micro details of the composition can be derived.
The roots of his method of formula composition can be traced back to his once withdrawn orchestral piece Formel, where the first basic pattern of notes is gradually transformed over the course of the work. The central pitch is first broadened out before the notes are removed, leaving only the low and high extremes. He continued to use serial operations on his next batch of works, Kreuzspiel and Punkte, and then introduced musical pointillism into the methods as explored in Kontrapunkte and Gruppen.
Then for a time he moved on to other musical tactics and explorations, but came back to the practice with ferocity in Mantra from 1970. Written for two ring-modulated pianos, the pianists are also required to play chromatic cymbals and a wood block. One of the players also has a short-wave radio tuned to a station sending morse code, or, when CW isn’t readily available live on the air, a tape recording of morse code is played. It was the first composition he wrote where he used the term formula, and it was one of many watershed moments in his musical thinking. The formula involved the expansion and contraction of counterpointed melodies.
His next piece to use formula composition was Inori from 1974. By this time Stockhausen had already been working extensively with writing music that incorporated elements of theater. Inori took it to another level and he had the insight that he could use the formula, not just for music, but as a way to compose gestures. This was another component that would become essential in LICHT.
Inori is a long work, with performances lasting around seventy minutes. The formula for the piece is made up of fifteen notes using 5, 3, 2, 1 and 4 pitches respectively. When the formula is used on the macro scale of the work, these five phrases are split into five segments that Stockhausen uses to create a narrative sequence. Robin Maconie says it “lead[s] from pure rhythm . . . via dynamics, melody, and harmony, to polyphony: —hence, a progression from the primitive origin of music to a condition of pure intellect. The entire work is a projection of this formula onto a duration of about 70 minutes”.
In 1977 Stockhausen went back to Japan to work on a commission for the National Theater of Tokyo. The idea for intermodulation of music had come to him with his first Japanese commission, Telemusik, and he had played his music alongside nineteen ensemble musicians in the special spherical chamber designed for him at the World Fair in Osaka in 1970, for about five and a half hours a day, 183 days in a row. Japan had been a good country for his musical expression. The piece he came to work on when LICHT was conceived was to be written for traditional Gagaku orchestra and Noh actors. The dramatic elements for the production however came to him in a dream, just one of many dreams that gave him direct inspiration for compositions. While composing what became Der Jahreslauf (Course of the Years), he had a revelation about a way to represent different levels of time by different instrument groups: millennia are depicted by three harmoniums, centuries by an anvil and three piccolos, decades by a bongo and three saxophones, and years by a bass drum, harpsichord and guitar. These instrument groups became representations of vast forces and scales of time.

            This idea of composing music around the theme of various increments of time stayed with the composer for the rest of his life. While working on this commission, another idea was also transmitted into his mind, the super-formula that became the basis for LICHT. In a flash a small seed became the basis for a work of cosmic proportions. Subsequently he used Der Jahreslauf as the first act of Dienstag aus LICHT (Tuesday from Light). 

            In LICHT he realized his formula technique could be considerably expanded. The entire cycle of seven operas is based on three counterpointed melody formulas. Each of these is associated with one of the three principal characters that make up the dramatic element of the production. (Stockhausen himself said the formulas are the characters.) The melodies then define the tonal center and durations of scenes, and, zooming in, give detailed melodic phrasing to more refined elements. The three characters are Eve, Lucifer, and Michael, and they are each associated with a specific instrument: basset horn, trombone, and trumpet in turn.
This explains formula composition, but what about a super-formula? 
In 1977 Stockhausen had been composing for just over twenty-five years.  In the super-formula he synthesized nearly all of his musical ideas into a musical tool that would occupy him for the next twenty-seven years until 2003 when the last bars for Sonntag aus LICHT were drying on the staff paper.

He had the insight to take the three formulas he had come up with for Eve, Lucifer and Michael and layer them horizontally on top of each other to make the super-formula. Now they existed as one, each with their own layer, named after the character, or force, in question. The super-formula then gets subdivided again, vertically, into seven portions, of two to four measures each. These seven vertical rows form the days of the week.

Combined, the horizontal and vertical rows make up the rich matrix out of which the overall structure of LICHT is built. To expand the formula in time, every quarter note of the super-formula is equal to 16 minutes of music. This is how the maestro (or magister) used it to determine the durations of the opera cycle's various acts and scenes.
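As a toy illustration of that expansion (the note values below are made up, not read from the score), the arithmetic is simply note length in quarters times sixteen minutes:

```python
# Each quarter note of the super-formula is stretched to 16 minutes of music,
# so a passage's duration follows from its note values. These values are
# illustrative, not taken from the actual super-formula.
MINUTES_PER_QUARTER = 16

passage = [1.0, 0.5, 0.5, 2.0]    # note lengths in quarter notes (2.0 = a half note)
duration = sum(passage) * MINUTES_PER_QUARTER
print(f"{duration:.0f} minutes")  # 4 quarter notes' worth -> 64 minutes of music
```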

Stockhausen also decided to create a kind of skeleton key, bare bones version of the super formula for each of the three characters. These he called “nuclear formulas” (kernformel) and consisted of just the pitches, duration and dynamics. Boiling the bones down even further provides the broth that the music is bathed in. When the nuclear formulas are reduced to just the notes what is left is essentially a serialist tone row. These are known as the kernels, central tones, or nuclear tones. Nuclear, because they form the very atoms of the music.

With all of this in place the fun has a chance to begin. The super-formula can now be used in all manner of ways. Sometimes Stockhausen employed it in an inverted or retrograde fashion (upside down or backwards). It is very often stretched out across the time frame of scenes and whole acts. Other times it is transposed vertically. Once the listener becomes familiar with each of the formulas for the characters or forces, it is possible to pick out those forces at work in the music even though the formula is not really used as a recurring theme in the typical sense of classical music. Rather, as Ed Chang said, “In LICHT, the MICHAEL, EVE and LUCIFER formulas are used more as structural forces whose tonal characteristics exert a kind of planetary gravity over the surrounding musical ether.”

            LICHT is a complete system. The superformula, nuclear kernels, and nuclear tones form the mathematical and musical parts of the system's ecology. The content of the opera, its symbolism based around the days, and the spiritual realities of Eve, Michael, and Lucifer are another aspect of the system. All of this gave Stockhausen the raw material out of which to craft his magnum opus. The music and symbolism mix together and all are now subject to a remarkable game of combination and recombination. The system of LICHT forms the matrix of possibilities, and displayed within that matrix are an extraordinary blending and synthesis of constituent forms.

The idea of ausmultiplikation, which can be translated as "multiplying-out", bears further looking at in terms of how formula composition creates musical forms mirrored on the macro and micro scales. Stockhausen described the technique as when a long note is replaced by shorter "melodic configurations, internally animated around central tones". This bears a strong resemblance to the Renaissance musical technique of diminution or coloration, where long notes are divided into a series of shorter, frequently melodic, values. But Stockhausen also used the term to refer to when he substituted a complete or partial formula for a single long tone, often as background layer projections of the formula. Formula composition and its various components like ausmultiplikation can be seen as Stockhausen’s way of practicing the Glass Bead Game in music.
Robin Hartwell had the insight that when this is done at more than one level the results resemble those of a fractal. If formula composition is fractal-like (and Stockhausen also used the idea of spirals throughout his work), one way of looking at LICHT is as a composed fractal music. Zooming in and out, the same structure is played both minutely on the microscopic level and at large on the macroscopic level, across the range of an entire work. Boiling the musical components down to microscopic levels, and dilating them out to the macro, was one way Stockhausen prevented signal loss and maximized the transmission of his musical information. The super-formula is present and exists on every level and in every moment of LICHT.
Modular Music

Another way Licht can be seen as a musical system is by how it is structured in component modules. First of all, it should be considered that each of the operas is a work capable of being appreciated and understood unto itself, without having to hear or see the other sections. While listening to the whole cycle certainly enhances the experience of individual parts, those individual parts can also be enjoyed one at a time in and of themselves. Each opera, act, scene is self-sufficient. Even some parts of scenes can be extracted as solitary works. Certain other extra-curricular or auxiliary works have also been extrapolated out of the formulas of LICHT and its modular structure. All of these contain the essence of LICHT and give the listener one of many ways of enjoying the various elements of the cycle.  
           
This was all made possible due to the practical aspects of Stockhausen’s life as a composer. After he began LICHT, when he received a commission for a new work from this or that person or cultural institution, prescribed for this or that choir group, string quartet, or other group of instrumentation, he would incorporate the work on that commission into LICHT. It was an elegant solution that allowed him to finish the massive project.
           
Some of the examples of modular works that can be extracted from LICHT include Klavierstück XII and Michael’s Reise from Donnerstag; Weltraum is an assemblage of the electronic greetings and farewells of Freitag; Kathinka’s Chant for flute and electronics is an extract from Samstag; Angel Processions for choir comes from Sonntag; Ypsilon for flute and Xi for basset horn from Montag; the electronic layer from the second act of Dienstag becomes the piece Oktophonie; and the infamous Helicopter String Quartet is a section from Mittwoch. These are just a few of the pieces he was able to write in a modular fashion to fulfill a commission and thus complete a section of LICHT. Alternately he was able to adapt an already written section of LICHT as a module to fulfill a commission and thereby create a smaller chamber type work.
Ars Combinatoria
            These smaller modules, extracts and auxiliary works from LICHT represent another fractal like aspect of the cycle as a system. They are separate and yet also a part of the system. The formula and super-formula interact with themselves, alongside the set symbolism of the days of the week, to produce an array of combinations perceived and permutated through Stockhausen’s intuitive imagination. Through this thoroughly disciplined act of creation and applied artistry Stockhausen has shown himself to be a “Magister Ludi” or master of the Glass Bead Game.

            He has fused mathematics and music together, and along these strands placed connecting beads from the various religious and mystical traditions of the world. He used traditional correspondences, such as in Samstag, for instance, associated with Saturday, the planet Saturn, and its symbolism of contraction, limitation, and death. In Samstag he wrote the section Kathinka’s Gesang as Lucifer’s Requiem. Thus the mysteries of death become a main feature of this section of the work. In this piece the flautist performs a ritual with six percussionists. The ritual consists of twenty-four exercises based on Stockhausen’s study of the Tibetan Book of the Dead. It was written as a chant protecting the soul of the recently departed (in this case Lucifer) by means of musical exercises regularly performed for 49 days after the death of the body, and to lead the recently deceased into the light of clear consciousness. For these exercises he permutated the Lucifer formula into a showstopper of extended flute techniques of deft virtuosity.
            And the piece may really be used by the living, and played for 49 days after the departure of a loved one to help assist them in their afterlife transition. 
The entire cycle is filled with this plenitude of subtle correspondences between music, science and various world cultures. These become the raw data for his applied musical calculus, dancing in an elaborate play upon all these correspondences, inside a defined system, to express in multiplexed forms that which is universal.
            After finishing the 29 hours of LICHT, a feat some of his critics never expected him to complete, Stockhausen began writing a series of chamber pieces called Klang, with the intent of writing one for each of the twenty-four hours of the day. Having conceived the musical forces of the days of the week, he was zooming in again to explore the musical forces behind each hour of the day. Formula composition gave him the tool he needed to explore these hours. He had written 21 of the pieces when the cycle was left unfinished by the composer’s unexpected death in 2007, when he voyaged forth into the greater harmonies of cosmic space and time.
Read the rest of the Radiophonic Laboratory series.

References:
Other Planets: The Complete Works of Karlheinz Stockhausen 1950–2007, by Robin Maconie, Rowman & Littlefield Publishers, Maryland, 2016.

Ed Chang's website in general has been super helpful in understanding the super-formula. It is a great journey through the Space of Stockhausen. 
http://stockhausenspace.blogspot.com/2014/08/a-brief-guide-to-licht-pt-1-drama-and.html
http://stockhausenspace.blogspot.com/2014/09/a-brief-guide-to-licht-pt-2-super.html

"Threats and Promises: Lucifer, Hell, and Stockhausen's Sunday from Light" by Robin Hartwell in Perspectives of New Music 50, nos. 1 & 2 (Winter–Summer): 393–424.

"Into the Middleground: Formula Syntax in Stockhausen's Licht" by Jerome Kohl in Perspectives of New Music 28, no. 2 (Summer): 262–91.


Cybernetic Systems

9/24/2020

Shannon wasn’t the only one looking at the way signals were transmitted. The same year he published his breakthrough paper, another mathematician published a book that would leave a lasting impression on a number of different fields, electronic music among them. The man was Norbert Wiener and his book was Cybernetics: Or Control and Communication in the Animal and the Machine. Wiener defined cybernetics as "the scientific study of control and communication in the animal and the machine".

Wiener was a child prodigy. Born to Polish and German Jewish immigrants, on his father's side Norbert was related to Maimonides, the famous rabbi, philosopher and physician from Al Andalus. The predisposition to intellectual greatness was hardwired into his system. Norbert’s father Leo was a teacher of Germanic and Slavic languages and an avid reader and book hound who put together an impressive personal library, which his son devoured. His father also had a gift for math and gave his son additional instruction in the subject.
At age 11 Norbert graduated from Ayer High School in Massachusetts and then began attending Tufts College, where he received a BA in mathematics at the age of 14. From there he went on to study zoology at Harvard before transferring to Cornell to pursue philosophy, where he graduated at the ripe old age of 17 in 1911, when his classmates from Ayer were probably just entering college, if they went at all. Then he went back to Harvard, where he wrote a dissertation on mathematical logic comparing the work of Ernst Schröder with that of Bertrand Russell and Alfred North Whitehead. His work showed that ordered pairs could be defined according to elementary set theory. His Ph.D. was awarded before he turned twenty. Later that same year he went to Cambridge and studied under Russell, as well as at the University of Göttingen, where he learned from Edmund Husserl.

After a brief period teaching philosophy at Harvard, Wiener eventually found a position at MIT that would become permanent.  In 1926, Wiener returned to Cambridge and Göttingen as a Guggenheim scholar, on a trip that would have important implications for his future work. He spent his time there investigating Brownian motion, the Fourier integral, Dirichlet's problem, harmonic analysis, and the Tauberian theorems.

Harmonic analysis and Brownian motion in particular would go on to have a key role in the development of cybernetics.
Harmonic analysis is a branch off the great tree of math that is concerned with analyzing and describing periodic and recurrent phenomena in nature, such as the many forms of waves: musical waves, tidal waves, radio waves, alternating current, the motion and vibration of machines. And it branched off the research of French mathematician Joseph Fourier (1768-1830). Fourier was interested in the conduction of heat and other thermal effects, a trail later followed by Nyquist in his own investigations of thermal noise. 

According to the Encyclopedia Britannica the motions of waves “can be measured at a number of successive values of the independent variable, usually the time, and these data or a curve plotted from them will represent a function of that independent variable. Generally, the mathematical expression for the function will be unknown. However, with the periodic functions found in nature, the function can be expressed as the sum of a number of sine and cosine terms.” The sum of these is known as a Fourier series. The determination of the coefficients of these terms became known as harmonic analysis.
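
To make the idea concrete, here is a minimal Python sketch, using the classical coefficients 4/(πk) for odd k, of how summing a handful of sine terms approximates a square wave. It is only an illustration of a Fourier series, not anything from Wiener's own work.

    import math

    def square_wave_fourier(t, n_terms=7):
        """Approximate a square wave by summing its first few Fourier sine terms.
        The square wave's series uses only odd harmonics, weighted by 4/(pi*k)."""
        total = 0.0
        for i in range(n_terms):
            k = 2 * i + 1                      # odd harmonics: 1, 3, 5, ...
            total += (4 / (math.pi * k)) * math.sin(k * t)
        return total

    # Sample one period and print a coarse picture of the approximation.
    for step in range(16):
        t = 2 * math.pi * step / 16
        print(f"t={t:5.2f}  f(t)={square_wave_fourier(t):+.3f}")

Adding more terms sharpens the corners of the wave; harmonic analysis runs this process in reverse, recovering the coefficients from the measured waveform.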

Brownian motion or movement relates to a variety of physical phenomena in which some quantity undergoes small, constant, but random fluctuations. When particles subject to Brownian motion are moving inside a given medium, and there is no preferred direction for these random oscillations, the particles will, over time, spread out evenly through the substance.

Brownian motion is a classic example of a stochastic process, and harmonic analysis provides tools for studying such processes. A stochastic process is, at its core, a process that involves the operation of chance: its values change in a random way over time. Markov chains are another important form of stochastic process that has been applied to music. Stochastic processes can also be used to study noise, and Wiener was a student of this mathematical noise.
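
Since Markov chains come up here, a toy Python sketch with an invented transition table shows the basic mechanism as it has been applied to melody: the next note is drawn at random, weighted according to the current note.

    import random

    # Hypothetical transition table: from each note, the possible next notes and their weights.
    transitions = {
        "C": [("C", 0.1), ("E", 0.5), ("G", 0.4)],
        "E": [("C", 0.3), ("E", 0.2), ("G", 0.5)],
        "G": [("C", 0.6), ("E", 0.3), ("G", 0.1)],
    }

    def next_note(current):
        """Pick the next note according to the weights for the current note."""
        notes, weights = zip(*transitions[current])
        return random.choices(notes, weights=weights, k=1)[0]

    def markov_melody(start="C", length=16):
        melody = [start]
        for _ in range(length - 1):
            melody.append(next_note(melody[-1]))
        return melody

    print(" ".join(markov_melody()))

Each run of the chain produces a different melody, but the statistics of the transition table give all of them a family resemblance.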

Amidst the conflicts of WWII Norbert was called upon to use his prodigious brain for solving technical problems associated with warfare. He attacked the problem of automatic aiming and firing of anti-aircraft guns. This required the development and further branching of even more specialized math. It also introduced statistical methods into the recondite area of control and communications engineering, which in turn led to his formulation of the cybernetics concept.

His concept of cybernetics was eerily close to Claude Shannon’s information theory. What they both had in common was knowledge of the influence of noise and the desire to communicate or find signals in, above, and around the noise. One of the ways Wiener figured out how to do this was through filtering. Enter the Wiener filter. It works by computing statistical estimates of unknown signals using a related signal as an input and filtering that to produce an estimated output. Say a signal has been obscured by the addition of noise. The Wiener filter removes the added noise from the signal to give an estimate of the original signal.
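
A full Wiener filter works on the statistics of the signals involved. The scalar version below, a Python sketch with invented numbers rather than Wiener's full derivation, shows the core idea: scale the noisy observation by the ratio of estimated signal power to total power.

    import math, random

    random.seed(1)

    # A clean "signal": a slow sine wave, obscured by additive noise.
    n = 200
    clean = [math.sin(2 * math.pi * k / 50) for k in range(n)]
    noise = [random.gauss(0, 0.5) for _ in range(n)]
    noisy = [c + w for c, w in zip(clean, noise)]

    # Estimate powers (here we cheat and use the known components;
    # in practice these come from measured or assumed statistics).
    signal_power = sum(c * c for c in clean) / n
    noise_power = sum(w * w for w in noise) / n

    # Scalar Wiener gain: S / (S + N). Applied sample by sample it shrinks
    # the noisy observation in proportion to how noisy the channel is.
    gain = signal_power / (signal_power + noise_power)
    estimate = [gain * x for x in noisy]

    err_noisy = sum((x - c) ** 2 for x, c in zip(noisy, clean)) / n
    err_est = sum((x - c) ** 2 for x, c in zip(estimate, clean)) / n
    print(f"gain={gain:.2f}  mse before={err_noisy:.3f}  after={err_est:.3f}")

The printed mean-squared error drops after filtering, which is the whole point: a statistical estimate of the original signal pulled out of the noise.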

Cybernetics is also related to systems theory, and studied in particular the idea of feedback, or a closed signaling loop. Wiener originally referred to the way information or signals affect relationships in a system as “circular causal”. Feedback occurs when some action within the system triggers a change in the environment. The environment in turn effects another change in the system when it feeds the now transformed signal back into the originating source. Wiener, through his study of zoology, saw that cybernetics was applicable to biological and social systems, as well as the mechanical ones his research had originally grown out of. Cognitive systems could also be understood in terms of these circular causal chains of action and reaction feeding back on themselves.

Cybernetics’ essential idea of feedback was also directly applicable to the new electronic musical systems defined by the advent of the microphone, amplifier, and speaker. When these devices are connected together in a circuit, audio feedback is one possible result of holding the mic close to the speaker. Everyone has experienced that unintentional noise when a PA is being tested. Musicians quickly adopted the idea of using intentional feedback, and distortion (noise on a signal), to give their recordings and live performances a new sound.

Cybernetics is not limited to mapping the flow of information, distorted or otherwise, in and out of systems. It also includes concepts of learning and adaptation, social control, connectivity and communication, efficiency, efficacy, and emergence.
​
The related fields of information theory, cybernetics and systems theory would have huge impacts on music and the arts, as the theories trickled down from places like Bell Labs, the Macy Conferences with their focus on communication across scientific disciplines, and the success of Wiener’s book outside of strictly scientific circles.
Picture
The word cybernetics sounds kind of cold and inhuman. It conjures up the chrome clad computerized villains made famous by Doctor Who, the cybermen who speak only in monotone and whose overriding program is to delete organic life. Yet the word cybernetics itself comes from the Greek kybernḗtēs, or "steersman, governor, pilot, or rudder.” Human systems require a guide, someone to steer them. Wiener had picked up the word from the French mathematician and physicist André-Marie Ampère, who coined the word "cybernétique" in an 1834 essay on science and civil government. Governments and other systems of human invention require steersmen and guides with a firm hand on the rudder to give direction and control the effects of feedback.
​
The creation of systems is a human trait, and their guidance, via our input, doesn’t have to be cold. It can be done with intuition, insight, and artistic flair. Writing on systems in the world of art for the 1968 Cybernetic Serendipity art and music show at the ICA gallery in London, Jasia Reichardt wrote, "The very notion of having a system in relation to making paintings is often anathema to those who value the mysterious and the intuitive, the free and the expressionistic, in art. Systems, nevertheless, dispense neither with intuition nor mystery. Intuition is instrumental in the design of the system and mystery always remains in the final result."
Picture
Picture
The Discreet Music of Brian Eno
​

Designing musical systems can result in extraordinary beauty. In the mid-1960s, while attending Ipswich Art School, Brian Eno had his first encounter with cybernetics. It would go on to have a lasting influence. Under the mentorship of Roy Ascott, who had developed the controversial “Groundcourse” curriculum adopted by a number of other art colleges, Eno absorbed Ascott’s philosophy of systems learning, making mind maps, and playing mental games.

Eno started thinking of the music studio and groups of musicians in terms of cybernetic systems. Making great musical compositions started with designing the parameters, limits, inputs and outputs that would give a composition its ultimate form. Creating these systems and letting them run was how many of his first ambient records, and some of the first ambient records anywhere, were made.

The liner notes for Eno’s 1975 album Discreet Music contain a block diagram of the system he created for the music. He had been given an album of 18th century harp music to listen to while lying in bed in the hospital, where he was recovering from injuries from a car accident. A friend who had been visiting put the record on for him before she left, but the volume was turned down too low. Outside it was raining and he listened to “these odd notes of the harp that were just loud enough to be heard above the rain.” The experience “presented what was for me a new way of hearing music—as part of the ambience of the environment just as the color of the light and the sound of the rain were parts of that ambience.”

Eno connected this experience to Erik Satie’s idea of “furniture music” that was intended to blend into the ambient atmosphere of the room, and not be something focused on directly. Furniture music could mix and combine with the sounds of forks, knives, tinkling glasses and conversation at a dinner.

After Eno’s listening experience in the hospital he set out to make his own ambient music, setting off a musical cascade and defining and kick-starting a genre that at the time of this writing is now forty-five years old.

In the liner notes to Discreet Music, Eno wrote these now famous lines, “Since I have always preferred making plans to executing them, I have gravitated towards situations and systems that, once set into operation, could create music with little or no intervention on my part. That is to say, I tend towards the roles of the planner and programmer, and then become an audience to the results.”

Eno had wanted to create a background drone for guitarist Robert Fripp to play along with. He was working with an EMS Synthi AKS with built-in memory and a tape delay system. He kept being interrupted in his musical work by knocks on the door and phone calls. He says, “I was answering the phone and adjusting all this stuff as it ran. I almost made that without listening to it. It was really automatic music.”

Discreet Music started with two melodic phrases of differing lengths played back from the digital recall of the synth. That signal was then run through a graphic equalizer to change its timbre. After the EQ the audio went into an echo unit, and the output of that was recorded to a tape machine. The tape then ran to the take-up reel of a second tape machine, whose output was fed back into the first machine, which recorded the overlapping signals and sounds. When Fripp came by the next day to have a listen, Eno accidentally played the recording back at half-speed. Eno says of the result, “it was probably one of the best things I’d ever done and I didn’t even realize I was doing it at the time.”
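
As a rough digital analogue of that tape loop, not Eno's actual rig, here is a Python sketch of a long delay line whose output is attenuated and fed back into its input; a few sparse input notes pile up into a slowly accumulating wash.

    # A toy digital version of the two-tape-machine loop: a long delay line
    # whose output is attenuated and fed back into its input.
    delay_samples = 40      # stands in for the seconds of tape between the machines
    feedback = 0.7          # how much of the delayed signal gets re-recorded
    length = 200

    buffer = [0.0] * length
    inputs = {0: 1.0, 15: 0.6, 55: 0.8}   # a few sparse "notes" entering the system

    for n in range(length):
        dry = inputs.get(n, 0.0)
        delayed = buffer[n - delay_samples] if n >= delay_samples else 0.0
        buffer[n] = dry + feedback * delayed   # the input plus an echo of what came before

    # Print the moments where sound is present, to show the echoes piling up.
    for n, v in enumerate(buffer):
        if abs(v) > 0.01:
            print(f"sample {n:3d}: level {v:.3f}")

Three input events produce a long trail of repetitions that slowly decay, which is the behavior the two tape machines gave Eno for free.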


Autonomous Dynamical Systems

Another example of musical systems in practice comes from the work of David Dunn. David is a composer, sound artist, bioacoustics researcher and an expert at making audio recordings of wildlife. A deep interest in acoustic ecology informs his work. Ecological thinking and systems thinking go hand in hand and this sensibility is present in many of David’s works.

His 2007 album Autonomous Dynamical Systems touches on ecology, fractals and chaos theory, graphic imagery to sound conversions, and feedback loops. The album consists of four compositions. Lorenz from 2005 is a collaboration with chaos scientist James Crutchfield. James has a long history of work in the areas of  nonlinear dynamics, solid-state physics, astrophysics, fluid mechanics, critical phenomena and phase transitions, chaos, and pattern formation, having published over 100 papers in his field of mathematics and physics.

The Lorenz attractor was first studied by meteorologist Edward Lorenz in 1963. He derived it from a simplified model of convection in the earth's atmosphere, and it is most frequently expressed as a set of three coupled non-linear differential equations. In popular culture the idea of the “butterfly effect” comes from the physical implications of the Lorenz attractor. In such a deterministic nonlinear system one small change, even the small disturbances in air made by the flight of a butterfly, can result in huge differences in the system at a later time. This shows that systems can be deterministic and unpredictable at the same time. When the Lorenz attractor is plotted out graphically it has two large interconnected oval shapes resembling a butterfly or a pair of wings.
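
Those three coupled equations are short enough to step through directly. The Python sketch below integrates them with a crude Euler step, using Lorenz's own standard parameters (sigma=10, rho=28, beta=8/3); it is a generic illustration, not the MODE software described below.

    # Euler integration of the Lorenz equations:
    #   dx/dt = sigma * (y - x)
    #   dy/dt = x * (rho - z) - y
    #   dz/dt = x * y - beta * z
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    dt = 0.01

    x, y, z = 1.0, 1.0, 1.0
    for step in range(2000):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        if step % 400 == 0:
            # These values could just as well be mapped to pitch or amplitude.
            print(f"step {step:4d}: x={x:7.2f} y={y:7.2f} z={z:7.2f}")

Start the integration from 1.001 instead of 1.0 and the trajectories soon diverge wildly, which is the butterfly effect in miniature.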

For the piece Lorenz, David Dunn used a piece of software written by Crutchfield called MODE (Multiple Ordinary Differential Equations) plugged into OSC (Open Sound Control), a networking protocol that allows synthesizers, computers, and other multimedia devices to communicate with one another. OSC is in turn fed into a sound synthesis program. The sound synthesis program is then fed back into OSC and again into MODE. The entire piece is a feedback loop originating from chaos-controlled sound. As such its structure embodies the very principles it seeks to express as music. Another piece on the album, Nine Strange Attractors from 2006, steps up the game even further in its creative use of mathematics to explore feedback loops.
​
Another piece uses feedback loops in a different way. Autonomous Systems: Red Rocks from 2003 used environmental field recordings fed into computer systems. Once the recordings are saved in memory, a chaos-generator program chooses from among the sounds in a non-linear fashion and plays them back, sometimes electronically transformed, other times not. The composition is done not by performing live, but by setting up and programming the system, then stepping away, sitting back, and listening to the results.
John Cage said, “My compositions arise by asking questions.” The music of systems proceeds from this same curious spirit. When designing new electronic works the composer must begin by asking questions of herself. Then systems can be designed to ask those questions in different ways and to find different answers.
Picture
Wobbly and his Smart Phone System

Wobbly, aka Jon Leidecker, is a solo artist, a member of Negativland, and, since the death of Don Joyce, host of the radio show Over the Edge. He has also made a very interesting album by working with systems.

Between 2015 and 2018 Wobbly worked on an album called Monitress, released in 2019. He created an innovative system leveraging musical pitch-tracking apps and synthesizers running on a group of mobile phones and other mobile devices. Each of the devices was sent an audio signal. This was picked up by the pitch-tracking app and converted to MIDI data used to drive the synth. The resulting sound was then fed into an analog mixer. Once the signal was going into the mixer it could be routed and fed back into another mobile device also running a pitch-tracking app and synth. The resulting effect is a cascade of sound between the devices.
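
The chain is easier to see in miniature. The Python sketch below is a simulation with invented functions, not the actual apps Wobbly used: a frequency passes through a chain of imperfect "pitch trackers," each quantizing what it hears to the nearest MIDI note and resynthesizing that pitch for the next device.

    import math, random

    random.seed(3)

    def freq_to_midi(freq):
        """Quantize a frequency to the nearest MIDI note number."""
        return round(69 + 12 * math.log2(freq / 440.0))

    def midi_to_freq(note):
        return 440.0 * 2 ** ((note - 69) / 12)

    def imperfect_tracker(freq):
        """A stand-in for a phone's pitch-tracking app: a little detection error,
        then quantization to the nearest note, then resynthesis at that pitch."""
        heard = freq * (1 + random.uniform(-0.03, 0.03))
        note = freq_to_midi(heard)
        return note, midi_to_freq(note)

    # Send one source tone through a cascade of four "devices".
    freq = 327.0   # an arbitrary source frequency
    print(f"source: {freq:.1f} Hz")
    for device in range(4):
        note, freq = imperfect_tracker(freq)
        print(f"device {device + 1}: MIDI note {note} -> resynthesized {freq:.1f} Hz")

Even in this toy version the occasional mis-heard note propagates down the chain, which is the kind of fascinating error Leidecker describes below.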

​As Jon writes in the liner notes for the album, “ Feedback loops similar to acoustic or electrical feedback occur when you close the circle. The pitch-tracking apps are prone to errors, especially when presented with complex multiphonics or polyphonies;  they get quite a few notes fascinatingly wrong.  But more striking is the audible reality of their listening to each other.  Unison lines are an elemental sign of musical intelligence; we are entrained to emotional reactions when hearing multiple voices attempting the same melody.  These machines may not meet our current criterion for consciousness, but every audience I’ve played this piece in front of quickly realizes they're not listening to a solo…

The technology used to create these sounds existed before the mobiles, but this music would not have been made on earlier equipment -- it's a result of the relationship developed with a machine that is always present, and always listening. This was the project I dug into as we woke up to the true owners of these tools, a frame to make the relationship between ourselves and our machines audible while we think about the necessary steps to take next.”
The textures on this album are sublime, the kind of thing that could only be heard through this cascade of forces, each triggered by the preceding and affecting the whole in tandem. Wobbly did do post-production editing of this work, but the initial results he captured once the process was set in motion are where the real magic lies. This is the kind of music that can’t be predicted. It couldn’t be written by a composer note for note. Rather the job of the composer is to design systems capable of eliciting beauty.
​
The three examples of systems music explored here are only a few of many. Musical systems are a large category within the new common practice generally. Other ways of thinking about them include modular setups, various configurations of test equipment, systems of feedback in the way guitar pedals are arranged, and more. I don’t know if Norbert Wiener ever thought of music as one of the places where cybernetics would take flight. To hear the music made with its principles is an artistic way of exploring the rich ecology of sound.
Read the rest of the Radiophonic Laboratory  series. 

References: 

The Information: a history, a theory, a flood by James Gleick, Pantheon, 2011

A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy Soni and Rob Goodman, Simon & Schuster, 2018


Encyclopedia Britannica: https://www.britannica.com/science/harmonic-analysis

Brian Eno, Discreet Music, Obscure Records, 1975

David Dunn,  Autonomous and Dynamical Systems, New World Records 2007

Wobbly, Monitress: https://hausumountain.bandcamp.com/album/monitress



0 Comments

Information Theory: When Data becomes Dada

9/1/2020

2 Comments

 
Picture
From the ice cold farms and fields of Michigan to the halls of MIT and then onwards to Bell Labs at Murray Hill, Claude Shannon was a mathematical maverick and inveterate tinkerer. In the 1920s, in those places where the phone company had not deigned to bring their network, around three million farmers built their own by connecting telegraph keys to the barbed wire fences that stretched between properties. As a young boy Shannon rigged up one of these “farm networks” so he and a friend who lived half a mile away could talk to each other at night in Morse code. He was also the local kid people in the town would bring their radios to when they needed repair, and he got them to work. He had the knack.
 
He also had an aptitude for the more abstract side of math, and his mind could handle complex equations with ease. At the age of seventeen he was already in college at the University of Michigan and had published his first work in an academic journal, a solution to a math problem presented in the pages of American Mathematical Monthly. He did a double major in school and graduated with degrees in electrical engineering and mathematics, then headed off to MIT for his master’s.
​
While there he came under the wing of Vannevar Bush. Vannevar had followed in the footsteps of Lord Kelvin, who had created one of the world’s first analog computers, the harmonic analyzer, used to measure the ebb and flow of the tides. Vannevar’s differential analyzer was a huge electromechanical computer the size of a room. It solved differential equations by integration, using wheel-and-disc mechanisms to perform the integration.
​
At school he was also introduced to the work of mathematician George Boole, whose 1854 book on algebraic logic The Laws of Thought laid down some of the essential foundations for the creation of computers. George Boole had in turn taken up the system of logic developed by Gottfried Wilhelm Leibniz. Might Boole have also been familiar with Leibniz’s book De Arte Combinatoria? In this book Leibniz proposed an alphabet of human thought, and was himself inspired by the Ars Magna of Ramon Lull. Leibniz wanted to take the Ars Magna, or “ultimate general art” developed by Lull as a debating tool that helped speakers combine ideas through a compilation of lists, and bring it closer to mathematics and turn it into a kind of calculus. Shannon became the inheritor of these strands of thought, through their development in the mathematics and formal logic that became Boolean algebra.  

Between working with Bush’s differential analyzer and his study of Boolean algebra, Shannon was able to design switching circuits. This became the subject of his 1937 master’s thesis, A Symbolic Analysis of Relay and Switching Circuits.
Picture
Shannon was able to prove his switching circuits could be used to simplify the complex and baroque system of electromechanical relays used in AT&T’s routing switches. Then he expanded his concept and showed that his circuits could solve any Boolean algebra problem. He finalized the work with a series of circuit diagrams.

In writing his paper Shannon took George Boole’s algebraic insights and made them practical. Electrical switches could now implement logic. It was a watershed moment that established the integral concept behind all electronic digital computers. Digital circuit design was born.
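
To see the idea in miniature, here is a toy Python sketch, my own illustration rather than Shannon's notation: switches wired in series behave like AND, switches wired in parallel behave like OR, and from those any Boolean expression can be built.

    # Relays in series only pass current if both are closed: AND.
    def series(a, b):
        return a and b

    # Relays in parallel pass current if either is closed: OR.
    def parallel(a, b):
        return a or b

    # A normally-closed relay inverts its control signal: NOT.
    def normally_closed(a):
        return not a

    # Any Boolean function can be built from these. For example, XOR:
    def xor(a, b):
        return parallel(series(a, normally_closed(b)), series(normally_closed(a), b))

    for a in (False, True):
        for b in (False, True):
            print(f"a={a!s:5} b={b!s:5} xor={xor(a, b)}")

Swap "closed switch" for True and "current flows" for the returned value and you have the gist of the thesis: wiring diagrams as algebra, algebra as wiring diagrams.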

Next he had to get his PhD. It took him three more years, and his subject matter showed the first signs of multidisciplinary inclination that would later become a dominant feature of information theory. Vannevar Bush compelled him to go to Cold Spring Harbor Laboratory to work on his dissertation in the field of genetics. For Vannevar the logic was that if Shannon’s algebra could work on electrical relays it might also prove to be of value in the study of Mendelian heredity. His research in this area resulted in his work An Algebra for Theoretical Genetics, for which he received his PhD in 1940.

The work proved to be too abstract to be useful and during his time at Cold Spring Harbor he was often distracted. In a letter to his mentor Vannevar he wrote, “I’ve been working on three different ideas simultaneously, and strangely enough it seems a more productive method than sticking to one problem… Off and on I have been working on an analysis of some of the fundamental properties of general systems for the transmission of intelligence, including telephony, radio, television, telegraphy, etc…”

With a doctorate under his belt Shannon went on to the Institute for Advanced Study in Princeton, New Jersey, where his mind was able to wander across disciplines and where he rubbed elbows with other great minds, including, on occasion, Albert Einstein and Kurt Gödel. He discussed science, math and engineering with Hermann Weyl and John von Neumann. All of these encounters fed his mind.
​
It wasn’t long before Shannon went elsewhere in New Jersey, to Bell Labs. There he worked alongside other formidable minds such as Thornton Fry and, during the war, Alan Turing. His prodigious talents were also being put to work for the war effort.
Picture
 It started with a study of noise. During WWII Shannon had worked on the SIGSALY system that was used for encrypting voice conversations between Franklin D. Roosevelt and Winston Churchill. It worked by sampling the voice signal fifty times a second, digitizing it, and then masking it with a random key that sounded like the circuit noise so familiar to electrical engineers.
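
A toy Python sketch of the general principle, nothing like the real SIGSALY hardware: quantize each sample to a small number of levels, then add a shared random key modulo that number, the digital cousin of a one-time pad. Whoever holds the same key subtracts it back out; without the key the stream is indistinguishable from noise.

    import random

    LEVELS = 6   # a coarse quantization, in the spirit of a vocoder channel

    def mask(samples, key):
        """Add the random key value to each sample, modulo the number of levels."""
        return [(s + k) % LEVELS for s, k in zip(samples, key)]

    def unmask(masked, key):
        """Subtract the same key back out, modulo the number of levels."""
        return [(m - k) % LEVELS for m, k in zip(masked, key)]

    samples = [0, 2, 5, 3, 3, 1, 4, 2]                  # a quantized "voice" signal
    key = [random.randrange(LEVELS) for _ in samples]   # shared one-time random key

    masked = mask(samples, key)
    print("sent over the wire:", masked)                # looks like noise
    print("recovered with key:", unmask(masked, key))   # matches the original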

Shannon hadn’t designed the system, but he had been tasked with trying to break it, like a hacker, to see what its weak spots were, to find out if it was an impenetrable fortress that could withstand the attempts of an enemy assault.
​
Alan Turing was also working at Bell Labs on SIGSALY. The British had sent him over to also make sure the system was secure. If Churchill was to be communicating on it, it needed to be uncrackable. During the war effort Turing got to know Claude. The two weren’t allowed to talk about their top secret projects, cryptography, or anything related to their efforts against the Axis powers but they had plenty of other stuff to talk about, and they explored their shared passions, namely, math and the idea that machines might one day be able to learn and think.

Are all numbers computable? This was a question Turing asked in his famous 1937 paper On Computable Numbers. He had shown the paper to Shannon. In it Turing defined calculation as a mechanical procedure or algorithm.

This paper got the pistons in Shannon’s mind firing. Alan had said, “It is always possible to use sequences of symbols in the place of single symbols.” Shannon was already thinking of the way information gets transmitted from one place to the next. Turing used statistical analysis as part of his arsenal when breaking the Enigma ciphers. Information theory in turn ended up being based on statistics and probability theory.

The meeting of these two preeminent minds was just one catalyst for the creation of the large field and sandbox of information theory. Important legwork had already been done by other investigators who had made brief excursions into the territory later mapped out by Shannon.

Telecommunications in general already contained many ideas that would later become part of the theory's core. Starting with telegraphy and Morse code in the 1830s, common letters were given the shortest codes, as in E, a single dot. Letters used less often got longer codes, such as B, a dash and three dots. The whole idea of lossless data compression is embedded as a seed pattern within this system of encoding information. 
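
The principle can be put in numbers with a small Python sketch; the letter frequencies below are rough illustrative figures, not a measured corpus. Giving short codes to common letters lowers the average number of symbols per letter compared with a fixed-length code.

    # Rough illustrative letter frequencies and Morse-style code lengths
    # (dots and dashes counted as one symbol each).
    letters = {
        # letter: (approximate relative frequency, code length in dots/dashes)
        "E": (0.13, 1),   # "."
        "T": (0.09, 1),   # "-"
        "A": (0.08, 2),   # ".-"
        "O": (0.08, 3),   # "---"
        "B": (0.015, 4),  # "-..."
        "Q": (0.001, 4),  # "--.-"
    }

    total_freq = sum(f for f, _ in letters.values())
    avg_variable = sum(f * length for f, length in letters.values()) / total_freq

    # A fixed-length code over these six letters would need 3 symbols each (2**3 >= 6).
    avg_fixed = 3

    print(f"average symbols per letter, Morse-style: {avg_variable:.2f}")
    print(f"average symbols per letter, fixed-length: {avg_fixed}")

The variable-length scheme comes out well under the fixed-length one, which is the compression idea Shannon would later make rigorous.
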
Picture
In 1924 Harry Nyquist published the exciting Certain Factors Affecting Telegraph Speed in the Bell System Technical Journal. Nyquist’s research was focused on increasing the speed of a telegraph circuit. One of the first things an engineer runs into when working on this problem is how to transmit the maximum amount of intelligence on a given range of frequencies without causing interference in the circuit or others it might be connected to. In other words, how do you increase the speed and amount of intelligence without adding distortion or noise, or creating spurious signals?

In 1928, Ralph Hartley, also at Bell Labs, wrote his paper Transmission of Information. He made it explicit that information was a measurable quantity. Information could only reflect the ability of the receiver to distinguish that one sequence of symbols had been intended by the sender rather than any other, that the letter A means A and not E.

Jump forward another decade to the invention of the vocoder. It was designed to use less bandwidth, compressing the voice of the speaker into less space. Now that same technology is used in cellphones as codecs that compress the voice so more lines of communication can fit on the phone companies' allocated frequencies.

WWII had a way of producing scientific side effects, discoveries that would break on through to affect civilian life after the war. While Shannon worked on SIGSALY and other cryptographic work he continued to tinker on other projects. The paper that became information theory was one of the things he tinkered with, and it had profound side effects. Twenty years after Hartley addressed the way information is transmitted, Shannon stated it this way, "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."

In addition to the idea of clear communication across a channel, information theory also brought the following concepts into play:

-The Bit, or binary digit. One bit is the information entropy of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.

-The Shannon Limit: A formula for channel capacity. This is the speed limit for a given communication channel (a short numerical sketch follows this list).
​
-Within that limit there must always be techniques for error correction that can overcome the noise level on a given channel. A transmitter may have to send more bits to a receiver at a slower rate but eventually the message will get there.
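
Here is a minimal numerical sketch of the Shannon–Hartley form of that limit, C = B·log2(1 + S/N); the 3 kHz and 30 dB figures are rough stand-ins for an ordinary analog phone line, not measurements.

    import math

    def channel_capacity(bandwidth_hz, snr_db):
        """Shannon-Hartley capacity in bits per second: C = B * log2(1 + S/N)."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Illustrative figures in the ballpark of an analog phone line.
    bandwidth = 3000       # hertz
    snr = 30               # decibels

    c = channel_capacity(bandwidth, snr)
    print(f"capacity ~= {c / 1000:.1f} kbit/s")

    # Raising the signal-to-noise ratio helps, but only logarithmically.
    for snr_db in (10, 20, 30, 40):
        print(f"SNR {snr_db:2d} dB -> {channel_capacity(bandwidth, snr_db) / 1000:6.1f} kbit/s")

With these numbers the capacity works out to roughly 30 kbit/s, which is why dial-up modems topped out where they did: no amount of cleverness gets past the limit, only more bandwidth or less noise.
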
Picture


His theory was a strange attractor in a chaotic system of noisy information. Noise itself tends to bring diverse disciplinary approaches together, interfering in their constitution and their dynamics. Information theory, in transmitting its own intelligence, has in its own way, interfered with other circuits of knowledge it has come in contact with.

A few years later psychologist and computer scientist J.C. R. Licklider said, “It is probably dangerous to use this theory of information in fields for which it was not designed, but I think the danger will not keep people from using it.”

Information theory encompasses every other field it can get its hands on. It’s like a black hole, and everything in its gravitational path gets sucked in. Formed at the spoked crossroads of cryptography, mathematics, statistics, computer science, thermal physics, neurobiology, information engineering, and electrical engineering it has been applied to even more fields of study and practice: statistical inference, natural language processing, the evolution and function of molecular codes (bioinformatics), model selection in statistics, quantum computing, linguistics, plagiarism detection. It is the source code behind pattern recognition and anomaly detection, two human skills in great demand in the 21st century.
 
I wonder if Shannon knew when he wrote ‘A Mathematical Theory of Communication’ for the 1948 issue of the Bell System Technical Journal that his theory would go on to unify, fragment, and spin off into multiple disciplines and fields of human endeavor, music just one among a plethora.

Yet music is a form of information. It is always in formation. And information can be sonified and used to make music. Raw data becomes audio dada. Music is communication, and one way of listening to it is as a transmission of information. The principles Shannon elucidated are a form of noise in the systems of world knowledge, and highlight one way of connecting different fields of study together. As information theory exploded it was quickly picked up as a tool by the more adventurous music composers.

Information theory could be at the heart of making the fictional Glass Bead Game of Hermann Hesse a reality. Hesse also dropped several hints and clues in his work that connected it with the same thinkers whose work served as a link to Boolean algebra, namely Athanasius Kircher, Lull and Leibniz, who were all practitioners and advocates of the mnemonic and combinatorial arts. Like its predecessors, Information Theory is well suited to connecting the spaces between different fields. In Hesse’s masterpiece the game was created by a musician who could “represent with beads musical quotations or invented themes, could alter, transpose, and develop them, change them and set them in counterpoint to one another.” After some time passed the game was taken up by mathematicians. “…the Game was so far developed it was capable of expressing mathematical processes by special symbols and abbreviations. The players, mutually elaborating these processes, threw these abstract formulas at one another, displaying the sequences and possibilities of their science.”

Hesse goes on to explain, “At various times the Game was taken up and imitated by nearly all the scientific and scholarly disciplines, that is, adapted to the special fields. There is documented evidence for its application to the fields of classical philology and logic. The analytical study had led to the reduction of musical events to physical and mathematical formulas. Soon after philology borrowed this method and began to measure linguistic configurations as physics measured processes in nature. The visual arts soon followed suit, architecture having already led the way in establishing the links between visual art and mathematics. Thereafter more and more new relations, analogies, and correspondences were discovered among the abstract formulas obtained this way.”

In the next sections I will explore the way information theory was used and applied in the music of Karlheinz Stockhausen.

Read the rest of the Radiophonic Laboratory series.

REFERENCES:
A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy Soni and Rob Goodman, Simon & Schuster, 2018

The Information: a history, a theory, a flood by James Gleick, Pantheon, 2011

The Glass Bead Game by Hermann Hesse, translated by Clara and Richard Winston, Holt, Rinehart and Winston, 1990

Information Theory and Music by Joel Cohen, Behavioral Science 7, no. 2 (April 1962)

​Information Theory and the Digital Age by Aftab, Cheung, Kim, Thakkar, Yeddanapudi

​Logic and the art of memory: the quest for a universal language, by Paolo Rossi, The Athlone Press, University of Chicago, 2000.



Picture
2 Comments

GAMES OF DICE AND GAMES OF GLASS

8/31/2020

0 Comments

 
“There is more in man and in music than in mathematics, but music includes all that is in mathematics.”—Peter Hoffman
Picture
​Infotainment is usually thought of as light entertainment peppered with superficial “facts” and forgettable news. Yet another kind of infotainment exists, a musical kind that is based on mathematical algorithms. It is true entertainment that is filled with true information and though it is mathematically modeled none of it is fake.

In the twentieth century interest in the multidisciplinary fields of Information Theory and Cybernetics led to dizzying bursts of creativity when their ideas were applied to making new music. These disciplines applied rigorous math to the study of communication systems and how a signal transmitted from one person can cut through the noise of other spurious signals to be received by another person. They also made explicit the role of feedback inside of a system, how signals can amplify themselves and trigger new signals. All of this was studied with complex equations and formulas.

Yet there is nothing new about the relationship between music and math.    

Algorithmic music has been made for centuries. It can be traced all the way back to Pythagoras, who thought of music and math as inseparable. If music can be formalized in terms of numbers, music can also be formalized as information or data. The “data” the ancients used to drive their compositions was the movement of the stars. Ptolemy is known to us most for his geocentric view of the cosmos and the ordered spheres the celestial bodies traveled on. Besides being an astronomer Ptolemy was also a systematic musical theorist. He believed that math was the basis for musical intervals and he saw those same intervals at play in the spacing of the heavenly bodies, each planet and body corresponding to certain modes and notes.

Ptolemy was just one of many who believed in the reality of the music of the spheres. Out of these ancient Greek investigations into the nature of music and the cosmos came the first musical systems. The musician who used them was thus a mediator between the cosmic forces of the heavens above and the life of humanity here below. 

Picture
Western music went through myriad changes across the intervening centuries after Ptolemy. World powers rose and fell, new religions came into being. Out of the mystical monophonic plainchant uttered by Christian monks in candlelit monasteries polyphony arose, and it called for new rules and laws to govern how the multiple voices were to sing together. This was called “canonic” composition. A composer in this era (the 15th century) would write a line for a single voice. The canonic rule gave the additional singers and voices the necessary instruction. For instance, one rule might be for a second voice to begin singing the melody started by the first voice after a set amount of time. Other rules would denote inversions, retrograde movement, or other practices as applied to the music.

From this basis the rules, voices, and number of instruments were enlarged through the Renaissance until the era of “Common Practice”, roughly 1650 to 1900. This period encompassed baroque music, and the classical, romantic and impressionist movements. The 20th and 21st centuries are now giving birth to what Alvin Curran has called the New Common Practice.

In the Common Practice Era tonal harmony and counterpoint reigned supreme, and a suite of rhythmic and durational patterns gave form to the music. These were the “algorithmic” sandboxes composers could play in.

The New Common Practice, according to Curran encompasses, “the direct unmediated embracing of sound, all and any sound, as well as the connecting links between sounds, regardless of their origins, histories or specific meanings; by extension, it is the self guided compositional structuring of any number of sound objects of whatever kind sequentially and/or simultaneously in time and in space with any available means.” I’ve begun to think of this New Common Practice as embracing the entire gamut of 20th and 21st century musical practices:  serialism, atonality, musique concrete, electronics, solo and collective improvisation, text pieces, and the rest of it.

One vital facet of the New Common Practice is chance operations, or the use of randomizing procedures to create compositions. Chance operations have a direct relation to information theory, but this approach can already be seen making cultural inroads in the 18th century, when games of chance had a brief period of popularity among composers and the musically and mathematically literate. These are a direct precursor to the deeper algorithmic musical investigations that began to flourish in the 20th century.
Picture
Much of this original algorithmic music work was done the old school way, with pencil, sheets of paper, and tables of numbers. This was the way composers plotted voice-leading in Western counterpoint. Chance operations have also been used as one way of making algorithmic music, such as the Musikalisches Würfelspiel or musical dice game, a system that used dice to randomly generate music from tables of pre-composed options. These games were quite popular throughout Western Europe in the 18th century and a number of different versions were devised. Some didn’t use dice but just worked on the basis of choosing random numbers.
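
The mechanics of a Würfelspiel are simple enough to sketch in a few lines of Python, here with an invented table of measure numbers standing in for the published tables: two dice are thrown for each bar, and the total picks a pre-composed measure from that bar's column.

    import random

    # A made-up measure table: for each of 8 bars, the dice total (2-12)
    # selects one of the pre-composed measures, here just labeled by number.
    random.seed(7)
    table = {
        bar: {total: random.randint(1, 176) for total in range(2, 13)}
        for bar in range(1, 9)
    }

    def roll_minuet():
        """Throw two dice for every bar and read off the chosen measures."""
        measures = []
        for bar in range(1, 9):
            total = random.randint(1, 6) + random.randint(1, 6)
            measures.append(table[bar][total])
        return measures

    print("measure numbers for this minuet:", roll_minuet())

The composer's labor went into the table of measures, all written so that any combination fits the harmonic scheme; the dice only decide which path through the table gets heard.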

 In his paper on the subject Stephen Hedges wrote how the middle class in Western Europe were at the time enamored with mathematics, a pursuit as much at home in the parlors of the people as in the classroom of professors. "In this atmosphere of investigation and cataloguing, a systematic device that would seem to make it possible for anyone to write music was practically guaranteed popularity.”

The earliest known example was created by Johann Philipp Kirnberger with his "The Ever-Ready Minuet and Polonaise Composer" in 1757. C. P. E. Bach came out with his own musical dice game, "A method for making six bars of double counterpoint at the octave without knowing the rules," a year later in 1758. In 1780 Maximilian Stadler published "A table for composing minuets and trios to infinity, by playing with two dice". Mozart was even thought to have gotten in on the dice game in 1792, when an unattributed version made an appearance from his music publisher a year after the composer’s death. This has not been authenticated to be by the maestro’s hand, but as with all games of possibility, there is a chance.
These games may have been one of the many inspirations behind The Glass Bead Game by Hermann Hesse. This novel was one of the primary literary inspirations and touchstones for the young Karlheinz Stockhausen. The Glass Bead Game portrays a far-future culture devoted to a mystical understanding of music. The game was at the center of the culture of Castalia, that fictional province or state devoted to the pursuit of pure knowledge.

As Robin Maconie put it the Glass Bead Game itself appears to be “an elusive amalgam of plainchant, rosary, abacus, staff notation, medieval disputation, astronomy, chess, and a vague premonition of computer machine code… In terms suggesting more than a passing acquaintance with Alan Turing’s 1936 paper ‘On Computable Numbers’, the author described a game played in England and Germany, invented at the Musical Academy of Cologne, representing the quintessence of intellectuality and art, and also known as ‘Magic Theater’.”

Hesse wrote his book between 1931 and 1943. The interdisciplinary game at the heart of the book prefigures Claude Shannon’s explosive Information Theory which was established in his 1948 paper A Mathematical Theory of Communication. His paper in turn bears a debt to Alan Turing, whom Shannon met in 1942. Norbert Wiener also published his work on Cybernetics the same year as Shannon. All of these ideas were bubbling up together out of the minds of the leading intellectuals of the day. Ideas about computable numbers, the transmission of information, communication, and thinking in systems, all of which would give artists practical tools for connecting one field to another as Hesse showed was possible in the fictional world of Castalia.
​
Robin Maconie again had the insight to connect the Glass Bead Game with the way Alan Turing visualized “a universal computing machine as an endless tape on which calculations were expressed as a sequence of filled or vacant spaces, not unlike beads on a string”.
As the Common Practice era of Western music came to an end at the close of the 19th century, the mathematically inclined practice of serialism came into its own, and as the decades wore on games of chance made a resurgence, defining much of the music of the 20th century. With the advent of computers the paper and pencil methods have taken a temporary backseat in favor of methods that introduce programmed chance operations.
Composers like John Cage took to the I Ching with as much tenacity as the character Elder Brother did in Hesse’s book. Karlheinz Stockhausen meanwhile used his music as a means to make connections between myriad subjects and to create his own unique ‘Magic Theater’. Cybernetics and Information Theory each contributed to the thinking of these and other composers. 
Picture
REFERENCES:
Dice Music in the Eighteenth Century by Stephen Hedges, Music and Letters 59: 180–87, pp. 184–185.

Conceptualizing music: cognitive structure, theory and analysis, by Lawrence M. Zbikowski, Oxford, 2002

The New Common Practice by Alvin Curran
http://www.alvincurran.com/writings/common.html

Other planets: the complete works of Karlheinz Stockhausen 1950–2007 by Robin Maconie, Rowman & Littlefield Publishers, 2016

Note:

A set of musician's dice has been made that offers up numerous possibilities for the practicing musician. Using random processes doesn't have to be just for avant-garde composers anymore!

Musicians Dice: 
"The Musician’s Dice are patented, glossy black 12-sided dice, engraved in silver with the chromatic scale. They can be used in any number of ways – they bring the element of chance into the musical process. They're great for composing Aleatory and 12 tone-music, and as a basis for improvisation – they’re really fun in a jam session. They also make an effective study tool: they can be used as “musical flash cards” when learning harmony, and their randomness makes for fresh and challenging exercise in sight-singing and ear training. Plus, they look really cool on the coffee table, and give you a chance to throw around words like "aleatory.""

Below two musicians play around with using these dice.
Read the rest of the Radiophonic Laboratory series.
0 Comments

The Bell Sound 2: Taking it to the Max

7/29/2020

0 Comments

 
Picture
At Bell Labs Max Mathews was the granddaddy of all its music makers. If you use a computer to make or record music, he is your granddaddy too. In 1957 Max wrote Music I, a program for synthesizing music on a digital computer. It was a landmark demonstration of the ability to write code commanding a machine to synthesize music. Computers can do things and play things that humans alone cannot. Music I opened up a world of new timbral and acoustic possibilities. This was a perfect line of inquiry for the director of Bell Laboratories' Behavioral and Acoustic Research Center, where Mathews explored a spectrum of ideas and technologies between 1955 and 1987. Fresh out of MIT, where he received a Sc.D. in electrical engineering, Mathews was ready to get to work, and Music I was only the beginning of a long creative push in technology and the arts.
Max’s corner of the sprawling laboratory in Murray Hill, New Jersey carried out research in speech communication, speech synthesis, human learning and memory, programmed instruction, the analysis of subjective opinions, physical acoustics, industrial robotics and music.

Max followed the Music I program with II, III, IV and V, each iteration taking its capabilities further and widening its parameters. These programs carried him through a decade of work and achievement. As noted in the chapter on the Synthesis of Speech, Max had created the musical accompaniment to "Daisy Bell (A Bicycle Built for Two)," later made famous by the fictional computer HAL in Stanley Kubrick's 2001: A Space Odyssey. 
In 1970 he began working with Richard Moore to create the GROOVE system. It was intended to be a “musician-friendly” computer environment. The earlier programs broke incredible new ground, but using them leaned more toward those who could program computers and write code in their esoteric languages than toward the average musician or composer of the time. GROOVE was the next step in bringing the technology to its potential users. It was a hybrid digital-analog system whose name stood for Generating Realtime Operations On Voltage-controlled Equipment.

Max notes, “Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines–they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
​
But Chowning’s FM synthesis had not yet made its way into hardware at the time GROOVE was being created. It was still the 70s, and affordable computers and synthesizers had yet to make it into homes outside those of the most devoted hobbyists. GROOVE was a first step toward making computer music in real time. The setup included an analog synth with a computer and monitor. The computer’s memory made it appealing to musicians, who could store their manipulations of the interface for later recall. It was a clever workaround for the limitations of each technology. The computer was used for its ability to store the musical parameters while the synth was used to create the timbres and textures without relying on digital programming. This setup allowed creators to play with the system and fine-tune what they wanted it to do, for later re-creation.
Picture
Bell Labs had acquired a Honeywell DDP-224 computer from MIT to use specifically for sound research. This is what GROOVE was built on. The DDP-224 was a 24-bit transistor machine that used magnetic core memory to store data and program instructions. Its disk storage also meant that libraries of programming routines could be written, allowing users to create customized logic patterns. A composition could be tweaked, adjusted and mixed in real time on the knobs, controls, and keys. In this manner a piece could be reviewed as a whole or in sections and then replayed from the stored data.
When the system was first demonstrated in Stockholm at the 1970 conference on Music and Technology organized by UNESCO, music by Bartok and Bach was played. A few years later Laurie Spiegel would grasp the unique compositional possibilities of the system and take it to the max.

In the meantime Max himself was a guy in demand. IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in France brought him on board as a scientific advisor as they built their state of the art sound laboratory and studios between 1974 and 1980.
​
In 1987 Max left his position at Bell Labs to become a Professor of Music (Research) at Stanford University.  There he continued to work on musical software and hardware, with a focus on using the technology in a live setting. “Starting with the GROOVE program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the Radio-Baton, plus a program, the Conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Picture
Max/MSP
Today the spirit of the MUSIC software Max wrote through many versions lives on in the software suite Max/MSP. Named in honor of Max Mathews, the software is a powerful visual programming language that has grown out of its musical core into a tool for multimedia performance. The program has been alive, well and growing for more than thirty years and has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The software is designed and maintained by the company Cycling ’74.

Building off the gains in musical software developed by Mathews, Miller Smith Puckette (MSP) started to work on a program originally called The Patcher at IRCAM in 1985. This first version for Macintosh had a graphical interface that allowed users to create interactive scores. It wasn’t yet powerful enough to do real time synthesis. Instead it used MIDI and similar protocols to send commands to external sound hardware.

Four years later Max/FTS (Faster Than Sound) was developed at IRCAM. This version could be ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT computer system. This time around it could do real time synthesis using an internal hardware digital signal processor (DSP) making it a forerunner to the MSP extensions that would later be added to Max. 1989 was also the year the software was licensed to Opcode who promptly launched a commercial version at the beginning of the next decade.

Opcode held onto the program until 1997. During those years a talented console jockey named David Zicarelli further extended and developed the promise of Max. Yet Opcode wanted to end their run with the software. Zicarelli knew it had even further potential. So he acquired the rights and started his own company, Cycling ’74. Zicarelli’s timing proved to be fortuitous, as Gibson Guitar ended up buying Opcode and then, after owning it for a year, shutting it down. Such is the fabulous world of silicon corporate buyouts.

Miller Smith Puckette had in the meantime released the independent and open-source composition tool Pure Data (Pd). It was a fully redesigned tool that still fell within the same tradition as his earlier program for IRCAM. Zicarelli, sensing that a fruitful fusion could be made manifest, released Max/MSP in 1997, the MSP portion being derived from Puckette’s work on Pure Data. The two have been inseparable ever since.
The achievement meant that Max was now capable of real time manipulation of digital audio signals sans dedicated DSP hardware. The reworked version of the program was also something that could work on a home computer or laptop. Now composers could use this powerful tool to work in their home studios. The musical composition software that had begun on extensive and expensive mainframes was now available to those who were willing to pay the entry fee. You didn’t need the cultural connections it took to work at places like Bell Labs or IRCAM. And if you had a computer but couldn’t afford the commercial Max/MSP you could still download Pd for free. The same is true today. 

 Extension packs were now being written by other companies, contributing to the ecology around Max. In 1999 the Netochka Nezvanova collective released a suite of externals that added extensive real-time video control to Max. This made the program a great resource for multimedia artists. Various other groups and companies continued to tinker and add things on.

It got to the point where Max Mathews himself, well into his golden years, was learning how to use the program named after him. Mathews received many accolades and appointments for his work. He was a member of the IEEE, the Audio Engineering Society, the Acoustical Society of America, the National Academy of Sciences, the National Academy of Engineering and a fellow of the American Academy of Arts and Sciences. He held the Silver Medal in Musical Acoustics from the Acoustical Society of America, and the Chevalier de l'ordre des Arts et Lettres, République Française.

Mathews died of complications from pneumonia on April 21, 2011, in San Francisco. He was 84. He was survived by his wife, Marjorie, his three sons and six grandchildren.
 
References:
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University
https://en.wikipedia.org/wiki/DDP-24
https://en.wikipedia.org/wiki/Max_(software)

Read the rest of the Radiophonic Laboratory series.
0 Comments

The Bell Sound: from ALICE to AMY

7/7/2020

0 Comments

 
Picture
Just as the folks inside the Sound-House of the BBC’s Radiophonic Workshop continued to refine their approach and techniques to electronic music, another older sound house back across the pond in America continued to research new “means to convey sounds in trunks and pipes, in strange lines and distances”. Where the BBC Radiophonic Workshop used budget friendly musique concrete techniques to create their otherworldly incidental music, the pure research conducted at Bell Laboratories was widely diffused and the electronic music systems that arose out of those investigations were incidental and secondary byproducts. The voder and vocoder were just the first of these byproducts.

Hal Alles was a researcher in digital telephony. The fact that he is remembered as the creator of what some consider the first digital additive synthesizer is a quirk of history. Other additive synthesizers had been made at Bell Labs, but these were software programs written for their supersized computers.

Alles needed to sell his digital designs both within and outside a company that had long been the lord of analog, and the pitch needed to be interesting. The synthesizer he came up with was his way of demonstrating the company's digital prowess and entertaining his internal and external clients at the same time. The result was called the Bell Labs Digital Synthesizer, or sometimes the Alles Machine or ALICE.

It should be noted that Hal bears no relation to the computer in 2001: A Space Odyssey. The engineer recalls those heady days in the late sixties and 1970s.  “As a research organization (Bell labs), we had no product responsibility. As a technology research organization, our research product had a very short shelf life. To have impact, we had to create ‘demonstrations’. We were selling digital design within a company with a 100 year history of analog design. I got pretty good at 30 minute demonstrations of the real time capabilities of the digital hardware I was designing and building. I was typically doing several demonstrations a week to Bell Labs people responsible for product development. I had developed one of the first programmable digital filters that could be dynamically reconfigured to do all of the end telephone office filtering and tone generation. It could also be configured to play digitally synthesized music in real time. I developed a demo of the telephone applications (technically impressive but boring to most people), and ended the demo with synthesized music. The music application was almost universally appreciated, and eventually a lot of people came to just hear the music.”

Max Mathews was one of the people who got to see one of these demos, where the telephonic equipment received a musical treatment. Mathews was the creator of the MUSIC series of computer synthesis programming languages. He was excited by what Alles was doing and saw its potential. He encouraged the engineer to develop a digital music instrument.
​
“The goal was to have recording studio sound quality and mixing/processing capabilities, orchestra versatility, and a multitude of proportional human controls such as position sensitive keyboard, slides, knobs, joysticks, etc,” Mathews said. “It also needed a general purpose computer to configure, control and record everything. The goal included making it self-contained and ‘portable’. I proposed this project to my boss while walking back from lunch. He approved it before we got to our offices.”
Picture
Harmonic additive synthesis had already been used back in the 1950s by linguistics researchers who were working on speech synthesis, and Bell Labs was certainly in on the game. Additive synthesis at its most basic works by adding sine waves together to create a timbre. The more common technique until that time had been subtractive synthesis, which uses filters to remove or attenuate parts of a harmonically rich source to shape its timbre.

Computers were able to do additive synthesis with wavetables that had been pre-computed, but it could also be done by mixing the output of multiple sine wave generators. This is essentially what Karlheinz Stockhausen did with Studie II, though he achieved the effect by building up layers of pure sine waves on tape rather than with a pre-configured synth or computer setup.
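
In code the basic operation is just a sum of sine waves. The Python sketch below, using only the standard library and an arbitrary set of partials of my own choosing, writes one second of an additively synthesized tone to a WAV file.

    import math, struct, wave

    SAMPLE_RATE = 44100
    DURATION = 1.0

    # Arbitrary partials: (frequency in Hz, amplitude). Changing these changes the timbre.
    partials = [(220.0, 0.5), (440.0, 0.25), (660.0, 0.15), (880.0, 0.10)]

    frames = bytearray()
    for n in range(int(SAMPLE_RATE * DURATION)):
        t = n / SAMPLE_RATE
        sample = sum(amp * math.sin(2 * math.pi * freq * t) for freq, amp in partials)
        frames += struct.pack("<h", int(max(-1.0, min(1.0, sample)) * 32767))

    with wave.open("additive_tone.wav", "wb") as wav:
        wav.setnchannels(1)       # mono
        wav.setsampwidth(2)       # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

Give each partial its own envelope over time and you are doing, in a few milliseconds, what Stockhausen did with razor blades and tape.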

That method is laborious. A machine that can do the layering for you goes a long way toward freeing up your labor for other parts of making music.
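To make the principle concrete, here is a minimal sketch of additive synthesis in present-day Python with NumPy: a handful of sine-wave partials, each with its own frequency and amplitude, summed into a single waveform. The function name and the numbers are just for illustration; this is not a recreation of anything Bell Labs built.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second

def additive_tone(fundamental, partials, duration=1.0):
    """Sum sine-wave partials into one waveform.

    `partials` is a list of (harmonic_number, amplitude) pairs; each
    partial is a sine wave at harmonic_number * fundamental.
    """
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    wave = np.zeros_like(t)
    for harmonic, amplitude in partials:
        wave += amplitude * np.sin(2 * np.pi * fundamental * harmonic * t)
    # Normalize so the summed partials stay within -1 to 1
    return wave / np.max(np.abs(wave))

# A 220 Hz tone with a gently rolled-off harmonic series
tone = additive_tone(220.0, [(1, 1.0), (2, 0.5), (3, 0.33), (4, 0.25), (5, 0.2)])
```

Change the amplitude of each partial and the timbre changes with it; that is the whole game of additive synthesis, whether it is done on tape, in a wavetable, or in a rack of oscillators.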

ALICE was a hybrid machine in that it used a mini-computer to control a complex bank of sound-generating oscillators. The mini-computer was an LSI-11 from the Digital Equipment Corporation, a cost-reduced version of their PDP-11, a line that stayed in production for some twenty years after its 1970 debut. This controlled the 64 oscillators, whose output was then mixed to create a number of distinct sounds and voices. It had programmable sound-generating functions and the ability to accept a number of different input devices.

The unit was outfitted with two 8-inch floppy drives supplied by Heathkit, who made their own version of the LSI-11 and sold it as the H11. AT&T rigged it out with one of their color video monitors. A custom converter sampled the analog inputs at 7-bit digital resolution 250 times a second. There were a number of inputs for working with ALICE in real time: two 61-key piano keyboards, 72 sliders alongside various switches, and four analog joysticks just to make sure the user was having fun. These inputs were interpreted by the computer, which translated them into parameter changes for the sound generators. The CPU could handle around 1,000 parameter changes per second before it got bogged down.
The sound generators themselves were quite complex. A mere 1,400 integrated circuits were used in their design. Out of the 64 oscillators, the first bank of 32 were used as master signals, which meant ALICE could be expected to achieve 32-note polyphony. The second bank was slaved to the masters and generated a series of harmonics. If this wasn’t enough sound to play around with, ALICE was also equipped with 32 programmable filters and 32 amplitude multipliers. With the added bank of 256 envelope generators, ALICE had a lot of sonic potential and sound paths that could be explored through her circuitry. All of those sounds could be mixed in many different ways into the 192 accumulators she was also equipped with. Each accumulator was then routed to one of the four 16-bit output channels and reconverted from digital back into analog at the audio output.
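The architecture is easier to picture in code. What follows is a loose software analogy in Python, not a description of the actual Bell Labs circuitry: a master frequency sets a voice's pitch, a stack of harmonics is slaved to it, a simple envelope shapes the amplitude, and several voices are summed into one accumulator. All of the names and numbers here are illustrative assumptions.

```python
import numpy as np

SAMPLE_RATE = 44100

def envelope(n_samples, attack=0.05, release=0.3):
    """A bare-bones attack/release amplitude envelope."""
    a = int(attack * SAMPLE_RATE)
    r = int(release * SAMPLE_RATE)
    env = np.ones(n_samples)
    env[:a] = np.linspace(0.0, 1.0, a)   # fade in
    env[-r:] = np.linspace(1.0, 0.0, r)  # fade out
    return env

def voice(master_freq, harmonic_amps, duration=1.0):
    """One voice: a master frequency plus a series of harmonics slaved to it."""
    n = int(SAMPLE_RATE * duration)
    t = np.arange(n) / SAMPLE_RATE
    out = np.zeros(n)
    for k, amp in enumerate(harmonic_amps, start=1):  # the k-th harmonic tracks the master
        out += amp * np.sin(2 * np.pi * master_freq * k * t)
    return envelope(n) * out / np.max(np.abs(out))

# Sum a few voices into one "accumulator" and normalize the mix
mix = voice(220.0, [1.0, 0.4, 0.2]) + voice(330.0, [1.0, 0.3]) + voice(440.0, [0.8, 0.5, 0.1])
mix /= np.max(np.abs(mix))
```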

Waveforms were generated by looking up the amplitude for a given point in time in a 64k-word ROM table. Alles programmed a number of tricks into the table to reduce the number of calculations the CPU needed to run. A set of 255 timers outfitted with 16 FIFO stacks controlled the whole shebang: the user put events into a timestamp-sorted queue that fed them into the generators.
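Both tricks, the lookup table and the timestamped event queue, translate neatly into modern terms. The sketch below is a rough analogy in Python, assuming a single shared sine table read by a phase increment and a heap standing in for the hardware timers and FIFOs; it is not the actual ALICE firmware.

```python
import heapq
import numpy as np

SAMPLE_RATE = 44100
TABLE_SIZE = 65536  # one period of a sine wave, in the spirit of a 64k-word ROM
SINE_TABLE = np.sin(2 * np.pi * np.arange(TABLE_SIZE) / TABLE_SIZE)

def table_oscillator(freq, n_samples, phase=0.0):
    """Read the waveform out of the table instead of computing sin() per sample."""
    step = freq * TABLE_SIZE / SAMPLE_RATE  # table positions advanced per output sample
    indices = (phase + step * np.arange(n_samples)).astype(int) % TABLE_SIZE
    return SINE_TABLE[indices]

# A timestamp-sorted event queue: (start_time_seconds, frequency_hz, duration_seconds)
events = []
heapq.heappush(events, (0.0, 220.0, 0.5))
heapq.heappush(events, (0.5, 330.0, 0.5))
heapq.heappush(events, (0.25, 440.0, 0.75))

# Render the queue into one output buffer, earliest event first
out = np.zeros(int(SAMPLE_RATE * 1.5))
while events:
    start, freq, dur = heapq.heappop(events)
    i = int(start * SAMPLE_RATE)
    n = int(dur * SAMPLE_RATE)
    out[i:i + n] += table_oscillator(freq, n)
```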

Though the designers claimed the thing was portable, all the equipment made it weigh in at a hefty 300 pounds, making it an unlikely option for touring musicians. As the world’s first true digital additive synthesizer, it was quite the boat anchor.

The machine was completed in 1976, and only one full-length composition was ever recorded for it, though a number of musicians, including Laurie Spiegel, whose work will be explored later, played the instrument in various capacities. For the most part, though, the Alles Synth was brushed aside; even if the scientists and engineers at Bell Labs were tasked with pure research, they still had a business to answer to. A marketing use was found for Hal’s invention once again in 1977.

In that year the Motion Picture Academy was celebrating the 50th anniversary of the talkies. The sound work for The Jazz Singer, the first talking picture, had been done by Western Electric with their Vitaphone system. The successful marriage of moving image and sound first seen and heard in that movie wouldn’t have been possible without the technology developed by the AT&T subsidiary, and Ma Bell was still keen to be in on the commemoration of the film. ALICE was chosen as the centerpiece for the event.

A Bell Labs software junky by the name of Doug Bayer was brought in to improve the operating system of the synth and make the human interface a bit more user-friendly. The instrument was then flown to Hollywood at considerable risk. The machine was finicky enough without transporting it; taking it on a plane, where a single bump could bang it up, knock out its components, and send it into meltdown mode, was a real gamble.
​
So they hired musician and composer Laurie Spiegel, who’d already been working at the Labs without pay, to be filmed playing ALICE. The footage would be shown in the event that the musician hired to play it live, Roger Powell, was unable to do so due to a malfunction. This film is the only known recording of the instrument in performance.
Yet to hear how the Bell Labs Digital Synthesizer sounds, look no further than Don Slepian’s album Sea of Bliss. Max Mathews had hired Slepian to work with the synth as an artist in residence between 1979 and 1982. Don had been born into a scientific family, and from an early age he demonstrated both technical talent and musical ability. He had begun making music in 1968, programming his own computers, soldering together his own musical circuits, and experimenting with tape techniques. Working with the Defense Advanced Research Projects Agency (DARPA), Don served as a tester on an early iteration of the internet, and for a time he lived in Hawaii and played as a synthesizer soloist with the Honolulu Symphony. All of this made him a perfect fit as artist in residence at Bell Labs.
​
The results of his work are on the album: epic-length cuts of deep ambient music that bring relaxation and joy to the listener. It’s the audio version of taking Valium. Listen to it and feel the stress of life melt away.
Don Slepian described his 1980 masterpiece for the online Ambient Music Guide. “It’s stochastic sequential permutations (the high bell tones), lots of real time algorithmic work, but who cares? It's pretty music: babies have been born to it, people have died to it, some folks have played it for days continuously. No sequels, no formulas. It was handmade computer music."
The Bell Labs Digital Synthesizer was soon to leave its birthplace after Don had done his magic with the machine. In 1981 ALICE was disassembled and donated to the TIMARA Laboratories at the Oberlin Conservatory of Music.

Oberlin, and by extension TIMARA (Technology in Music and Related Arts), has a history that reaches back to the very beginnings of electronic music in the mid-19th century. None other than Elisha Gray was an adjunct physics professor at the college. He is considered by some to be the father of the synthesizer, thanks to his invention of the musical telegraph and his seventy-plus patents for inventions that were critical to the development of telecommunications, electronic music, and other fields. If it had not been for Gray’s electromechanical oscillator, Thaddeus Cahill would never have been able to create that power-hungry beast of an instrument, the Telharmonium.

The Music Conservatory at Oberlin dates back to 1865, and with the opening of TIMARA in 1967 it joined the ranks of the radio and television stations that had built electronic music studios. The department was founded by Olly Wilson in response to composition students’ demand for classes in electronics, and it became the first of a number of departments in American higher education to create a space for experimentation in analog synthesis and mixed media arts.

Though ALICE is now enshrined in one of the many sound laboratories at TIMARA, her influence continued to be felt after she was sequestered there. A number of commercial synthesizers based on the Alles design were produced in the 1980s.
The Atari AMY sound chip is a case in point, and it was the smallest of the products to be designed. The name stood for Additive Music sYnthesis. It still had 64 oscillators, but they were reduced to a single-IC sound chip, one that had numerous design issues. Additive synthesis could now be done with less, though it never really got into the hands of users. The chip was slated for a new generation of 16-bit Atari computers, for the next line of game consoles, and for use by the company’s arcade division, but AMY never saw the light of day in any configuration. Even after Atari was sold in 1984, she remained waiting in the dark to be used on a project, only to be cut from new products after many rounds at the committee table, where so many dreams wind up dead.

Still other folks in the electronic music industry made use of the principles first demonstrated by ALICE. The Italian company Crumar and Music Technologies of New York got into a partnership to create Digital Keyboards. Like Atari, they wanted to shrink the Alles Machine down. They came up with a two-part design using a Z-80 microcomputer and a single keyboard with limited controls, gave it the unimaginative name Crumar General Development System, and sold it in 1980 for $30,000. Since that was out of the price range of your average musician, they marketed the product to music studios. Wendy Carlos got her hands on one, and the results can be heard on the soundtrack to Tron.
 
Other companies got into the game and tried to produce something similar at lower cost, but none of them really managed to find a home in the market given the attached price tags. When Yamaha released the DX7 in 1983 for $2,000, demand for additive synths tanked. The DX7 implemented FM synthesis, which enabled it to achieve many of the same effects as ALICE with as few as two oscillators. FM synthesis and its relationship to FM radio modulation will be looked at in detail in another article.
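Why two oscillators can go so far is easier to see in a few lines of code. Here is a minimal two-operator FM sketch in Python, using the textbook formulation in which a modulator wobbles the phase of a carrier; the DX7's own operators and algorithms are more elaborate, and the parameter values here are just examples.

```python
import numpy as np

SAMPLE_RATE = 44100

def fm_tone(carrier_freq, ratio=2.0, index=3.0, duration=1.0):
    """Two-operator FM: a modulator at ratio * carrier_freq shifts the carrier's
    phase, spraying sidebands across the spectrum. `index` sets how far the
    sidebands spread, i.e. how bright and complex the tone becomes."""
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    modulator = np.sin(2 * np.pi * carrier_freq * ratio * t)
    return np.sin(2 * np.pi * carrier_freq * t + index * modulator)

bell_like = fm_tone(220.0, ratio=3.5, index=5.0)  # non-integer ratios lean toward bell-like timbres
brassy = fm_tone(220.0, ratio=1.0, index=8.0)     # integer ratios give harmonic, brassy spectra
```

With only those two sine oscillators, sweeping the ratio and index moves through timbres that would take dozens of additive partials to approximate, which goes a long way toward explaining why demand for additive machines dried up.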
It had all started out as a way for Hal Alles to look at potential problems in digital communications, such as switching, distortion, and echo. It ended up becoming a tool for extending human creativity.
 
References: 
http://120years.net/bell-labs-hal-alles-synthesiser-hall-alles-usa-1977/
https://en.wikipedia.org/wiki/Bell_Labs_Digital_Synthesizer
http://www.atarimuseum.com/computers/8BITS/XL/ASG/Chips/AMY/index.html
https://en.wikipedia.org/wiki/TIMARA

Read the other articles in the Radiophonic Laboratory series.
0 Comments

Musician of Sounds: Noise, Pierre Schaeffer, and Musique Concrète

4/9/2020

0 Comments

 
IS THERE ANY ESCAPE FROM NOISE?

In our machine-dominated age there is hardly any escape from noise. Even in the most remote wilderness outpost, planes fly overhead to disrupt the sound of the wind in the trees and the birds on the wing. In the city, noise is so much a part of the background that we have to tune in to it in order to notice it at all, because we’ve become adept at tuning it out. Roaring motors, the incessant hum of the computer fan, the refrigerator coolant, metal grinding at the light industrial factory down the street, the roar of traffic on I-75, the beep of a truck backing up: these and many other noises are all part of our daily soundscape.
​
Throughout human history musicians have sought to mimic the sounds around them. The gentle drone of the tanpura, a stringed instrument that accompanies sitar, flute, voice and other instruments in classical Indian music, was said to mimic the gentle murmur of rivers and streams. Should it be a surprise, then, that in the nineteenth and twentieth centuries musicians and composers started to mimic the sounds of the machines around them? In bluegrass and jazz there are a whole slew of songs that copied the entrancing rhythms of the train. As more and more machines filled up the cities, is it any wonder that the beginnings of a new genre of music, noise music, started to emerge? Is it any wonder that, as acoustic and sound technology progressed, our music-making practices also came to be dominated by machines?
THE ART OF NOISES

And just what is music anyway? There are many definitions from across the span of time and human culture, each one made to fit a particular type, style, and practice of music.

In his 1913 manifesto The Art of Noises the Italian Futurist thinker Luigi Russolo argued that the human ear had become accustomed to the speed, energy, and noise of the urban industrial soundscape. In reaction to those new conditions he called for a new approach to composition and musical instrumentation. He traced the history of Western music back to Greek musical theory, which was based on the mathematical tetrachord of Pythagoras and did not allow for harmony. This changed during the Middle Ages, first with the development of plainchant in Christian monastic communities. Plainchant employs the modal system, which is used to work out the relative pitches of each line on the staff, and it marked the first revival of musical notation after knowledge of the ancient Greek system was lost. In the late 9th century, plainsong began to evolve into organum, which led to the development of polyphony. Until then the chord, as such, did not exist.

Russolo thought of the chord as the "complete sound." He noted that chords developed slowly over history, moving from the "consonant triad to the consistent and complicated dissonances that characterize contemporary music." He pointed out that early music tried to create sounds that were sweet and pure, and that it then evolved to become more and more complex. By the time of Schoenberg and the twelve-tone revolution of serial music, musicians were seeking new and more dissonant chords. These dissonant chords brought music ever closer to his idea of "noise-sound."

With the relative quiet of nature and pre-industrial cities disturbed, Russolo thought a new sonic palette was required. He proposed that electronics and other technology would allow futurist musicians to substitute for the limited variety of timbres available in the traditional orchestra. His view was that we must "break out of this limited circle of sound and conquer the infinite variety of noise-sounds." This would be done with new technology that would allow us to manipulate noises in ways that never could have been done with earlier instruments. In that, he was quite correct.

Russolo wasn’t the only one thinking about the aesthetics of noise or seeking new definitions of music. The French Modernist composer Edgard Varèse said that “music is organized sound,” a statement he used as a guidepost for his aesthetic vision of "sound as living matter" and of "musical space as open rather than bounded". Varèse thought that "to stubbornly conditioned ears, anything new in music has always been called noise", and he posed the question, "what is music but organized noises?" An open view of music allows new elements to enter the development of musical traditions, where a bounded view would try to keep out those things that did not fit the preexisting definition.
​
Out of this current of noise music, initiated in part by Russolo and Varèse, a new class of musician would emerge: the musician of sounds.
MUSICIAN OF SOUNDS
​
Fellow Frenchman Pierre Schaeffer developed his theory and practice of musique concrète during the 1930s and ’40s and saw it spread in the ’50s to people such as Karlheinz Stockhausen, the founders of the BBC Radiophonic Workshop, F.C. Judd and many others. Musique concrète was a practical application of Russolo’s idea of “noise-sound” and an exploration of the expanded timbres made possible by then-new studio techniques. It was also a way of making music according to the “organized sound” definition, and it was distinct from previous methods in being the first type of music completely dependent on recording and broadcast studios.
In musique concrète sounds are sampled and modified through the application of audio effects and tape manipulation techniques, then reassembled into a form of montage or collage. It can feature sounds derived from recordings of musical instruments, the human voice, field recordings of the natural and man-made environment, or sounds created in the studio. Schaeffer was an experimental audio researcher who combined his work in the field of radio communications with a love for electro-acoustics. Because Schaeffer was the first to use and develop these studio music-making methods he is considered a pioneer of electronic music and one of the most influential musicians of the 20th century. The recording and sampling techniques he was the first to practice are now part of the standard operating procedure of nearly every record production company around the world. Schaeffer’s efforts and influence in this area earned him the title “Musician of Sounds.”

Schaeffer, born in 1910, had a wide variety of interests throughout his eighty-five years on this planet. He worked variously as a composer, writer, broadcaster, engineer, musicologist, and acoustician, and his work was innovative in both science and art. It was after World War II that he developed musique concrète, all while continuing to write essays, short novels, biographies, and pieces for the radio. Much of his writing was geared towards the philosophy and theory of music, which he later demonstrated in his compositions.

It is interesting to think of the influences on him as a person. Both his parents were musicians, his father a violinist and his mother a singer, but they discouraged him from pursuing a career in music and instead pushed him into engineering. He studied at the École Polytechnique, where he received a diploma in radio broadcasting. He brought the perspective and approach of an engineer, along with his inborn musicality, to bear on his various activities.

Schaeffer got his first telecommunications gig in 1934 in Strasbourg. The next year he got married, and the couple had their first child before moving to Paris, where he began work at Radiodiffusion Française (later Radiodiffusion-Télévision Française, or RTF). As he worked in broadcasting he started to drift away from his initial interest in telecommunications and towards music. When these two sides met he really began to excel.
​
After convincing the management at the radio station of the alternate possibilities inherent in its audio and broadcast equipment, as well as the possibility of using records and phonographs as a means for making new music, he started to experiment. He would record sounds to phonograph discs and speed them up, slow them down, play them backwards, run them through other audio processing devices, and mix sounds together. While all this is just par for the course in today’s studios, it was the bleeding edge of innovation at the time.
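Those same manipulations amount to a few lines of array arithmetic today. A minimal sketch, assuming a mono recording already loaded into a NumPy array: playing backwards is a reversal, changing speed is a crude resampling that shifts pitch along with tempo (just as it did on Schaeffer's turntables), and mixing is addition. The function names are only illustrative.

```python
import numpy as np

def play_backwards(samples):
    """Reverse the recording: the record-spun-backwards (or tape-flipped) effect."""
    return samples[::-1]

def change_speed(samples, factor):
    """Resample by `factor`: 2.0 doubles the speed and raises the pitch an octave,
    0.5 halves it. Linear interpolation stands in for the varispeed turntable."""
    old_idx = np.arange(len(samples))
    new_idx = np.arange(0, len(samples) - 1, factor)
    return np.interp(new_idx, old_idx, samples)

def mix(*tracks):
    """Overlay several recordings, padding the shorter ones with silence."""
    longest = max(len(t) for t in tracks)
    out = np.zeros(longest)
    for t in tracks:
        out[:len(t)] += t
    return out / max(1.0, np.max(np.abs(out)))

# For example, layer a half-speed copy under a reversed copy of the same sound:
# sound = ...  # a mono NumPy array loaded from any recording
# collage = mix(play_backwards(sound), change_speed(sound, 0.5))
```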
With these techniques mastered he started to work with people he met via the RTF. A natural outgrowth of all this experimentation was a style that lent itself to the avant-garde of the day, and the sounds he produced challenged the way music had been thought of and heard. Using his own and his colleagues’ engineering acumen, new electronic instruments were built to expand on the initial processes in the audio lab, which eventually became formalized as the Club d’Essai, or Test Club.
CLUB D’ESSAI

In 1942 Pierre founded the Studio d’Essai, later dubbed the Club d’Essai, at RTF. The Club was active in the French Resistance during World War II and later became a center of musical activity. It started as an outgrowth of Schaeffer’s radiophonic explorations, but with a focus on putting French radio to work for the Resistance, and it was responsible for the first broadcasts to liberated Paris in August 1944. He was joined in the leadership of the Club by Jacques Copeau, the theatre director, producer, actor, and dramatist.
It was at the Club that many of Schaeffer’s ideas were put to the test. After the war Schaeffer wrote a paper that discussed how sound recording creates a transformation in the perception of time, thanks to the ability to slow down and speed up sounds. The essay showed his grasp of sound manipulation techniques, which were also demonstrated in his compositions.

In 1948 Schaeffer initiated a formal “research into noises” at the Club d’Essai, and on October 5th of that year he presented the results of his experimentation at a concert given in Paris. Five works for phonograph, known collectively as Cinq études de bruits (Five Studies of Noises) and including Etude violette (Study in Purple) and Etude aux chemins de fer (Study of the Railroads), were presented. This was the first flowering of the musique concrète style, and from the Club d’Essai another research group was born.
GRMC: GROUPE DE RECHERCHE DE MUSIQUE CONCRÈTE

In 1949 another key figure in the development of musique concrète stepped onto the stage. By the time Pierre Henry met Pierre Schaeffer via the Club d’Essai, the twenty-one-year-old percussionist-composer had already been experimenting with sounds produced by various objects for six years. He was obsessed with the idea of integrating noise into music, and had studied with the likes of Olivier Messiaen, Nadia Boulanger, and Félix Passerone at the Paris Conservatoire from 1938 to 1948.

For the next nine years he worked at the Club d’Essai studio at RTF. In 1950 he collaborated with Schaeffer on the piece Symphonie pour un homme seul. Two years later he scored the first musique concrète to appear in a commercial film, Astrologie ou le miroir de la vie. Henry remained a very active composer and went on to score a number of other films and ballets.

Together the two Pierres were quite a pair, and in 1951 they founded the Groupe de Recherche de Musique Concrète (GRMC). This gave Schaeffer a new studio, which included a tape recorder. That was a significant development for him, as he had previously worked only with phonographs and turntables to produce music. Tape sped up the work process and added a new dimension: the ability to cut up and splice tape into new arrangements, something not possible on a phonograph. Schaeffer is generally acknowledged as the first composer to make music using magnetic tape.

Eventually Schaeffer had enough experimentation and material under his belt to publish À la Recherche d'une Musique Concrète ("In Search of a Concrete Music") in 1952, which was a summation of his working methods up to that point.

Schaeffer remained active in other aspects of music and radio throughout the ’50s. In 1954 he co-founded Ocora, a music label and a facility for training broadcast technicians; the name stood for “Office de Coopération Radiophonique.” The purpose of the label was to preserve, via recordings, the rural soundscapes of Africa, work that also put Schaeffer at the forefront of field recording and the preservation of traditional music. The training side of the operation prepared technicians to work with the African national broadcasting services.

His last electronic noise etude, the Etude aux objets (Study of Objects), was realized in 1959.
​
For Pierre Henry’s part, two years after leaving the RTF he founded, with Jean Baronnet, the first private electronic studio in France, the Apsone-Cabasse Studio. Later Henry paid tribute to Schaeffer by composing his Écho d’Orphée.
A CONCRETE LEGACY

Musique remains concrete. Schaeffer had known of the “noise orchestras” of his predecessor Luigi Russolo, but he took the concept of noise music and developed it further by making it clear that any and all sounds had a part to play in the vocabulary of music. He created the toolkit later experimenters took as a starting point. He was the original sampler. In all his work he emphasized the role of play, or jeu, in making music. His idea of jeu came from the French verb jouer, which shares the same dual meaning as the English word play: to make pleasing sounds or songs on a musical instrument, and to engage with things as a way of enjoyment and recreation. Taking sounds and manipulating them, seeing what certain processes will do to them, is at the heart of discovery and play inside the radiophonic laboratory. The ability to play opens up the mind to new possibilities.
***

This article originally appeared in the April 2020 edition of the Q-Fiver.

If you enjoyed this article please consider reading the rest of the Radiophonic Laboratory series. 

0 Comments

