Another way Information Theory has been used in the making of music is through the sonification of data. Sonification is the audio equivalent of visualizing data as charts, graphs, and plotted points on maps. Audio, here meaning sounds that fall outside the categories of speech, has a variety of advantages over other forms of conveying information. The spatial position, tempo, frequency, and amplitude of a sound can each be used to relay a different message.
One of the earliest and most successful tools to use sonification is the Geiger counter, dating from 1908. Its sharp clicks alert the user to the level of radiation in an area and are familiar to anyone who is a fan of post-apocalyptic sci-fi zombie movies. The faster the tempo of the clicks, the higher the level of radiation detected.
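As a sketch of the principle (not a model of any real instrument, and with function names of my own invention), the clicks can be simulated as a Poisson process: a higher radiation level means a shorter average gap between clicks, which the ear hears as a faster tempo.

```python
import random

def geiger_click_times(rate_hz, duration_s, seed=0):
    """Simulate Geiger-counter clicks as a Poisson process.

    Higher radiation -> higher click rate -> faster perceived tempo.
    Returns the list of click times (in seconds) within duration_s.
    """
    rng = random.Random(seed)
    t, clicks = 0.0, []
    while True:
        # exponentially distributed inter-arrival time between clicks
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            return clicks
        clicks.append(t)

low = geiger_click_times(rate_hz=2, duration_s=10)    # background level
high = geiger_click_times(rate_hz=50, duration_s=10)  # "hot" zone
print(len(low), len(high))  # many more clicks in the hot zone
```

The listener needs no training to decode this mapping, which is part of why the Geiger counter's display has endured for over a century.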
A few years after the Geiger counter was invented, Dr. Edmund Fournier d'Albe came up with the optophone, a system that used photosensors to detect black printed type and convert it into sound. Designed to be used by blind people for reading, the optophone played a group of notes: g c' d' e' g' b' c. The notes corresponded with positions on the reading area of the device, and a note was silenced if black ink was sensed. These missing notes showed where the black ink was positioned, and in this way a user could learn to read a text via sound. Though it was an ingenious invention, the optophone didn't catch on.
Other areas where sonification did get used include pulse oximeters (a device that measures oxygen saturation in the blood), sonar, and auditory displays inside aircraft cockpits, among others.
In 1974 a trio of experimental researchers at Bell Laboratories conducted the earliest work on auditory graphing. Max Mathews, F.R. Moore, and John M. Chambers wrote a technical memorandum called “Auditory Data Inspection.” They augmented a scatterplot (a diagram using Cartesian coordinates to display values for two or more variables in a data set) with a variety of sounds that changed frequency, spectral content, and amplitude modulation according to the points on their diagram.
Two years later the technology and science philosopher Don Ihde wrote in his book Listening and Voice: Phenomenologies of Sound, "Just as science seems to produce an infinite set of visual images for virtually all of its phenomena--atoms to galaxies are familiar to us from coffee table books to science magazines; so 'musics,' too, could be produced from the same data that produces visualizations." Ihde pointed to using the tool of sonification for creativity, so that we might, in effect, be able to listen to the light of the stars, the decomposition of soil, the rhythm of blood pulsing through the veins, or to make a composition out of the statistics from a series of baseball games.
It wasn’t long before musical artists headed out to carve a way through the woods where Ihde had suggested there might be a trail.
There are many techniques for transforming data into audio. The range of sound, its many variables, and a listener's perception give ample parameters for transmitting information as audio. Increasing or decreasing the tempo, volume, or pitch of a sound is a simple method. For instance, in a weather sonification app temperature could be read as the frequency of a tone that rises in pitch as the temperature climbs and lowers as it falls. The percentage of cloud cover could be connected to another sound that increases or decreases in volume according to coverage, while wind speed could be applied as a resonant filter across another tone. The stereo field could also be used to portray information, with one set of data coming in on the left channel and another on the right.
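A minimal sketch of this kind of parameter mapping, with invented ranges and function names (any real app would tune these by ear):

```python
def weather_to_sound(temp_c, cloud_pct, temp_range=(-20.0, 40.0),
                     freq_range=(220.0, 880.0)):
    """Map temperature linearly onto pitch and cloud cover onto volume."""
    lo_t, hi_t = temp_range
    lo_f, hi_f = freq_range
    # clamp temperature, then scale it into the frequency range
    x = min(max(temp_c, lo_t), hi_t)
    freq_hz = lo_f + (x - lo_t) / (hi_t - lo_t) * (hi_f - lo_f)
    amp = cloud_pct / 100.0   # 0% cover = silence, 100% = full volume
    return freq_hz, amp

print(weather_to_sound(10.0, 50.0))  # (550.0, 0.5): a mild, half-cloudy day
```

Even with only two parameters, a listener can track two independent data streams at once, which is the core promise of sonification over a single-line chart.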
The audio display of data is still in a wild west phase of development. No standard set of techniques has been adopted across the board. Due to the variables of the information presented, and the settings where it is presented, researchers in this field are working to determine which sets of sounds are best suited to particular applications. Programmers are writing programs, or adapting existing ones, to parse streams of information and render them according to sets of sonification rules.
One particular technique is audification. It can be defined as a "direct translation of a data waveform to the audible domain." Data sequences are interpreted and mapped in time to an audio waveform. Various aspects of the data correspond to various sound pressure levels. Signal processing and audio effects are used to further translate the sound as data. Listeners can then hear periodic components as frequencies of sound. Audification thus requires large sets of data containing periodic components.
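A toy audification along these lines, using only the Python standard library (the normalization choice, sample rate, and file name are my own assumptions):

```python
import math, struct, wave

def audify(data, sample_rate=8000, path="audified.wav"):
    """Audification: map a data series directly onto audio sample values.

    The data is normalized to [-1, 1] and written out as 16-bit mono PCM,
    so periodic components in the data become audible frequencies.
    """
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0            # avoid dividing by zero on flat data
    samples = [2.0 * (v - lo) / span - 1.0 for v in data]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)              # 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                               for s in samples))
    return len(samples)

# a periodic component in the data becomes an audible tone:
series = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
audify(series)  # one second of data heard as a 440 Hz tone
```

Because each data point becomes a single audio sample, only long data series with periodic structure produce anything audible, which is why audification favors large scientific datasets.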
Audification as a named technique was developed by Gregory Kramer in 1992, with the goal of letting listeners hear the way scientific measurements sounded. It has a number of applications in medicine, seismology, and space physics. In seismology, it is used as an additional method of earthquake prediction alongside visual representations. NASA has applied audification to the field of astrophysics, using sounds to represent various radio and plasma wave measurements. Many musicians are finding inspiration in the sets of data culled from astronomy and astrophysics for the creation of new works. It's an exciting development in the field of music.
American composer Gordon Mumma was inspired by seismography and incorporated it into his series of piano works called Mographs. A seismic wave is energy moving through the Earth's layers, caused by earthquakes, volcanic eruptions, magma movement, large landslides, and large man-made explosions. All of these events give out low-frequency acoustic energy that can be picked up by a seismograph. A seismogram is covered with wiggly lines; these are the seismic waves the seismograph has recorded. Most of the waves are so small that no one felt them. Tiny waves called microseisms can even be caused by ocean waves hitting the beach, the heavy traffic of rumbling semi-trucks, and other things that might shake the seismograph. Little dots along the graph mark the minutes, so the seismic waves can be placed in time. When there is seismic activity, the P-wave is the first wave to rise above the normal background of microseisms. P-waves are the fastest-moving seismic waves and are usually the first to be recorded by a seismograph. The next set of waves on the seismogram are the S-waves, which have a higher frequency than the P-waves and appear bigger on the seismogram.
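Seismologists often automate the spotting of such an arrival with a short-term/long-term average (STA/LTA) ratio. This toy version (a standard picking technique, not Mumma's method) shows how a P-wave onset stands out against the background of microseisms:

```python
def sta_lta(signal, short_win=5, long_win=50):
    """Short-term / long-term average ratio over a signal.

    The ratio hovers near 1 in steady background noise and spikes when
    a seismic arrival (e.g. a P-wave) suddenly raises the amplitude.
    """
    ratios = []
    for i in range(long_win, len(signal)):
        sta = sum(abs(x) for x in signal[i - short_win:i]) / short_win
        lta = sum(abs(x) for x in signal[i - long_win:i]) / long_win
        ratios.append(sta / lta if lta else 0.0)
    return ratios

quiet = [0.1, -0.1] * 100          # background microseisms
event = quiet + [5.0, -5.0] * 20   # a sudden large arrival
r = sta_lta(event)
print(max(r) > 3.0)  # True: the arrival stands out sharply
```

The same contrast between background rumble and sudden onset is what made seismograms such fertile structural material for a composer.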
Mumma based the structure and activity of each Mograph around data derived from seismogram recordings of earthquakes and underground nuclear explosions. The seismograms he was looking at were part of Cold War research that attempted to verify the differences between various seismic disturbances. The government wanted to know if it was a nuke that had hit San Francisco or just another rumbling from the earth. For Mumma, the structural relationships between the way the patterns of P-waves and S-waves traveled in time, and their reflections, had the “compositional characteristics of musical sound-spaces”. One of the strategies he used to sonify the seismograms into music was to limit the pitch vocabulary and intervals in each work. This gave Mumma the ability to draw attention to the complexity of time and rhythmic events within each Mograph.
With these themes in mind, listening to the Mograph is like hearing tectonic plates being jostled around, here hitting each other abruptly, and there in a slow silence that grinds as two plates meet. It is the sound of very physical waves rumbling through earth and stone and dirt, and beneath concrete, as interpreted by the piano, or pairs of pianos used in some arrangements. In making these pieces from seismograph data Gordon Mumma sketched a process for others to use in future works of sonification.
By the Code of Soil
Another down-to-earth sonification project deals with the soil beneath our feet. It started out as a commission for artist Kasia Molga from the GROW Observatory, a citizen science organization working to take action on climate change, build better soil, and grow healthier food, using data provided by the European Space Agency's Copernicus satellites to achieve these goals.
Kasia began her project by analyzing the importance and meaning of soil, and she looked at what is happening to soil now and how that impacts farmers, urbanites, and, well, everyone. She listened to the concerns of the scientists at GROW and spent a chunk of time parsing the data from the GROW sensors and the Sentinel-1A satellite that is used to assess soil moisture across Europe.
In the course of her background work Kasia wondered how she could get important information about soil health out to the largest number of people, and she hit upon the idea of using a computer virus. The resulting project, By the Code of Soil, ended up working with people's computers and smartphones. The program didn't install any malware, self-replicate, or actually infect anyone's computer, but rather worked as a way to interrupt those people who spend most of their time in front of screens and remind them of the real analog world underneath their feet.
She recruited a few other people to work with her on the project: tech artists Erik Overmeire and Dan Hett, and musician Robin Rimbaud, aka Scanner. Their project turns soil data into digital art that appears on a participant's computer (downloaded as an app) whenever the land-mapping satellite Sentinel-1A passes overhead.
The Sentinel satellite missions include radar and super-spectral imaging for land, ocean, and atmospheric monitoring. Each Sentinel mission is based on a constellation of two satellites that fulfill the revisit and coverage requirements of that individual mission, providing a robust dataset for researchers to access here on Earth. Sentinel-1 provides all-weather, day-and-night radar imaging for land and ocean services. GROW Observatory has gotten involved by deploying thousands of soil sensors all across Europe to improve the accuracy of the observations from the orbiting birds.
Kasia designed the video art for the piece. Twice a day the Sentinel-1 passes overhead in Europe and the artwork and sounds change in real time as driven by the data.
Kasia writes, “The artwork takes control of user’s computer for a minute or two in full screen mode. It manifests itself in a quite unexpected manner – that is it only will become visible on the computer when the Sentinel-1A satellite passes by the computer’s location – approximately twice within 24 hours but never at the same time of the day.” This is how it reacts like a virus, erupting unexpectedly (unless you happen to be tracking the movement of the satellite).
To portray the soil data visually Kasia started with a pixel and a matrix. She thought of these as single grains of soil, from which something else can emerge and be created. She used visual white noise, like that of a TV tuned to a channel with no broadcast, to show a signal coming out of the noise when the satellite passes, activating the algorithm written for the piece. “Various configurations of the noise – its frequencies, shapes, speed of motion and sizes – reflect the moisture, light, temperature and texture of the land near to the participant’s computer based on its IP address.”
Meanwhile Scanner handled the sound design for the project. He took a similar approach to Kasia and looked at the granular aspects of sound. “Trying to score data was a seemingly impossible task. How to soundtrack something that is ever changing, ever developing, ever in flux, refusing to remain still. Most times when one accompanies image with sound the image is locked, only to repeat again and again on repeated viewing. By the Code of Soil refuses to follow this pattern. Indeed it wasn’t until I watched the work back one evening, having last seen it the previous morning, that I realized how alive data can really be.
The only solution sonically was to consider sound, like soil, as a granular tool. The sound needed to map the tiniest detail of alterations in the data received so I created sounds that frequently last half a second long and map these across hundreds of different possibilities. It was like a game of making mathematics colorful and curiously one can only hear it back by following the App in real time. I had to project into the future what I felt would work most successfully, since I never knew how the data would develop and alter in time either. As such the sound is as alive as the images, as malleable as the numbers which dictate their choices. Data agitates the sound into a restless and constantly mutable soundscape.”
He spent many hours designing a library of sounds with Native Instruments Reaktor and GRM Tools and then mapping them into families. These families of sound were in turn mapped onto various aspects of the data. With data from the ground sensors and from the satellite feeding into the program, different sets of sounds and visuals were played according to the system.
The success of this project for Kasia Molga and Scanner has led to them working together again in creating another multimedia work, Ode to Dirt, using soil data as a source code, for content, and inspiration. In this piece “(de)Compositions bridges the source (input) and the data (output) through inviting viewers to take part in a multi sensory experience observing how the artwork - a fragment of the ‘land’ - changes through time - its form, sound and even smell - determined by the activities of the earthworms.”
READING MUSIC: LISTENING AS INFORMATION EXTRACTION
Many musicians know how to read sheet music. For composers it’s a basic tool. But what if average people learned how to read music, that is, listen to a composition and extract information from it as if it were a couple of paragraphs of text, or for really long works, a whole book?
It strikes me that this is a distinct possibility as the field of sonification grows. Just as we have learned to signify and interpret letters and words, we may eventually come to have another shared grammar of sound that allows people to listen to the music of data and interpret that text with our ears.
This new way of reading music as information has the possibility of transforming the field of radio as the imagination is opened up to new ways of receiving knowledge. It would be interesting to create radio that included sonified data as a regular part of news stories.
This project of mapping knowledge to sound is implicit in Hesse’s description of the Glass Bead Game. Sonification is another way to bring it about as a reality. Yet to make the most of this listening opportunity, to listen to music in a way analogous to reading a book, we will have to grow new organs of perception. Pauline Oliveros started the work of carving out new pathways for the way we perceive the world in her Deep Listening workshops, concerts and work in general. This work is being continued by her partner Ione, and others trained in the skills of Deep Listening. Kim Cascone has also taught workshops on the subject of what he calls Subtle Listening. Through a variety of meditation and other exercises Kim teaches his students how to “grow new organs of perception”. Perhaps through techniques such as these we may learn to listen to data in a way that engages the imagination and transforms it into knowledge.
Listening and Voice: Phenomenologies of Sound by Don Ihde, State University of New York Press, 2007
David Tudor & Gordon Mumma, Rainforest / 4 Mographs, Sections X and 7 from Gestures, New World Records, 2006
Robin Rimbaud (project documentation sent in personal communication, September 29 2020)
Read the rest of the RADIOPHONIC LABORATORY series.
Karlheinz Stockhausen’s opera cycle LICHT is many things and as a great work of art it is subject to multiple, if not endless, interpretations. These interpretations are multiple because the opera is made up of living symbols. As Carl Jung taught, it is possible to distinguish between a symbol and a sign. A symbol is the best possible expression for something that is unknown, whereas a sign is something specific, such as the insignia worn by a military officer showing his specific rank.
For this work the specific and very rich symbolism of LICHT will be set aside to look at it from a structural and systems point of view. The way Stockhausen gave his work specific limitations shaped the work in unique ways. His adept and intuitive grasp of combinatorial procedures within the limits of the system gave him a wide ranging freedom to play with the materials he had chosen, shaping the raw ingredients into an astonishing and sensual feast of sound, color, and movement.
Opening up the lid of the opera cycle it’s possible to see how its individual components create a musical engine whose individual circuits sync together in a series allowing for a dynamic flow of energies and psychoacoustic forces. Let’s look under the hood of LICHT to see how its various pieces fit together.
Conception of LICHT: Formula & Super Formula
Great ideas often come as revelatory seeds into the minds of those who are prepared. By the mid-seventies Stockhausen had been composing for a quarter of a century and he had already explored a vast territory of sound, implementing new ideas for the arrangement of music in time and space. He had played with intuitive music and aleatory processes, and had mastered new electronic music techniques in the studios of WDR, just for starters. The soil of his mind and spirit was fertile, waiting for the next big idea to be planted.
Another tactic basically invented by Stockhausen was formula composition, which came out of his deep engagement with serialism. It involves the projection, expansion, and ausmultiplikation of either a single melody-formula or a two- or three-voice contrapuntal construction. In serial music the structuring features remain basically abstract, but in formula composition properties such as duration, pitch, tempo, timbre, and dynamics are also specified by the formula. By using a concise and specific tone succession based on the single melody formula, both the macro structure and micro details of the composition can be derived.
The roots of his method of formula composition can be traced back to his once-withdrawn orchestral piece Formel, where the first basic pattern of notes is gradually transformed over the course of the work. The central pitch is first broadened out before the notes are removed, leaving only the low and high extremes. He continued to use serial operations on his next batch of works, Kreuzspiel and Punkte, and then introduced musical pointillism into the methods as explored in Kontrapunkte and Gruppen.
Then for a time he moved on to other musical tactics and explorations, but came back to the practice with ferocity in Mantra from 1970. Written for two ring-modulated pianos, the piece also requires the pianists to play a set of chromatic cymbals and a wood block. One of the players also has a short-wave radio tuned to a station sending Morse code, or, when CW isn't readily available live on the air, a tape recording of Morse code is played. Mantra was the first composition in which he used the term formula, and it was one of many watershed moments in his musical thinking. The formula involved the expansion and contraction of counterpointed melodies.
His next piece to use formula composition was Inori from 1974. By this time Stockhausen had already been working extensively with writing music that incorporated elements of theater. Inori took it to another level and he had the insight that he could use the formula, not just for music, but as a way to compose gestures. This was another component that would become essential in LICHT.
Inori is a long work, with performances lasting around seventy minutes. The formula for the piece is made up of fifteen notes grouped into phrases of 5, 3, 2, 1, and 4 pitches respectively. When the formula is used on the macro scale of the work, these five phrases are split into five segments that Stockhausen used to create a narrative sequence. Robin Maconie says it “lead[s] from pure rhythm . . . via dynamics, melody, and harmony, to polyphony: —hence, a progression from the primitive origin of music to a condition of pure intellect. The entire work is a projection of this formula onto a duration of about 70 minutes”.
In 1977 Stockhausen went back to Japan to work on a commission for the National Theater of Tokyo. The idea of intermodulation in music had come to him during his first Japanese commission, Telemusik, and he had played his music alongside nineteen ensemble musicians in the special spherical chamber designed for him at the World Fair in Osaka in 1970, for about five and a half hours a day, 183 days in a row. Japan had been a good country for his musical expression. The piece he came to work on when LICHT was conceived was to be written for traditional Gagaku orchestra and Noh actors. The dramatic elements for the production, however, came to him in a dream, just one of many dreams that gave him direct inspiration for compositions. While composing what became Der Jahreslauf (Course of the Years), he had a revelation about a way to represent different levels of time by different instrument groups: millennia are depicted by three harmoniums, centuries by an anvil and three piccolos, decades by a bongo and three saxophones, and years by a bass drum, harpsichord, and guitar. These instrument groups became representations of vast forces and scales of time.
This idea of composing music around the theme of various increments of time stayed with the composer for the rest of his life. While working on this commission, another idea was also transmitted into his mind, the super-formula that became the basis for LICHT. In a flash a small seed became the basis for a work of cosmic proportions. Subsequently he used Der Jahreslauf as the first act of Dienstag aus LICHT (Tuesday from Light).
In LICHT he realized his formula technique could be considerably expanded. The entire cycle of seven operas is based on three counterpointed melody formulas. Each of these is associated with one of the three principal characters that make up the dramatic element of the production. (Stockhausen himself said the formulas are the characters.) The melodies define the tonal center and durations of scenes and, zooming in, give detailed melodic phrasing to more refined elements. The three characters are Eve, Lucifer, and Michael, and they are each associated with a specific instrument: basset horn, trombone, and trumpet respectively.
This explains formula composition, but what about a super-formula?
In 1977 Stockhausen had been composing for just over twenty-five years. In the super-formula he synthesized nearly all of his musical ideas into a musical tool that would occupy him for the next twenty-seven years until 2003 when the last bars for Sonntag aus LICHT were drying on the staff paper.
He had the insight to take the three formulas he had come up with for Eve, Lucifer and Michael and layer them horizontally on top of each other to make the super-formula. Now they existed as one, each with their own layer, named after the character, or force, in question. The super-formula then gets subdivided again, vertically, into seven portions, of two to four measures each. These seven vertical rows form the days of the week.
Combined, the horizontal and vertical rows make up the rich matrix out of which the overall structure of LICHT is built. To expand the formula in time, every quarter note of the super-formula is equal to 16 minutes of music. This is how the maestro (or magister) used it to determine the durations of the opera cycle's various acts and scenes.
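That scaling is simple arithmetic; a small sketch (the segment length here is invented for illustration, not taken from the score):

```python
from fractions import Fraction

# Stockhausen's scaling: one quarter note of the super-formula = 16 minutes
QUARTER_NOTE_MINUTES = 16

def segment_minutes(quarter_notes):
    """Project a span of the super-formula onto real performance time."""
    return Fraction(quarter_notes) * QUARTER_NOTE_MINUTES

# a hypothetical segment lasting three and a half quarter notes
print(segment_minutes(Fraction(7, 2)))  # 56 minutes of music
```

A few measures of melody thus dictate the proportions of entire acts, which is how the micro and macro levels of the cycle stay locked together.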
Stockhausen also decided to create a kind of skeleton-key, bare-bones version of the super-formula for each of the three characters. These he called “nuclear formulas” (Kernformeln), and they consisted of just the pitches, durations, and dynamics. Boiling the bones down even further provides the broth that the music is bathed in: when the nuclear formulas are reduced to just their notes, what is left is essentially a serialist tone row. These are known as the kernels, central tones, or nuclear tones. Nuclear, because they form the very atoms of the music.
With all of this in place the fun has a chance to begin. The super-formula can now be used in all manner of ways. Sometimes Stockhausen employed it in an inverted or retrograde fashion (upside down or backwards). It is very often stretched out across the time frame of scenes and whole acts. Other times it is transposed vertically. Once the listener becomes familiar with each of the formulas for the characters or forces, it is possible to pick out those forces at work in the music even though the formula is not really used as a recurring theme in the typical sense of classical music. Rather, as Ed Chang said, “In LICHT, the MICHAEL, EVE and LUCIFER formulas are used more as structural forces whose tonal characteristics exert a kind of planetary gravity over the surrounding musical ether.”
LICHT is a complete system. The super-formula, nuclear kernels, and nuclear tones form the mathematical and musical parts of the system's ecology. The content of the opera, its symbolism based around the days, and the spiritual realities of Eve, Michael, and Lucifer are another aspect of the system. All of this gave Stockhausen the raw material out of which to craft his magnum opus. The music and symbolism mix together and all are now subject to a remarkable game of combination and recombination. The system of LICHT forms the matrix of possibilities, and displayed within that matrix is an extraordinary blending and synthesis of constituent forms.
The idea of ausmultiplikation, which can be translated as "multiplying-out," bears further examination in terms of how formula composition creates musical forms mirrored on the macro and micro scales. Stockhausen described the technique as one where a long note is replaced by shorter "melodic configurations, internally animated around central tones". This bears a strong resemblance to the Renaissance musical technique of diminution, or coloration, where long notes are divided into a series of shorter, frequently melodic, values. But Stockhausen also used the term to refer to substituting a complete or partial formula for a single long tone, often as a background-layer projection of the formula. Formula composition and its various components, like ausmultiplikation, can be seen as Stockhausen's way of creating a means to practice the Glass Bead Game in music.
Robin Hartwell had the insight that when this is done at more than one level, the results resemble those of a fractal. Given that the formula compositions are fractal-like, and that Stockhausen also used the idea of spirals throughout his work, one way of looking at LICHT is as a composed fractal music. Zooming in and out, the same structure plays out both minutely at the microscopic level and at large on the macroscopic level, across the range of an entire work. Boiling the musical components down to microscopic levels, and expanding them out to the macro, was one way Stockhausen prevented signal loss and maximized the transmission of his musical information. The super-formula is present on every level and in every moment of LICHT.
Another way LICHT can be seen as a musical system is in how it is structured in component modules. First of all, each of the operas is a work capable of being appreciated and understood on its own, without having to hear or see the other sections. While listening to the whole cycle certainly enhances the experience of the individual parts, those parts can also be enjoyed one at a time, in and of themselves. Each opera, act, and scene is self-sufficient. Even some parts of scenes can be extracted as solitary works. Certain other extra-curricular, or auxiliary, works have also been extrapolated out of the formulas of LICHT and its modular structure. All of these contain the essence of LICHT and give the listener one of many ways of enjoying the various elements of the cycle.
This was all made possible due to the practical aspects of Stockhausen’s life as a composer. After he began LICHT, when he received a commission for a new work from this or that person or cultural institution, prescribed for this or that choir group, string quartet, or other group of instrumentation, he would incorporate the work on that commission into LICHT. It was an elegant solution that allowed him to finish the massive project.
Some examples of modular works that can be extracted from LICHT include Klavierstück XII and Michael's Reise from Donnerstag; Weltraum, an assemblage of the electronic greetings and farewells of Freitag; Kathinka's Chant for flute and electronics, an extract from Samstag; Angel Processions for choir, which comes from Sonntag; Ypsilon for flute and Xi for basset horn from Montag; the electronic layer from the second act of Dienstag, which becomes the piece Oktophonie; and the infamous Helicopter String Quartet, a section from Mittwoch. These are just a few of the pieces he was able to write in a modular fashion to fulfill a commission and thus complete a section of LICHT. Alternately, he was able to adapt an already-written section of LICHT as a module to fulfill a commission and thereby create a smaller chamber-type work.
These smaller modules, extracts and auxiliary works from LICHT represent another fractal like aspect of the cycle as a system. They are separate and yet also a part of the system. The formula and super-formula interact with themselves, alongside the set symbolism of the days of the week, to produce an array of combinations perceived and permutated through Stockhausen’s intuitive imagination. Through this thoroughly disciplined act of creation and applied artistry Stockhausen has shown himself to be a “Magister Ludi” or master of the Glass Bead Game.
He has fused mathematics and music together, and along these strands placed connecting beads from the various religious and mystical traditions of the world. He used traditional correspondences, such as those in Samstag, associated with Saturday, the planet Saturn, and its symbolism of contraction, limitation, and death. In Samstag he wrote the section Kathinka's Gesang as Lucifer's Requiem; thus the mysteries of death become a main feature of this section of the work. In this piece the flautist performs a ritual with six percussionists. The ritual consists of twenty-four exercises based on Stockhausen's study of the Tibetan Book of the Dead. It was written as a chant protecting the soul of the recently departed (in this case Lucifer) by means of musical exercises regularly performed for 49 days after the death of the body, leading the recently deceased into the light of clear consciousness. For these exercises he permutated the Lucifer formula into a showstopper of extended flute techniques and deft virtuosity.
And the piece may really be used by the living, and played for 49 days after the departure of a loved one to help assist them in their afterlife transition.
The entire cycle is filled with this plenitude of subtle correspondences between music, science, and various world cultures. These become the raw data for his applied musical calculus, which dances in an elaborate play upon all these correspondences, inside a defined system, to express in multiplexed forms that which is universal.
After finishing the 29 hours of LICHT, a feat some of his critics never expected him to complete, Stockhausen began writing a series of chamber pieces called Klang, with the intent of writing one for each of the twenty-four hours of the day. Having conceived the musical forces of the days of the week, he was zooming in again to explore the musical forces behind each hour of the day. Formula composition gave him the tool he needed to explore these hours. He had written 21 of the pieces when the cycle was left unfinished by his unexpected death in 2007, when he voyaged forth into the greater harmonies of cosmic space and time.
Read the rest of the Radiophonic Laboratory series.
Other Planets: The Complete Works of Karlheinz Stockhausen 1950–2007, by Robin Maconie, Rowman & Littlefield Publishers, Maryland, 2016.
Ed Chang's website in general has been super helpful in understanding the super-formula. It is a great journey through the Space of Stockhausen.
"Threats and Promises: Lucifer, Hell, and Stockhausen's Sunday from Light" by Robin Hartwell, in Perspectives of New Music 50, nos. 1 & 2 (Winter–Summer): 393–424.
"Into the Middleground: Formula Syntax in Stockhausen's Licht" by Jerome Kohl, in Perspectives of New Music 28, no. 2 (Summer): 262–91.
Shannon wasn’t the only one looking at the way signals were transmitted. The same year he published his breakthrough paper, another mathematician published a book that would leave a lasting impression on a number of different fields, electronic music among them. The man was Norbert Wiener and his book was Cybernetics: Or Control and Communication in the Animal and the Machine. Wiener defined cybernetics as "the scientific study of control and communication in the animal and the machine".
Wiener was a child prodigy. Born to Polish and German Jewish immigrants, Norbert was related on his father's side to Maimonides, the famous rabbi, philosopher, and physician from Al-Andalus. The predisposition to intellectual greatness was hardwired into his system. Norbert’s father Leo was a teacher of Germanic and Slavic languages and an avid reader and book hound who put together an impressive personal library, which his son devoured. His father also had a gift for math and gave his son additional instruction in the subject.
At age 11 Norbert graduated from Ayer High School in Massachusetts and began attending Tufts College, where he received a BA in mathematics at the age of 14. From there he went on to study zoology at Harvard before transferring to Cornell to pursue philosophy, graduating at the ripe old age of 17 in 1911, when his classmates from Ayer were probably just entering college, if they went at all. Then he went back to Harvard, where he wrote a dissertation on mathematical logic comparing the work of Ernst Schröder with that of Bertrand Russell and Alfred North Whitehead. His work showed that ordered pairs could be defined according to elementary set theory. His Ph.D. was awarded before he turned twenty. Later that same year he went to Cambridge to study under Russell, as well as to the University of Göttingen to learn from Edmund Husserl.
After a brief period teaching philosophy at Harvard, Wiener eventually found a position at MIT that would become permanent. In 1926, Wiener returned to Cambridge and Göttingen as a Guggenheim scholar, on a trip that would have important implications for his future work. He spent his time there investigating Brownian motion, the Fourier integral, Dirichlet's problem, harmonic analysis, and the Tauberian theorems.
Harmonic analysis and Brownian motion in particular would go on to have a key role in the development of cybernetics.
Harmonic analysis is a branch off the great tree of math concerned with analyzing and describing periodic and recurrent phenomena in nature, such as the many forms of waves: musical waves, tidal waves, radio waves, alternating current, the motion and vibration of machines. It branched off from the research of the French mathematician Joseph Fourier (1768-1830). Fourier was interested in the conduction of heat and other thermal effects, a trail later followed by Nyquist in his own investigations of thermal noise.
According to the Encyclopedia Britannica the motions of waves “can be measured at a number of successive values of the independent variable, usually the time, and these data or a curve plotted from them will represent a function of that independent variable. Generally, the mathematical expression for the function will be unknown. However, with the periodic functions found in nature, the function can be expressed as the sum of a number of sine and cosine terms.” The sum of these is known as a Fourier series. The determination of the coefficients of these terms became known as harmonic analysis.
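The sum the encyclopedia describes takes a standard textbook form. For a function f(t) with period T, the Fourier series and the coefficients that harmonic analysis determines are:

```latex
f(t) = \frac{a_0}{2}
     + \sum_{n=1}^{\infty}\left[
         a_n \cos\!\left(\frac{2\pi n t}{T}\right)
       + b_n \sin\!\left(\frac{2\pi n t}{T}\right)
       \right],
\qquad
a_n = \frac{2}{T}\int_{0}^{T} f(t)\cos\!\left(\frac{2\pi n t}{T}\right)\,dt,
\qquad
b_n = \frac{2}{T}\int_{0}^{T} f(t)\sin\!\left(\frac{2\pi n t}{T}\right)\,dt.
```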
Brownian motion or movement relates to a variety of physical phenomena where some quantity of substance undergoes small and constant but random fluctuations. When those particles that are subject to Brownian motion are moving inside a given medium, and there is no preferred direction for these random oscillations to go, the particles will over time, spread out evenly in the substance.
Brownian motion is a stochastic process, and harmonic analysis supplies tools for studying such processes. A stochastic process is, at its core, a process that involves the operation of chance: one whose values change in a random way over time. Markov chains are another important form of stochastic process that has been applied to music. Stochastic processes can also be used to study noise, and Wiener was a student of this mathematical noise.
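A Markov-chain melody generator of the sort alluded to here can be sketched in a few lines of Python. The transition table below is invented for illustration; it is not drawn from any real corpus or composer.

```python
import random

# A toy first-order Markov chain over note names. Each entry maps a
# note to the probabilities of the notes that may follow it. These
# probabilities are made up for the sake of the example.
transitions = {
    "C": {"D": 0.5, "E": 0.3, "G": 0.2},
    "D": {"C": 0.4, "E": 0.6},
    "E": {"C": 0.3, "D": 0.3, "G": 0.4},
    "G": {"C": 0.7, "E": 0.3},
}

def generate_melody(start, length, seed=None):
    """Walk the chain, choosing each next note by weighted chance."""
    rng = random.Random(seed)
    note = start
    melody = [note]
    for _ in range(length - 1):
        options = transitions[note]
        note = rng.choices(list(options), weights=list(options.values()))[0]
        melody.append(note)
    return melody

print(generate_melody("C", 8, seed=1))
```

The course of the melody is determined in general by the table but depends on chance in detail, which is exactly the aleatoric quality discussed later in this series.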
Amidst the conflicts of WWII Norbert was called upon to use his prodigious brain for solving technical problems associated with warfare. He attacked the problem of automatic aiming and firing of anti-aircraft guns. This required the development and further branching of even more specialized math. It also introduced statistical methods into the recondite area of control and communications engineering, which in turn led to his formulation of the cybernetics concept.
His concept of cybernetics was eerily close to Claude Shannon’s information theory. What they both had in common was knowledge of the influence of noise and the desire to communicate or find signals in, above, and around the noise. One of the ways Wiener figured out how to do this was through filtering. Enter the Wiener filter. It works by computing statistical estimates of unknown signals using a related signal as an input and filtering that to produce an estimated output. Say a signal has been obscured by the addition of noise. The Wiener filter removes the added noise from the signal to give an estimate of the original signal.
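The core of the Wiener filter can be sketched in the frequency domain: each frequency bin of the noisy signal is scaled by S/(S+N), the ratio of expected signal power to total power. The toy below assumes the clean signal's spectrum is known in advance, which in practice it never is; real applications must estimate both spectra.

```python
import numpy as np

# Build a clean signal, bury it in noise, then recover an estimate.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n)       # clean 5-cycle sine wave
noise = 0.5 * rng.standard_normal(n)         # additive white noise
observed = signal + noise

# Assumed-known power spectra: the signal's actual spectrum, and a
# flat (white) estimate of the noise power.
S = np.abs(np.fft.fft(signal)) ** 2
N = np.full(n, np.mean(np.abs(np.fft.fft(noise)) ** 2))

H = S / (S + N)                              # Wiener gain per frequency bin
estimate = np.real(np.fft.ifft(H * np.fft.fft(observed)))

# The filtered estimate should sit closer to the clean signal than
# the raw observation does.
err_before = np.mean((observed - signal) ** 2)
err_after = np.mean((estimate - signal) ** 2)
print(err_before, err_after)
```

Bins where the signal dominates pass through nearly untouched; bins that are mostly noise are attenuated toward zero, which is the statistical estimation Wiener described.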
Cybernetics is also related to systems theory, and studied in particular the idea of feedback, or a closed signaling loop. Wiener originally referred to the way information or signals affect relationships in a system as “circular causal”. Feedback occurs when some action within the system triggers a change in the environment. The environment in turn effects another change in the system when it feeds back the now transformed signal into the originating source. Wiener, through his study of zoology, saw that these ideas were applicable to biological and social systems, as well as the mechanical ones his research had originally grown out of. Cognitive systems could also be understood in terms of these circular causal chains of action and reaction feeding back on themselves.
Cybernetics’ essential idea of feedback was also directly applicable to the new electronic musical systems defined by the advent of the microphone, amplifier, and speaker. When these devices are connected together in a circuit, audio feedback is one possible result of holding the mic close to the speaker. Everyone has experienced the unintentional noise when a PA is being tested. Musicians quickly adapted the idea, using intentional feedback and distortion (noise on a signal) to give their recordings and live performances a new sound.
Cybernetics is not limited to mapping the flow of information, distorted or otherwise, in and out of systems. It also includes concepts of learning and adaption, social control, connectivity and communication, efficiency, efficacy, and emergence.
The related fields of information theory, cybernetics and systems theory would have huge impacts on music and the arts, as the theories trickled down from places like Bell Labs, the Macy Conferences with their focus on communication across scientific disciplines, and the success of Wiener’s book outside of strictly scientific circles.
The word cybernetics sounds kind of cold and inhuman. It conjures up the chrome clad computerized villains made famous by Doctor Who, the cybermen who speak only in monotone and whose overriding program is to delete organic life. Yet the word cybernetics itself comes from the Greek kybernḗtēs, or "steersman, governor, pilot, or rudder.” Human systems require a guide, someone to steer them. Wiener had picked up the word from the French mathematician and physicist André-Marie Ampère, who coined "cybernétique" in an 1834 essay on science and civil government. Governments and other systems of human invention require steersmen and guides with a firm hand on the rudder to give direction and control the effects of feedback.
The creation of systems is a human trait, and their guidance, via our input, doesn’t have to be cold. It can be done with intuition, insight, and artistic flair. Writing on systems in the world of art for the 1968 Cybernetic Serendipity art and music show at the ICA gallery in London, Jasia Reichardt wrote, "The very notion of having a system in relation to making paintings is often anathema to those who value the mysterious and the intuitive, the free and the expressionistic, in art. Systems, nevertheless, dispense neither with intuition nor mystery. Intuition is instrumental in the design of the system and mystery always remains in the final result."
The Discreet Music of Brian Eno
Designing musical systems can result in extraordinary beauty. In the mid-1960s, while attending Ipswich Art School, Brian Eno had his first encounter with cybernetics. It would go on to have a lasting influence. Under the mentorship of Roy Ascott, who had developed the controversial “Groundcourse” curriculum adopted by a number of other art colleges, Eno absorbed Ascott’s philosophy of systems learning, making mind maps, and playing mental games.
Eno started thinking of the music studio and groups of musicians in terms of cybernetic systems. Making great musical compositions started with designing the parameters, limits, inputs and outputs that would give a composition its ultimate form. Creating these systems and letting them run was how many of his first, and the first, ambient music records were made.
The liner notes for Eno’s 1975 album Discreet Music contain a block diagram of the system he created for the music. He had been given an album of 18th century harp music to listen to while lying in bed in the hospital, where he was recovering from a car accident injury. A friend who had been visiting put the record on for him before she left, but the volume was turned down too low. Outside it was raining and he listened to “these odd notes of the harp that were just loud enough to be heard above the rain.” The experience “presented what was for me a new way of hearing music—as part of the ambience of the environment just as the color of the light and the sound of the rain were parts of that ambience.”
Eno connected this experience to Erik Satie’s idea of “furniture music” that was intended to blend into the ambient atmosphere of the room, and not be something focused on directly. Furniture music could mix and combine with the sounds of forks, knives, tinkling glasses and conversation at a dinner.
After Eno’s listening experience in the hospital he set out to make his own ambient music, setting off a musical cascade and kick-starting a genre that, at the time of this writing, is now forty-five years old.
In the liner notes to Discreet Music, Eno wrote these now famous lines, “Since I have always preferred making plans to executing them, I have gravitated towards situations and systems that, once set into operation, could create music with little or no intervention on my part. That is to say, I tend towards the roles of the planner and programmer, and then become an audience to the results.”
The liner notes also contain a block diagram of the system he set up. Eno had wanted to create a background drone for guitarist Robert Fripp to play along with. He was working with an EMS Synthi AKS with built-in memory and a tape delay system. He kept being interrupted in his musical work by knocks on the door and phone calls. He says, “I was answering the phone and adjusting all this stuff as it ran. I almost made that without listening to it. It was really automatic music.”
Discreet Music started with two melodic phrases of differing lengths played back from the digital recall of the synth. That signal was then run through a graphic equalizer to change its timbre. After the EQ the audio went into an echo unit, and the output of that was recorded to a tape machine. The tape then ran to the take-up reel of a second tape machine, whose output was fed back into the first machine, which recorded the overlapping signals and sounds. When Fripp came by the next day to have a listen, Eno accidentally played the recording back at half-speed. Eno says of the result, “it was probably one of the best things I’d ever done and I didn’t even realize I was doing it at the time.”
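The two-machine tape loop behaves like a delay line with feedback: each moment of sound is the new input plus an attenuated copy of what came out of the loop a fixed time earlier. A digital sketch, with a delay length and feedback gain chosen for illustration rather than taken from Eno's actual setup:

```python
# Simulate the tape loop: output[i] depends on the input plus the
# loop's own output from `delay` samples ago, scaled by `feedback`.
def tape_loop(input_signal, delay, feedback):
    output = []
    for i, x in enumerate(input_signal):
        echo = output[i - delay] if i >= delay else 0.0
        output.append(x + feedback * echo)
    return output

# A single impulse becomes a decaying train of echoes -- the seed of
# the slowly overlapping texture on the record.
impulse = [1.0] + [0.0] * 9
print(tape_loop(impulse, delay=3, feedback=0.5))
```

With feedback below 1.0 the echoes die away gracefully; push it past 1.0 and the loop runs away, which is why the gain between the two machines has to be set with care.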
Autonomous Dynamical Systems
Another example of musical systems in practice comes from the work of David Dunn. David is a composer, sound artist, bioacoustics researcher and an expert at making audio recordings of wildlife. A deep interest in acoustic ecology informs his work. Ecological thinking and systems thinking go hand in hand and this sensibility is present in many of David’s works.
His 2007 album Autonomous Dynamical Systems touches on ecology, fractals and chaos theory, graphic imagery to sound conversions, and feedback loops. The album consists of four compositions. Lorenz from 2005 is a collaboration with chaos scientist James Crutchfield. James has a long history of work in the areas of nonlinear dynamics, solid-state physics, astrophysics, fluid mechanics, critical phenomena and phase transitions, chaos, and pattern formation, having published over 100 papers in his field of mathematics and physics.
The Lorenz attractor was first studied by the meteorologist Edward Lorenz in 1963. He derived the math from a simplified model of convection in the earth's atmosphere, and it is most frequently expressed as a set of three coupled non-linear differential equations. In popular culture the idea of the “butterfly effect” comes from the physical implications of the Lorenz attractor. In a deterministic nonlinear system one small change, even the small disturbances in air made by the flight of a butterfly, can result in huge differences to the system at a later time. This shows that systems can be deterministic and unpredictable at the same time. When the Lorenz attractor is plotted out graphically it has two large interconnected oval shapes resembling a butterfly or a pair of wings.
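The three coupled equations, with Lorenz's classic parameter values, can be integrated with a crude fixed-step sketch. This is only a toy; Crutchfield's MODE software is far more sophisticated.

```python
# Euler integration of the Lorenz system:
#   dx/dt = sigma * (y - x)
#   dy/dt = x * (rho - z) - y
#   dz/dt = x * y - beta * z
def lorenz_trajectory(steps, dt=0.01, x=1.0, y=1.0, z=1.0,
                      sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        points.append((x, y, z))
    return points

# Two trajectories starting a hair apart drift wildly away from each
# other -- the butterfly effect in miniature.
a = lorenz_trajectory(2000)
b = lorenz_trajectory(2000, x=1.000001)
print(a[-1], b[-1])
```

Both runs are fully deterministic, yet the microscopic difference in starting point makes their later states unpredictable from one another, which is the paradox the piece Lorenz turns into sound.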
For the piece Lorenz, David Dunn used a piece of software written by Crutchfield called MODE (Multiple Ordinary Differential Equations) plugged into the interface program OSC (Open Sound Control), a networking protocol that allows synthesizers, computers, and other multimedia devices to communicate. OSC is in turn fed into a sound synthesis program. The sound synthesis program is then fed back into OSC and again into MODE. The entire piece is a feedback loop originating from chaos-controlled sound. As such its structure embodies the very principles it seeks to express as music. Another piece on the album, Nine Strange Attractors from 2006, steps up the game even further in its creative use of mathematics to explore feedback loops.
Another piece uses feedback loops in a different way. Autonomous Systems: Red Rocks from 2003 used environmental field recordings fed into computer systems. Once the recordings are saved in memory, a chaos generator program chooses from among the sounds in a non-linear fashion and plays them back, sometimes electronically transformed, other times not. The composition is done not by performing live, but by setting up and programming the system, then stepping away, sitting back, and listening to the results.
John Cage said, “My compositions arise by asking questions.” The music of systems proceeds from this same curious spirit. When designing new electronic works the composer must begin by asking questions of herself. Then systems can be designed to ask that question in different ways and to find out different answers.
Wobbly and his Smart Phone System
Wobbly, aka Jon Leidecker, a solo artist, member of Negativland, and now host of the radio show Over the Edge after the death of Don Joyce, has also made a very interesting album by working with systems.
Between 2015 and 2018 Wobbly worked on an album called Monitress, released in 2019. He created an innovative system leveraging musical pitch-tracking apps and synthesizers on a group of mobile phones and other mobile devices. Each of the devices was sent an audio signal, which was picked up by the pitch-tracking app and converted to MIDI data used to drive the synth. The resulting sound is then fed into an analog mixer. Once the signal is going into the mixer it can be routed and fed back into another mobile device also running a pitch-tracking app and synth. The resulting effect is a cascade of sound between the devices.
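The pitch-to-MIDI step the apps perform follows the standard equal-tempered mapping, in which MIDI note 69 is A at 440 Hz and each semitone is a factor of the twelfth root of two. A sketch, where the rounding to the nearest semitone is one plausible source of the "fascinatingly wrong" notes Leidecker mentions:

```python
import math

# Standard MIDI tuning: note 69 = A4 = 440 Hz, 12 semitones per octave.
def freq_to_midi(freq, a4=440.0):
    return round(69 + 12 * math.log2(freq / a4))

def midi_to_freq(note, a4=440.0):
    return a4 * 2 ** ((note - 69) / 12)

# A tracker that rounds to the nearest semitone will "hear" a slightly
# flat A as a perfect A, so errors accumulate as devices listen to
# each other around the loop.
print(freq_to_midi(440.0))   # A4
print(freq_to_midi(436.0))   # slightly flat, still reported as A4
print(freq_to_midi(261.63))  # middle C
```

Each device in the cascade re-quantizes whatever it hears to this grid before resynthesizing it, which is why the machines' unison lines drift and mutate rather than copy each other exactly.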
As Jon writes in the liner notes for the album, “Feedback loops similar to acoustic or electrical feedback occur when you close the circle. The pitch-tracking apps are prone to errors, especially when presented with complex multiphonics or polyphonies; they get quite a few notes fascinatingly wrong. But more striking is the audible reality of their listening to each other. Unison lines are an elemental sign of musical intelligence; we are entrained to emotional reactions when hearing multiple voices attempting the same melody. These machines may not meet our current criterion for consciousness, but every audience I’ve played this piece in front of quickly realizes they're not listening to a solo…
The technology used to create these sounds existed before the mobiles, but this music would not have been made on earlier equipment -- it's a result of the relationship developed with a machine that is always present, and always listening. This was the project I dug into as we woke up to the true owners of these tools, a frame to make the relationship between ourselves and our machines audible while we think about the necessary steps to take next.”
The textures on this album are sublime, the kind of thing that could only be heard through this cascade of forces, each triggered by the preceding one and affecting the whole in tandem. Wobbly did do post-production editing of this work, but the initial results he captured once the process was set in motion are where the real magic lies. This is the kind of music that can’t be predicted. It couldn’t be written by a composer note for note. Rather, the job of the composer is to design systems capable of eliciting beauty.
The three examples of systems music explored here are only a few of many. Musical systems are a large category within the new common practice generally. Other ways of thinking about them include modular set-ups, various configurations of test equipment, systems of feedback in the way guitar pedals are arranged, and more. I don’t know if Norbert Wiener ever thought of music as one of the places where cybernetics would take flight. To hear the music made with its principles is an artistic way of exploring the rich ecology of sound.
Read the rest of the Radiophonic Laboratory series.
The Information: a history, a theory, a flood by James Gleick, Pantheon, 2011
A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy Soni and Rob Goodman, Simon & Schuster, 2018
Encyclopedia Britannica: https://www.britannica.com/science/harmonic-analysis
Brian Eno, Discreet Music, Obscure Records, 1975
David Dunn, Autonomous and Dynamical Systems, New World Records 2007
Wobbly, Monitress: https://hausumountain.bandcamp.com/album/monitress
Stockhausen picked up his interest in information theory by way of Werner Meyer-Eppler during his time as a student at Bonn. Meyer-Eppler himself was something of a scientific polymath, having studied mathematics, chemistry, and physics at the University of Cologne in the late 1930s before going to Bonn, where he became a scientific assistant in the physics department, and then a lecturer in experimental physics. After WWII ended, his attention turned with laser-beam focus to the subject of phonetics and speech synthesis. In 1947 Paul Menzerath brought him onto the faculty of the Phonetics Institute of the University of Bonn. It was in this period that Meyer-Eppler started publishing essays on the Voder, the Vocoder, and the Visible Speech Machine. One of his contributions to the field that is still in use today was his work on the development of the electrolarynx.
Information theory made contributions to many fields. Linguistics was one of those fields, where it was influential in studying how frequently words were used, word length, and the speed at which words could be read. Shannon had tested the information-theory principle of redundancy, the amount of wasted space used to transmit a message, by having his wife predict the successive letters in a random crime novel he pulled off his bookshelf. Sometimes redundancy is better when it comes to getting a message across: redundancy is added while communicating over noisy channels as a method of error correction.
Shannon had the insight that this was a baked-in purpose behind the repetition of letters in English. He had also shown that he could use stochastic processes to build something that resembled the English language from scratch.
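Redundancy can be illustrated with a zeroth-order calculation: the entropy of a text's letter distribution compared against the log2(26) bits a 26-letter alphabet could carry per symbol. Real English redundancy is higher still once letter pairs and whole words are accounted for; the short sample phrase below is just for demonstration.

```python
from collections import Counter
from math import log2

# Zeroth-order entropy: treat the text as a bag of letters and measure
# the average information per letter in bits.
def letter_entropy(text):
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return -sum((n / total) * log2(n / total) for n in counts.values())

sample = "communication in the presence of noise"
h = letter_entropy(sample)

# Redundancy: the fraction of capacity "wasted" relative to a code
# that used all 26 letters with equal probability.
redundancy = 1 - h / log2(26)
print(h, redundancy)
```

The gap between the measured entropy and the theoretical maximum is exactly the slack that lets a reader, or Shannon's wife, guess the next letter more often than chance would allow.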
Meyer-Eppler had been following these developments in information theory with special attention to their applications in linguistics and speech. Later, in the 1950s, he became concerned with how statistics and probability, core tools of information theory, might be applied to creating electronic music, as explored in his book Statistic and Psychologic Problems of Sound. In this work Meyer-Eppler introduced the word “aleatoric” into the musical lexicon. According to his definition, “a process is said to be aleatoric ... if its course is determined in general but depends on chance in detail”.
Aleatoric music is made when some element of the composition is left to chance or when a significant portion of how the composition is realized is left up to the performer or performers. Aleatoric composition has a precedent in the dice games of the 18th century. The word itself comes from the Latin alea, meaning dice.
There are many methods for applying aleatoric processes to music. One of the ways Stockhausen tackled it was by using a polyvalent structure, or writing a piece that was open to a number of different interpretations. Klavierstück XI, which he wrote for piano, is an example of such a piece.
The piece is made up of 19 fragments printed on a very large piece of paper. There is no turning of the sheet music. The pianist may start with any fragment they wish and from there continue on to any other fragment they wish to play. It is polyvalent because each performance could begin and end in new places. There is no set musical narrative; it is more like reading a choose-your-own-adventure book, or wandering through a maze or labyrinth which the pianist enters, circumnavigates, and then exits. Each time the pianist may enter the labyrinth from a new entrance and likewise reemerge in a different place.
The pianist shares responsibility with the composer for the eventual shape of any given performance. The possible permutations are vast, yet even in different interpretations it may be heard as the same piece of music, its essential characteristics remaining the same no matter the order in which the fragments are played.
Commenting on the piece the composer said, "Piano Piece XI is nothing but a sound in which certain partials, components, are behaving statistically... If I make a whole piece similar to the ways in which (a complex noise) is organized, then naturally the individual components of this piece could also be exchanged, permutated, without changing its basic quality."
Considered as a whole Piece XI will sound the same even though every time it is played it will sound different. It is a system unto itself, and as a system, even when the component parts are rearranged in the order they are played it is still the same system, and will still sound like itself. Listened to statistically the musical values remain the same.
Stockhausen would go on to use polyvalent form again and again. In his percussion piece Zyklus (Cycle) from 1959 the score is printed as a spiral and the performer may start anywhere within the spiral he or she chooses. Furthermore, they may play the piece from left to right or right to left. The piece is finished when the player reaches the original starting point. In the performance space the cycle is shown again visually with the percussion pieces laid out in a circle with the performer moving around them in the manner determined by a chosen starting point.
Zyklus also shows the amazing diversity of possible interpretations demonstrated before in Piece XI. The interpretation of the score, however, is a bit more closed. On one side of the score the music becomes increasingly aleatoric, giving more freedom to the player in how it is interpreted. On the other side of the spiral the composition is exactly fixed and predetermined. Played one way it moves from fixed to open; in the other direction, from open to fixed.
Stockhausen was obsessed with cycles, specifically cycles of time. His mid-seventies composition Tierkreis (Zodiac) consists of twelve melodies, one for each of the twelve zodiac signs. Originally written for custom-made music boxes, Tierkreis can be played on any melody instrument and performed in a number of different ways. For the purpose here, a complete performance begins with the melody for the zodiac sign corresponding to the day the performance is held. For instance, if the performance were held on August 22 the performers would begin with the Leo melody and proceed through Virgo, Libra, and the rest until they return to the starting melody of Leo. Each melody is played at least three times and may be improvised upon. This gives considerable variation to individual performances. Further variations are specified by the composer.
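The date-to-melody rule described above can be sketched in code. The sign boundary dates below are common approximate values, since they vary by a day or so between sources:

```python
import datetime

# (sign, (month, day) on which it begins), in zodiac order.
SIGNS = [
    ("Capricorn", (12, 22)), ("Aquarius", (1, 20)), ("Pisces", (2, 19)),
    ("Aries", (3, 21)), ("Taurus", (4, 20)), ("Gemini", (5, 21)),
    ("Cancer", (6, 21)), ("Leo", (7, 23)), ("Virgo", (8, 23)),
    ("Libra", (9, 23)), ("Scorpio", (10, 23)), ("Sagittarius", (11, 22)),
]

def start_sign(date):
    md = (date.month, date.day)
    if md >= (12, 22) or md < (1, 20):
        return "Capricorn"  # Capricorn wraps around the year boundary
    for name, begins in reversed(SIGNS[1:]):
        if md >= begins:
            return name

def melody_order(date):
    """The twelve melodies in performance order for a given date."""
    names = [name for name, _ in SIGNS]
    i = names.index(start_sign(date))
    return names[i:] + names[:i]

# An August 22 performance opens with Leo and circles through the
# remaining eleven signs before returning to it.
order = melody_order(datetime.date(2024, 8, 22))
print(order)
```

The rotation mirrors the circular logic of the piece itself: no melody is privileged, only the entry point into the cycle changes with the calendar.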
In his chamber opera Sirius written a few years later the Tierkreis melodies are employed again in a section of the piece called The Wheel. Here the music may be heard in four different ways, depending on the season it is performed. If played in the Winter the section starts with the melody for Capricorn, if in the Spring with Aries, Summer starts with Cancer, and Autumn with Libra.
In all of these cyclical works an echo of the tape loop may be heard. Stockhausen had worked with tape loops extensively in his piece Kontakte, using them to show relationships between pitch, timbre, and the way musical events can be perceived in time and space through the process of speeding things up or slowing them down. I wonder if, besides the strong grounding Stockhausen had in religion, philosophy, and science, the eternal return and recurrence of the tape loop at all framed his cosmic conception of the vast cycles of time.
The cycles continued in his magnum opus LICHT (Light): Die sieben Tage der Woche (The Seven Days of the Week). Written between 1977 and 2003, it is a cycle of seven operas, one for each of the seven days of the week. Stockhausen described the work as an “eternal spiral”, considering there to be “neither end nor beginning to the week.” Clocking in at a total duration of 29 hours, deft intricacies exist within the piece on micro and macro scales, and many volumes have already been and will continue to be written about it. Within the broad palette afforded by an opera cycle longer than Wagner’s Ring, Stockhausen was able to play the role of a Magister Ludi, or master of the Glass Bead Game. LICHT is a system, and within that system Stockhausen playfully and masterfully displayed with pyrotechnic virtuosity a comprehensive knowledge of combinatorial and permutative arts as applied to music.
These arts of combination were a central component of the Glass Bead Game as played in the novel.
To show how all of these interlocking parts fit together the basic structure of the opera must be examined. And to understand LICHT as a system a slight change of lanes onto the parallel track of Norbert Wiener and his theory of cybernetics is in order.
Read the rest of the Radiophonic Laboratory Series.
Other Planets: The Complete Works of Karlheinz Stockhausen 1950–2007, by Robin Maconie, Rowman & Littlefield Publishers, Maryland, 2016.
Klavierstück XI essay by Ed Chang:
The Glass Bead Game by Hermann Hesse, translated by Clara and Richard Winston, Holt, Rinehart and Winston, 1990
From the ice-cold farms and fields of Michigan to the halls of MIT and then onwards to Bell Labs at Murray Hill, Claude Shannon was a mathematical maverick and inveterate tinkerer. In the 1920s, in those places where the phone company had not deigned to bring its network, around three million farmers built their own by connecting telegraph keys to the barbed-wire fences that stretched between properties. As a young boy Shannon rigged up one of these “farm networks” so he and a friend who lived half a mile away could talk to each other at night in Morse code. He was also the local kid people in town would bring their radios to when they needed repair, and he got them to work. He had the knack.
He also had an aptitude for the more abstract side of math, and his mind could handle complex equations with ease. At the age of seventeen he was already in college at the University of Michigan and had published his first work in an academic journal, a solution to a math problem presented in the pages of the American Mathematical Monthly. He did a double major and graduated with degrees in electrical engineering and mathematics, then headed off to MIT for his master’s.
While there he got under the wing of Vannevar Bush. Vannevar had followed in the footsteps of Lord Kelvin, who had created one of the world’s first analog computers, the harmonic analyzer, used to measure the ebb and flow of the tides. Vannevar’s differential analyzer was a huge electromechanical computer the size of a room. It solved differential equations by integration, using wheel-and-disc mechanisms to perform the integration.
At school he was also introduced to the work of mathematician George Boole, whose 1854 book on algebraic logic The Laws of Thought laid down some of the essential foundations for the creation of computers. George Boole had in turn taken up the system of logic developed by Gottfried Wilhelm Leibniz. Might Boole have also been familiar with Leibniz’s book De Arte Combinatoria? In this book Leibniz proposed an alphabet of human thought, and was himself inspired by the Ars Magna of Ramon Lull. Leibniz wanted to take the Ars Magna, or “ultimate general art” developed by Lull as a debating tool that helped speakers combine ideas through a compilation of lists, and bring it closer to mathematics and turn it into a kind of calculus. Shannon became the inheritor of these strands of thought, through their development in the mathematics and formal logic that became Boolean algebra.
Between working with Bush’s differential analyzer and his study of Boolean algebra, Shannon was able to design switching circuits. This became the subject of his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits.
Shannon was able to prove his switching circuits could be used to simplify the complex and baroque system of electromechanical relays used in AT&T’s routing switches. Then he expanded his concept and showed that his circuits could solve any Boolean algebra problem. He finalized the work with a series of circuit diagrams.
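The insight at the heart of the thesis is easy to demonstrate: switches wired in series behave as AND, and switches wired in parallel behave as OR, so an identity of Boolean algebra is a statement that two relay circuits are interchangeable. A sketch checking one such identity over every possible input:

```python
from itertools import product

# Shannon's mapping from relay circuits to Boolean algebra: current
# flows through two switches in series only if both are closed (AND),
# and through a parallel pair if either is closed (OR).
def series(a, b):
    return a and b

def parallel(a, b):
    return a or b

# The distributive law a AND (b OR c) == (a AND b) OR (a AND c) says
# that one switch feeding a parallel pair can replace two series pairs
# wired in parallel -- a circuit simplification proved on paper.
for a, b, c in product([False, True], repeat=3):
    assert series(a, parallel(b, c)) == parallel(series(a, b), series(a, c))
print("circuits equivalent for all inputs")
```

Exhaustively checking every input combination is exactly what a truth table does, and it is how an algebraic simplification translates into fewer physical relays.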
In writing his paper Shannon took George Boole’s algebraic insights and made them practical. Electrical switches could now implement logic. It was a watershed moment that established the integral concept behind all electronic digital computers. Digital circuit design was born.
Next he had to get his PhD. It took him three more years, and his subject matter showed the first signs of the multidisciplinary inclination that would later become a dominant feature of information theory. Vannevar Bush compelled him to go to Cold Spring Harbor Laboratory to work on his dissertation in the field of genetics. For Vannevar the logic was that if Shannon’s algebra could work on electrical relays it might also prove to be of value in the study of Mendelian heredity. His research in this area resulted in his work An Algebra for Theoretical Genetics, for which he received his PhD in 1940.
The work proved to be too abstract to be useful and during his time at Cold Spring Harbor he was often distracted. In a letter to his mentor Vannevar he wrote, “I’ve been working on three different ideas simultaneously, and strangely enough it seems a more productive method than sticking to one problem… Off and on I have been working on an analysis of some of the fundamental properties of general systems for the transmission of intelligence, including telephony, radio, television, telegraphy, etc…”
With a doctorate under his belt Shannon went on to the Institute for Advanced Study in Princeton, New Jersey where his mind was able to wander across disciplines and where he rubbed elbows with other great minds, including on occasion, Albert Einstein and Kurt Gödel. He discussed science, math and engineering with Hermann Weyl and John Von Neumann. All of these encounters fed his mind.
It wasn’t long before Shannon went elsewhere in New Jersey, to Bell Labs. There he got to rub elbows with other great minds such as Thornton Fry and Alan Turing. His prodigious talents were also being put to work for the war effort.
It started with a study of noise. During WWII Shannon had worked on the SIGSALY system that was used for encrypting voice conversations between Franklin D. Roosevelt and Winston Churchill. It worked by sampling the voice signal fifty times a second, digitizing it, and then masking it with a random key that sounded like the circuit noise so familiar to electrical engineers.
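The masking scheme can be sketched in a toy form: quantize the signal, add a shared random key to each sample modulo the number of quantization levels, and subtract the same key at the far end. The sample values here are invented for illustration; the real SIGSALY hardware was far more elaborate.

```python
import random

def mask(samples, key, levels=6):
    # Add the secret key to each quantized sample, modulo the number of
    # quantization levels; the result resembles random circuit noise.
    return [(s + k) % levels for s, k in zip(samples, key)]

def unmask(masked, key, levels=6):
    # The receiving end subtracts the identical key to recover the voice.
    return [(m - k) % levels for m, k in zip(masked, key)]

rng = random.Random(1)
voice = [3, 5, 1, 4, 2, 0]                   # invented quantized samples
key = [rng.randrange(6) for _ in voice]      # shared one-time key

scrambled = mask(voice, key)
assert unmask(scrambled, key) == voice       # round-trips exactly
```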
Shannon hadn’t designed the system, but he had been tasked with trying to break it, like a hacker, to see what its weak spots were, to find out if it was an impenetrable fortress that could withstand the attempts of an enemy assault.
Alan Turing was also working at Bell Labs on SIGSALY. The British had sent him over to also make sure the system was secure. If Churchill was to be communicating on it, it needed to be uncrackable. During the war effort Turing got to know Claude. The two weren’t allowed to talk about their top secret projects, cryptography, or anything related to their efforts against the Axis powers but they had plenty of other stuff to talk about, and they explored their shared passions, namely, math and the idea that machines might one day be able to learn and think.
Are all numbers computable? This was a question Turing asked in his famous 1936 paper On Computable Numbers. He had shown the paper to Shannon. In it Turing defined calculation as a mechanical procedure or algorithm.
This paper got the pistons in Shannon’s mind firing. Alan had said, “It is always possible to use sequences of symbols in the place of single symbols.” Shannon was already thinking of the way information gets transmitted from one place to the next. Turing used statistical analysis as part of his arsenal when breaking the Enigma ciphers. Information theory in turn ended up being based on statistics and probability theory.
The meeting of these two preeminent minds was just one catalyst for the creation of the large field and sandbox of information theory. Important legwork had already been done by other investigators who had made brief excursions into the territory later mapped out by Shannon.
Telecommunications in general already contained within it many ideas that would later become part of the theory’s core. Starting with telegraphy and Morse code in the 1830s, common letters were expressed with the shortest signals, as in E, a single dot. Letters not used as often were given longer expressions, such as B, a dash and three dots. The whole idea of lossless data compression is embedded as a seed pattern within this system of encoding information.
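That seed pattern is easy to demonstrate. Weighting each Morse signal by a rough English letter frequency (the figures below are approximate, illustrative values for a handful of letters) shows the average signal length beating a scheme that gives every letter the longest code:

```python
# A few letters with their Morse signals and rough relative frequencies
# in English text (illustrative values, not a complete table).
morse = {'E': '.', 'T': '-', 'A': '.-', 'N': '-.', 'B': '-...', 'Q': '--.-'}
freq = {'E': 0.127, 'T': 0.091, 'A': 0.082, 'N': 0.067, 'B': 0.015, 'Q': 0.001}

total = sum(freq.values())

# Average symbols per letter when short codes go to common letters...
avg_morse = sum(freq[c] / total * len(code) for c, code in morse.items())

# ...versus a fixed-length scheme where every letter gets the longest code.
avg_fixed = max(len(code) for code in morse.values())

assert avg_morse < avg_fixed  # variable-length coding compresses
```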
In 1924 Harry Nyquist published the exciting Certain Factors Affecting Telegraph Speed in the Bell System Technical Journal. Nyquist’s research was focused on increasing the speed of a telegraph circuit. One of the first things an engineer runs into when working on this problem is how to transmit the maximum amount of intelligence on a given range of frequencies without causing interference in the circuit or others that it might be connected to. In other words, how do you increase the speed and amount of intelligence without adding distortion or noise, or creating spurious signals?
In 1928, Ralph Hartley, also at Bell Labs, wrote his paper Transmission of Information. He made it explicit that information was a measurable quantity. Information could only reflect the ability of the receiver to distinguish that one sequence of symbols had been intended by the sender rather than any other, that the letter A means A and not E.
Jump forward another decade to the invention of the vocoder. It was designed to use less bandwidth, compressing the voice of the speaker into less space. Now that same technology is used in cellphones as codecs that compress the voice so more lines of communication can fit on the phone companies’ allocated frequencies.
WWII had a way of producing scientific side effects, discoveries that would break on through to affect civilian life after the war. While Shannon worked on SIGSALY and other cryptographic work he continued to tinker on other projects. Shannon’s paper was one of the things he tinkered with, and it had profound side effects. Twenty years after Hartley addressed the way information is transmitted, Shannon stated it this way, "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
In addition to the idea of clear communication across a channel, information theory also brought the following ideas into play:
-The Bit, or binary digit. One bit is the information entropy of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.
-The Shannon Limit: A formula for channel capacity. This is the speed limit for a given communication channel.
-Within that limit there must always be techniques for error correction that can overcome the noise level on a given channel. A transmitter may have to send more bits to a receiver at a slower rate but eventually the message will get there.
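Both ideas reduce to short formulas: the entropy of a binary variable is H(p) = −p log₂ p − (1−p) log₂(1−p), and the Shannon–Hartley theorem puts the capacity of a noisy analog channel at C = B log₂(1 + S/N). A minimal sketch (the telephone-line figures in the comment are illustrative, not from the source):

```python
import math

def binary_entropy(p):
    """Information entropy, in bits, of a binary variable with P(1) = p."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def channel_capacity(bandwidth_hz, snr):
    """Shannon-Hartley limit in bits/second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr)

assert binary_entropy(0.5) == 1.0  # a fair coin flip carries exactly one bit

# Example: a 3 kHz line at a signal-to-noise ratio of 1000 (30 dB)
# tops out just under 30 kbit/s, no matter how clever the modulation.
capacity = channel_capacity(3000, 1000)
assert 29000 < capacity < 30000
```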
His theory was a strange attractor in a chaotic system of noisy information. Noise itself tends to bring diverse disciplinary approaches together, interfering in their constitution and their dynamics. Information theory, in transmitting its own intelligence, has in its own way, interfered with other circuits of knowledge it has come in contact with.
A few years later psychologist and computer scientist J.C. R. Licklider said, “It is probably dangerous to use this theory of information in fields for which it was not designed, but I think the danger will not keep people from using it.”
Information theory encompasses every other field it can get its hands on. It’s like a black hole, and everything in its gravitational path gets sucked in. Formed at the spoked crossroads of cryptography, mathematics, statistics, computer science, thermal physics, neurobiology, information engineering, and electrical engineering it has been applied to even more fields of study and practice: statistical inference, natural language processing, the evolution and function of molecular codes (bioinformatics), model selection in statistics, quantum computing, linguistics, plagiarism detection. It is the source code behind pattern recognition and anomaly detection, two human skills in great demand in the 21st century.
I wonder if Shannon knew when he wrote ‘A Mathematical Theory of Communication’ for the 1948 issue of the Bell System Technical Journal that his theory would go on to unify, fragment, and spin off into multiple disciplines and fields of human endeavor, music just one among a plethora.
Yet music is a form of information. It is always in formation. And information can be sonified and used to make music. Raw data becomes audio dada. Music is communication and one way of listening to it is as a transmission of information. The principles Shannon elucidated are a form of noise in the systems of world knowledge, and highlight one way of connecting different fields of study together. As information theory exploded it was quickly picked up as a tool among the more adventurous music composers.
Information theory could be at the heart of making the fictional Glass Bead Game of Herman Hesse a reality. Herman Hesse also dropped several hints and clues in his work that connected it with the same thinkers whose work served as a link to Boolean algebra, namely Athanasius Kircher, Lull and Leibniz who were all practitioners and advocates of the mnemonic and combinatorial arts. Like its predecessors, Information Theory is well suited to connecting the spaces between different fields. In Hesse’s masterpiece the game was created by a musician as a way of “represent[ing] with beads musical quotations or invented themes, could alter, transpose, and develop them, change them and set them in counterpoint to one another.” After some time passed the game was taken up by mathematicians. “…the Game was so far developed it was capable of expressing mathematical processes by special symbols and abbreviations. The players, mutually elaborating these processes, threw these abstract formulas at one another, displaying the sequences and possibilities of their science.”
Hesse goes on to explain, “At various times the Game was taken up and imitated by nearly all the scientific and scholarly disciplines, that is, adapted to the special fields. There is documented evidence for its application to the fields of classical philology and logic. The analytical study had led to the reduction of musical events to physical and mathematical formulas. Soon after philology borrowed this method and began to measure linguistic configurations as physics measured processes in nature. The visual arts soon followed suit, architecture having already led the way in establishing the links between visual art and mathematics. Thereafter more and more new relations, analogies, and correspondences were discovered among the abstract formulas obtained this way.”
In the next sections I will explore the way information theory was used and applied in the music of Karlheinz Stockhausen.
Read the rest of the Radiophonic Laboratory series.
A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy Soni and Rob Goodman, Simon & Schuster, 2018
The Information: a history, a theory, a flood by James Gleick, Pantheon, 2011
The Glass Bead Game by Herman Hesse, translated by Clara and Richard Winston, Holt, Rinehart and Winston, 1990
Information Theory and Music by Joel Cohen, Behavioral Science, 7:2
Information Theory and the Digital Age by Aftab, Cheung, Kim, Thakkar, Yeddanapudi
Logic and the art of memory: the quest for a universal language, by Paolo Rossi, The Athlone Press, University of Chicago, 2000.
“There is more in man and in music than in mathematics, but music includes all that is in mathematics.”—Peter Hoffman
Infotainment is usually thought of as light entertainment peppered with superficial “facts” and forgettable news. Yet another kind of infotainment exists, a musical kind that is based on mathematical algorithms. It is true entertainment that is filled with true information and though it is mathematically modeled none of it is fake.
In the twentieth century interest in the multidisciplinary fields of Information Theory and Cybernetics led to dizzy bursts of creativity when their ideas were applied to making new music. These disciplines applied rigorous math to the study of communication systems and how a signal transmitted from one person can cut through the noise of other spurious signals to be received by another person. They also made explicit the role of feedback inside of a system, how signals can amplify themselves and trigger new signals. All of this was studied with complex equations and formulas.
Yet there is nothing new about the relationship between music and math.
Algorithmic music has been made for centuries. It can be traced all the way back to Pythagoras, who thought of music and math as inseparable. If music can be formalized in terms of numbers, music can also be formalized as information or data. The “data” the ancients used to drive their compositions was the movement of the stars. Ptolemy is known to us most for his geocentric view of the cosmos and the ordered spheres the celestial bodies traveled on. Besides being an astronomer Ptolemy was also a systematic musical theorist. He believed that math was the basis for musical intervals and he saw those same intervals at play in the spacing of heavenly bodies, each planet and body corresponding to certain modes and notes.
Ptolemy was just one of many who believed in the reality of the music of the spheres. Out of these ancient Greek investigations into the nature of music and the cosmos came the first musical systems. The musician who used them was thus a mediator between the cosmic forces of the heavens above and the life of humanity here below.
Western music went through myriad changes across the intervening centuries after Ptolemy. World powers rose and fell, new religions came into being. Out of the mystical monophonic plainchant uttered by Christian monks in candlelit monasteries polyphony arose, and it called for new rules and laws to govern how the multiple voices were to sing together. This was called “canonic” composition. A composer in this era (15th century) would write a line for a single voice. The canonic rule gave the additional singers and voices the necessary instruction. For instance, one rule might be for a second voice to start singing the melody begun by the first voice after a set amount of time. Other rules would denote inversions, retrograde movement, or other practices as applied to the music.
From this basis the rules, voices, and number of instruments were enlarged through the Renaissance until the era of “Common Practice”, roughly 1650 to 1900. This period encompassed baroque music, and the classical, romantic and impressionist movements. The 20th and 21st centuries are now giving birth to what Alvin Curran has called the New Common Practice.
In the Common Practice Era tonal harmony and counterpoint reigned supreme, and a suite of rhythmic and durational patterns gave form to the music. These were the “algorithmic” sandboxes composers could play in.
The New Common Practice, according to Curran encompasses, “the direct unmediated embracing of sound, all and any sound, as well as the connecting links between sounds, regardless of their origins, histories or specific meanings; by extension, it is the self guided compositional structuring of any number of sound objects of whatever kind sequentially and/or simultaneously in time and in space with any available means.” I’ve begun to think of this New Common Practice as embracing the entire gamut of 20th and 21st century musical practices: serialism, atonality, musique concrete, electronics, solo and collective improvisation, text pieces, and the rest of it.
One vital facet of the New Common Practice is chance operations, or the use of randomizing procedures to create compositions. Chance operations have a direct relation to information theory, but this approach can already be seen making cultural inroads in the 18th century when games of chance had a brief period of popularity among composers and the musically and mathematically literate. These are a direct precursor to the deeper algorithmic musical investigations that flourished in the 20th century.
Much of this original algorithmic music work was done the old school way, with pencil, sheets of paper, and tables of numbers. This was the way composers plotted voice-leading in Western counterpoint. Chance operations have also been used as one way of making algorithmic music, such as the Musikalisches Würfelspiel or musical dice game, a system that used dice to randomly generate music from tables of pre-composed options. These games were quite popular throughout Western Europe in the 18th century and a number of different versions were devised. Some didn’t use dice but just worked on the basis of choosing random numbers.
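The mechanics of such a dice game fit in a few lines: a lookup table maps every possible two-dice total to a pre-composed measure, and the piece is assembled one roll at a time. The measure labels below are invented placeholders, not taken from any historical game, which would have a separate table for each bar.

```python
import random

# A toy Musikalisches Wuerfelspiel: every two-dice total (2 through 12)
# selects one of eleven pre-composed measures. The labels are invented
# placeholders standing in for the historical tables of written-out bars.
MEASURE_TABLE = {total: f"measure_{total}" for total in range(2, 13)}

def roll_minuet(num_bars=16, rng=None):
    """Assemble a piece by rolling two dice once for each bar."""
    rng = rng or random.Random()
    bars = []
    for _ in range(num_bars):
        total = rng.randint(1, 6) + rng.randint(1, 6)  # roll two dice
        bars.append(MEASURE_TABLE[total])
    return bars

piece = roll_minuet(rng=random.Random(42))
assert len(piece) == 16  # a sixteen-bar minuet, different on every seeding
```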
In his paper on the subject Stephen Hedges wrote how the middle class in Western Europe were at the time enamored with mathematics, a pursuit as much at home in the parlors of the people as in the classroom of professors. "In this atmosphere of investigation and cataloguing, a systematic device that would seem to make it possible for anyone to write music was practically guaranteed popularity.”
The earliest known example was created by Johann Philipp Kirnberger with his "The Ever-Ready Minuet and Polonaise Composer" in 1757. C. P. E. Bach came out with his own musical dice game, "A method for making six bars of double counterpoint at the octave without knowing the rules", the following year, in 1758. In 1780 Maximilian Stadler published "A table for composing minuets and trios to infinity, by playing with two dice". Mozart was even thought to have gotten in on the dice game in 1792 when an unattributed version made an appearance from his music publisher a year after the composer’s death. This has not been authenticated to be by the maestro’s hand, but as with all games of possibility, there is a chance.
These games may have been one of the many inspirations behind The Glass Bead Game by Herman Hesse. This novel was one of the primary literary inspirations and touchstones for the young Karlheinz Stockhausen. The Glass Bead Game portrays a far future culture devoted to a mystical understanding of music. It was at the center of the culture of the Castalia, that fictional province or state devoted to the pursuit of pure knowledge.
As Robin Maconie put it the Glass Bead Game itself appears to be “an elusive amalgam of plainchant, rosary, abacus, staff notation, medieval disputation, astronomy, chess, and a vague premonition of computer machine code… In terms suggesting more than a passing acquaintance with Alan Turing’s 1936 paper ‘On Computable Numbers’, the author described a game played in England and Germany, invented at the Musical Academy of Cologne, representing the quintessence of intellectuality and art, and also known as ‘Magic Theater’.”
Hesse wrote his book between 1931 and 1943. The interdisciplinary game at the heart of the book prefigures Claude Shannon’s explosive Information Theory which was established in his 1948 paper A Mathematical Theory of Communication. His paper in turn bears a debt to Alan Turing, whom Shannon met in 1943. Norbert Wiener also published his work on Cybernetics the same year as Shannon. All of these ideas were bubbling up together out of the minds of the leading intellectuals of the day. Ideas about computable numbers, the transmission of information, communication, and thinking in systems, all of which would give artists practical tools for connecting one field to another as Hesse showed was possible in the fictional world of Castalia.
Robin Maconie again had the insight to see the connection, noting the way Alan Turing visualized “a universal computing machine as an endless tape on which calculations were expressed as a sequence of filled or vacant spaces, not unlike beads on a string”.
As the Common Practice era of western music came to an end at the close of the 19th century, the mathematically inclined serialism came into its own, and as the decades wore on games of chance made a resurgence, defining much of the music of the 20th century. With the advent of computers the paper and pencil method has taken a temporary backseat in favor of methods that introduce programmed chance operations.
Composers like John Cage took to the I Ching with as much tenacity as the character Elder Brother did in Hesse’s book. Karlheinz Stockhausen meanwhile used his music as a means to make connections between myriad subjects and to create his own unique ‘Magic Theater’. Cybernetics and Information Theory each contributed to the thinking of these and other composers.
Dice Music in the Eighteenth Century by Stephen Hedges, Music and Letters 59: 180–87 (quotation on pp. 184–185)
Conceptualizing music: cognitive structure, theory and analysis, by Lawrence M. Zbikowski, Oxford, 2002
The New Common Practice by Alvin Curran
Other planets: the complete works of Karlheinz Stockhausen 1950–2007 by Robin Maconie, Rowman & Littlefield Publishers, 2016
A set of musician’s dice has been made that offers up numerous possibilities for the practicing musician. Using random processes doesn’t have to be just for avant-garde composers anymore!
"The Musician’s Dice are patented, glossy black 12-sided dice, engraved in silver with the chromatic scale. They can be used in any number of ways – they bring the element of chance into the musical process. They're great for composing Aleatory and 12 tone-music, and as a basis for improvisation – they’re really fun in a jam session. They also make an effective study tool: they can be used as “musical flash cards” when learning harmony, and their randomness makes for fresh and challenging exercise in sight-singing and ear training. Plus, they look really cool on the coffee table, and give you a chance to throw around words like "aleatory.""
Below two musicians play around with using these dice.
Read the rest of the Radiophonic Laboratory series.
One of the key researchers and musicians exploring the new frontiers of science and music at Bell Labs was Laurie Spiegel. She was already an accomplished musician when she started working with interactive compositional software on the computers at Bell at the age of twenty-eight. The year was 1973.
Laurie brought her restless curiosity and ceaseless inquiry with her to Bell Labs. She was the kind of person who could see the creative potential in the new tools the facility was creating and make something timeless. Her skill and ability in doing so was something she had prepared herself for through a scholar’s devotion to musical practice and study.
She was interested in the stringed instruments, the ones you strum and pluck. She picked up guitar, banjo, and mandolin for starters and learned to play them all by ear in her teens. She excelled in high school and was able to graduate early and get a jump start on a more refined education. Shimer College had an early entrance program and she made the cut. With Shimer as a springboard she got into their study abroad program and left her native Chicago to join the scholars at Oxford University. While pursuing her degree in Social Sciences she decided she had better teach herself Western music notation. It was essential if she was to start writing down her own compositions. She managed to stay on at Oxford for an additional year after her undergraduate degree was completed. In between classes she would commute to London for lessons with composer and guitarist John W. Duarte, who fleshed out her music theory and composition.
She was no slacker.
Her devotion to music continued to flourish when she came back to the States. In New York she worked briefly on documentary films in the field of social science, but the drive to compose music pushed her back onto the path of continuing education. So she headed back to school again, at Juilliard, going for a Masters in Composition. Hall Overton, Emmanuel Ghent and Vincent Persichetti were some of her teachers between 1969 and 1972. Jacob Druckman was another; she became his assistant and ended up following him to Brooklyn College. While there she also managed to find some time to research early American music under H. Wiley Hitchcock before completing her MA in 1975.
Laurie was no stranger to work, and to making the necessary sacrifices so she could achieve her aims and full artistic potential. Laurie’s thinking is multidimensional, and her art multidisciplinary. Working with moving images was a natural extension of her musicality. She supported herself in the 70s in part through soundtrack composition at Spectra Films, Valkhn Films, and the Experimental TV Lab at WNET (PBS). TV Lab provided artists with equipment to produce video pieces through an artist-in-residence program. Laurie held that position in 1976 and composed series music for the TV Lab's weekly "VTR—Video and Television Review". She also did the audio sound effects for director David Loxton’s SF film The Lathe of Heaven, based on the novel by Ursula K. Le Guin, and produced for PBS by WNET.
Speaking of the Experimental TV Lab she said, "They had video artists doing really amazing stuff with abstract video and image processing. It was totally different from conventional animation of the hand-drawn or stop-motion action kind. Video was much more fluid and musical as a form."
Going to school and scoring for film and television wasn’t enough to satisfy Laurie’s endless curiosity. Besides playing guitar, she’d been working with analog modular instruments by Buchla, Electrocomp, Moog and Ionic/Putney. After a few years of experimentation she outgrew these synths and started seeking something that had the control of logic and a larger capacity for memory. This led Laurie to the work being done with computers and music at Bell Labs in Murray Hill. At first she was a resident visitor at Bell Labs, someone who got the privilege of working and researching there, but not the privilege of being on Ma Bell’s payroll.
Laurie had already been playing the ALICE machine when the Bell Telephone Company needed to film someone playing it for the 50th anniversary of the Jazz Singer. She had already become something of a fixture at Murray Hill so the company hired her as a musician. Not that the engineers at Bell who created the musical instruments were unmusical, but they were engineers. Laurie had the necessary background as a composer and an interest in how technology could open up musical expression; she was the perfect fit.
In 1973 while still working on her Masters she started getting her GROOVE on at Bell Labs, using the system developed by Max Mathews and Richard Moore.
GROOVE proved to be the perfect vehicle for expressing Spiegel’s creative ideas. While Max Mathews was bouncing around between a dozen different departments, Laurie was putting the system through its paces at Murray Hill.
In the liner notes to the reissue of her Expanding Universe album created with GROOVE she wrote, “Realtime interaction with sound and interactive sonic processes were major factors that I had fallen in love with in electronic music (as well as the sounds themselves of course), so non-realtime computer music didn’t attract me. The digital audio medium had both of the characteristics I so much wanted, but it was not yet possible to do much at all in real time with digital sound. People using Max’s Music V were inputting their data, leaving the computer running over the weekend, and coming back Monday to get their 30 seconds of audio out of the buffer. I just didn’t want to work that way.
But GROOVE was different. It was exactly what I was looking for. Instead of calculating actual audio signal, GROOVE calculated only control voltage data, a much lighter computational load. That the computer was not responsible for creating the audio signal made it possible for a person to interact with arbitrarily complex computer-software-based logic in real time while listening to the actual musical output. And it was possible to save both the software and the computed time functions to disk and resume work where we left off, instead of having to start all over from scratch every time or being limited to analog tape editing techniques ex post facto of creating the sounds in a locked state on tape.”
RECORD IN A BOTTLE
Laurie’s most famous work is also the one most likely to be heard by space aliens. It was a realization of Johannes Kepler’s Harmonices Mundi using the GROOVE system and was the first track featured on the golden phonograph records placed aboard the Voyager spacecraft launched in 1977. The records contain sounds and images intended to portray the vast diversity of life and culture on planet Earth. The records form a kind of time capsule, a message in a bottle sent off into interstellar space.
Carl Sagan chaired the committee that determined what contents should be put on the record. He said “The spacecraft will be encountered and the record played only if there are advanced space-faring civilizations in interstellar space, but the launching of this 'bottle' into the cosmic 'ocean' says something very hopeful about life on this planet."
A message in a bottle isn’t the most efficient way of communicating if your purpose is to reach a specific person in a short amount of time. If, however, you trust fate or providence and the natural waves of the ocean to guide the message to whomever it is meant to be received by, it can be oracular.
Like many musicians before her Laurie had been fascinated by the Pythagorean dream of a music of the spheres. When she set about to realize Kepler’s 17th century speculative composition, she had no idea her music would actually be traveling through the spheres. Kepler’s Harmonices Mundi was based on the varying speeds of orbit of the planets around the sun. He wanted to be able to hear “the celestial music that only God could hear” as Spiegel said.
"Kepler had written down his instructions but it had not been possible to actually turn it into sound at that time. But now we had the technology. So I programmed the astronomical data into the computer, told it how to play it, and it just ran."
The resulting sounds aren’t the kind of thing you’d typically put on your turntable after getting home from a hectic day to relax. The sounds are actually kind of agitating. Yet if you listen to the piece as the product of a mathematical and philosophical exercise it can still be enjoyable.
Other sounds that can be heard on the Voyager Golden Records include spoken greetings from Earth-people in fifty-five languages, Johnny B Goode by Chuck Berry, Melancholy Blues by Louis Armstrong, and music from all around the world, from folk to classical. Each record is encased in a protective aluminum jacket, and includes a cartridge and a needle for the aliens. Symbolic instructions, kind of like those for building a piece of furniture from Ikea, show the origin of the spacecraft and indicate how the record is to be played. In addition to the music and sounds, 115 images are encoded in analog form.
Laurie was in Woodstock, New York when she received a phone call requesting the use of her music for the record. “I was sitting with some friends in Woodstock when a telephone call was forwarded to me from someone who claimed to be from NASA, and who wanted to use a piece of my music to contact extraterrestrial life. I said, 'C'mon, if you're for real you better send the request to me through the mail on official NASA letterhead!'”
It turned out to be the real deal and not just a prank on a musician.
In 2012 Voyager I entered interstellar space. And it’s still out there running, sending back information. Laurie says, “It's extremely heartening to think that our species, with all its faults, is capable of that level of technical operation. We're talking Apple II level technology, but nobody's had to go out there and reboot them once!"
AN EXPANDING UNIVERSE
Laurie explored many other ideas within the structure of the highly adaptable GROOVE system, taking naps in the Bell Labs anechoic chamber, when she needed a rest during the frequent all-nighters she pulled to get her work out into the world.
But getting those ideas into a form fit for a golden record, or for the more common earthbound vinyl, was not easy. The results, however, were worth the effort of working with a system that took up space in multiple rooms.
“Down a long hallway from the computer room …was the analog room, Max Mathew’s lab, room 2D-562. That room was connected to the computer room by a group of trunk cables, each about 300 feet long, that carried the digital output of the computer to the analog equipment to control it and returned the analog sounds to the computer room so we could hear what we were doing in real time. The analog room contained 3 reel-to-reel 1/4” two-track tape recorders, a set of analog synthesizer modules including voltage-controllable lab oscillators (each about the size of a freestanding shoe box), and various oscillators and filters and voltage-controllable amplifiers that Max Mathews had built or acquired. There was also an anechoic sound booth, meant for recording, but we often took naps there during all-nighters. Max’s workbench would invariably have projects he was working on on it, a new audio filter, a 4-dimensional joystick, experimental circuits for his latest electric violin project, that kind of stuff.
Because of the distance between the 2 rooms that comprised the GROOVE digital-analog-hybrid system, it was never possible to have hands-on access to any analog synthesis equipment while running the computer and interacting with its input devices. The computer sent data for 14 control voltages down to the analog lab over 14 of the long trunk lines. After running it through 14 digital-to-analog converters (which we each somehow chose to calibrate differently), we would set up a patch in the analog room’s patch bay, then go back to the computer room and the software we wrote would send data down the cables to the analog room to be used in the analog patch. Many many long walks between those two rooms were typically part of the process of developing a new patch that integrated well with the controlling computer software we were writing.
So how was it possible to record a piece with those rooms so far apart? We were able to store the time functions we computed on an incredibly state-of-the-art washing-machine-sized disk drive that could hold up to a whopping 2,400,000 words of computer data, and to store even more data on a 75 ips computer tape drive. When ready to record, we could walk down and disconnect the sampling rate oscillator at the analog lab end, walk back and start the playback of the time functions in the computer room, then go back to the analog lab, get our reel-to-reel deck physically patched in, threaded or rewound, put into record mode and started running. Then we’d reconnect the sampling rate oscillator, which would start the time functions actually playing back from the disk drive in the other room, and then the piece would be recorded onto audio tape.”
Every piece on her album, The Expanding Universe, was recorded at Bell Labs. She computed in real time the envelopes for individual notes, their pitches, and their placement in the stereo field. “Above the level of mere parameters of sound were more abstract variables, probability curves, number sequence generators, ordered arrays, specified period function generators, and other such musical parameters as were not, at the time, available to composers on any other means of making music in real time.”
Computer musicians today who are used to working with programs like Reaktor, Pure Data, Max/MSP, Ableton, SuperCollider and a slew of others take for granted the ability to manipulate sound as it is being made, on the fly, and with a laptop. Back then it was state of the art to be able to do these things, but doing them required huge effort and took up a lot of space.
During the height of the progressive rock music era, making music with computers was also risky business on the level of personal politics. Computers weren’t seen in a positive light. They were the tool of the Establishment, man. Used for calculating the path of nuclear missiles and storing your data in an Orwellian nightmare. Musicians who chose to work with technology were often despised at this time. There was an attitude that you were ceding your creative humanity to a cold dead machine. “Back then we were most commonly accused of attempting to completely dehumanize the arts,” she said. This macho prog rock tenor haunted Laurie, despite her being an accomplished classical guitarist, capable of shredding endless riffs on an electrified axe if she chose to.
She also took risks in her compositions inside the avant-garde circles she frequented. Her music is full of harmony when dissonance was all the rage. “It wasn’t really considered cool to write tonal music,” she said, speaking of the power structures at play in music school. All I know is that it’s a good thing she listened to the music she had inside of her.
Between 1974 and 1979 Laurie got the idea that GROOVE could be used to create video art with just a little tweaking of the system. Unlike the hours of music released on her Expanding Universe album, her video work at Bell didn’t get the documentation it deserved. This was in part due to the system’s early demise: hardware changes at the lab prevented many records and tracings from being left behind.
VAMPIRE however is still worth mentioning. It stands for Video And Music Program for Interactive Realtime Exploration/Experimentation. Laurie was able to turn GROOVE into a VAMPIRE with the help of computer graphics pioneer Ken Knowlton. Ken was also an artist and a researcher in the field of evolutionary algorithms, something else Laurie would later take up and apply to music. In the ’60s Knowlton had created BEFLIX (Bell Flicks), a programming language for bitmap computer-produced movies. After Laurie got to know him they soon started collaborating. It was another avenue for her to pursue her ideas for making musical structures visible.
Laurie had reasoned that if computer logic and languages had made it possible to interact with sound in real time, then the GROOVE system should be powerful enough to handle the real-time manipulation of graphics and imagery. She started testing this theory using a program called RTV (Real Time Video) and a routine given to her by Ken. She wrote a drawing program, similar to what would now be called Paint. It became the basis on which VAMPIRE was built.
With Ken she worked out a routine for a palette of 64 definable bitmap textures. These could be used as brushes, alphabet letters, or other images. Patterns were entered on a box with 10 columns, each column holding 12 buttons, every button representing a bit that could be on or off.
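The mechanics of that button box lend themselves to a quick illustration. The Python sketch below is my own reconstruction of the general idea, not the original VAMPIRE code: each of the 10 columns of 12 on/off buttons packs into a 12-bit value, and the grid of columns forms one bitmap texture. All function names and the rendering format are hypothetical.

```python
# Hypothetical sketch of VAMPIRE-style texture entry: 10 columns of
# 12 on/off buttons, each column packed into a 12-bit integer.

def column_to_bits(buttons):
    """Pack a column of 12 on/off buttons into a 12-bit integer."""
    value = 0
    for i, pressed in enumerate(buttons):
        if pressed:
            value |= 1 << i
    return value

def render(columns):
    """Render the 10x12 grid as rows of '#' (on) and '.' (off)."""
    rows = []
    for row in range(12):
        rows.append("".join("#" if (col >> row) & 1 else "."
                            for col in columns))
    return rows

# A diagonal stripe pattern, entered one column at a time:
grid = [[r == c for r in range(12)] for c in range(10)]
columns = [column_to_bits(col) for col in grid]
```

Packing buttons into bitmasks like this is also exactly what makes punch-card entry plausible: a row of holes on a card is just another way of setting the same bits.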
In addition to weaving strands of sound Laurie was also a hand weaver. Cards with small holes in them have long been used as one approach to the art form. Card weaving is a way to create patterned woven bands, both beautiful and sturdy. Some may think the cards are a simple tool, but they can produce weavings of infinite design and complexity. Hand weaving cards are made out of cardboard or cardstock, with holes in them for the threads, very similar to the Hollerith punch cards used for programming computers. She hit upon the idea that she could create punch cards to enter batches of patterns via the card reader on the computer. After consulting some of her weaving books she made a large deck of cards that she could shuffle and input into the system.
Laurie quickly found that she enjoyed playing the drawing parameters just like someone would play a musical instrument. Instead of changing pitch, duration, or timbre, she could change the size, color and texture of an image as she drew it in real time, with switches and knobs making it appear on the monitor. Her skills as a guitarist directly translated to this ability. One hand would do the drawing, perhaps the same one that did the strumming and plucking of the strings. The other hand would change the parameters of the image using a joystick and the other tools, just as it might change chords on one of her lutes, banjos or mandolins.
She saw the objects on the screen as melodies, but they were just one line of music. She wanted more lines, as counterpoint was her favorite musical form. She wanted to be able to weave multiple strands of images together. So she wrote into the program another realtime device to interact with: a square box of 16 buttons for typical contrapuntal operations as applied to images. This gave her a considerable expansion of options and variables to play with.
After all this work she eventually hit a wall of what she could achieve with VAMPIRE in terms of improvisation. “The capabilities available to me had gotten to be more than I could sensitively and intelligently control in realtime in one pass to anywhere near the limits of what I felt was their aesthetic potential.” It had reached the point where she needed to think of composition.
Ken Knowlton’s work with algorithms was beginning to rub off on her and she started to think of how “powerful evolutionary parameters in sonic composing, and the idea of organic or other visual growth processes algorithmicly described and controlled with realtime interactive input, and of composing temporal structures that could be stored, replayed, edited, added to (‘overdubbed’ or ‘multitracked’), refined, and realized in either audio or video output modalities, based on a single set of processes or composed functions, made an interface of the drawing system with GROOVE's compositional and function-oriented software an almost inevitable and irresistible path to take. It would be possible to compose a single set of functions of time that could be manifest in the human sensory world interchangeably as amplitudes, pitches, stereo sound placements, et cetera, or as image size, location, color, or texture (et cetera), or (conceivably, ultimately) in both sensory modalities at once.”
Ever the night owl Laurie said of her work with the system, “Like any other vampire, this one consistently got most of its nourishment out of me in the middle of the night, especially just before dawn. It did so from 1974 through 1979, at which time its CORE was dismantled, which was the digital equivalent of having a stake driven through its art.”
ECHOES OF THE BELL
The echoes of Laurie’s time spent at Bell Laboratories can be found in the work she has done since then, even as she was devastated by the death of GROOVE and VAMPIRE.
She went on to write the Music Mouse software in 1986 for Macintosh, Amiga and Atari computers and also founded the New York University Computer Music Studio. She has continued to write about music for many journals and publications, and to compose, applying her knowledge of algorithmic composition and information theory to her work.
Now the tools for making computer music can be owned by many people and used in their own home studios, but the echo of the Bell is still heard.
This article only scratches the surface of Laurie's life and work. A whole book could be written about her, and I hope someone will.
The liner notes to the 2012 reissue of Expanding Universe
Read the rest of the Radiophonic Laboratory series.
At Bell Labs Max Mathews was the granddaddy of all its music makers. If you use a computer to make or record music, he is your granddaddy too. In 1957 Max wrote a program for a digital computer called Music I. It was a landmark demonstration of the ability to write code commanding a machine to synthesize music. Computers can do things and play things that humans alone cannot, and Music I opened up a world of new timbral and acoustic possibilities. This was a perfect line of inquiry for the director of Bell Laboratories’ Behavioral and Acoustic Research Center, where Mathews explored a spectrum of ideas and technologies between 1955 and 1987. Fresh out of MIT, where he received an Sc.D. in electrical engineering, Mathews was ready to get to work, and Music I was only the beginning of a long creative push in technology and the arts.
Max’s corner of the sprawling laboratory in Murray Hill, New Jersey carried out research in speech communication, speech synthesis, human learning and memory, programmed instruction, the analysis of subjective opinions, physical acoustics, industrial robotics and music.
Max followed the Music I program with II, III, IV and V, each iteration taking its capabilities further and widening its parameters. These programs carried him through a decade of work and achievement. As noted in the chapter on the Synthesis of Speech, Max had created the musical accompaniment to “Daisy Bell (A Bicycle Built for Two),” later made famous by the fictional computer HAL in Stanley Kubrick’s 2001: A Space Odyssey.
In 1970 he began working with Richard Moore to create the GROOVE system. It was intended to be a “musician-friendly” computer environment. The earlier programs broke incredible new ground, but using them leaned more towards those who could program computers and write code in their esoteric languages than the average musician or composer of the time. GROOVE was the next step in bringing computer music to its potential users. It was a hybrid digital-analog system whose name stood for Generating Realtime Operations On Voltage-controlled Equipment.
Max notes, “Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines–they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
But Chowning hadn’t discovered FM synthesis at the time GROOVE was being created. It was still the ’70s, and affordable computers and synthesizers had yet to make it into homes outside those of the most devoted hobbyists. GROOVE was a first step to making computer music in real time. The setup included an analog synth with a computer and monitor. The computer’s memory made it appealing to musicians, who could store their manipulations of the interface for later recall. It was a clever workaround for the limitations of each technology: the computer was used for its ability to store the musical parameters, while the synth was used to create the timbres and textures without relying on digital programming. This setup allowed creators to play with the system and fine-tune what they wanted it to do for later re-creation.
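That division of labor, with the computer holding editable control data and the analog gear making the sound, can be sketched in a few lines. The Python fragment below is a loose model under my own assumptions, not the GROOVE software itself; the 14 channels echo the 14 control voltages the system sent to its digital-to-analog converters, but the class and method names are invented.

```python
# A minimal model of GROOVE's core idea: the computer stores "time
# functions" (sampled control values, one per channel per tick) that
# can be edited and replayed. In the real system these values drove
# D/A converters and voltage-controlled analog equipment.

NUM_CHANNELS = 14  # GROOVE sent 14 control voltages to the analog lab

class TimeFunctions:
    def __init__(self):
        self.frames = []  # each frame: one control value per channel

    def record(self, frame):
        """Capture one tick of control values from the interface."""
        if len(frame) != NUM_CHANNELS:
            raise ValueError("expected one value per control channel")
        self.frames.append(list(frame))

    def edit(self, tick, channel, value):
        # unlike a live knob twist, stored data can be revised later
        self.frames[tick][channel] = value

    def replay(self):
        """Yield frames in order, as if clocked out to the D/A converters."""
        for frame in self.frames:
            yield list(frame)

funcs = TimeFunctions()
funcs.record([0.0] * NUM_CHANNELS)
funcs.record([0.5] * NUM_CHANNELS)
funcs.edit(0, 3, 0.25)   # tweak channel 3 of the first frame
playback = list(funcs.replay())
```

The point of the sketch is the separation: nothing about timbre lives in the stored data, only control gestures, which is why the same functions could later drive images as easily as oscillators.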
Bell Labs had acquired a Honeywell DDP-224 computer from MIT to use specifically for sound research. This is what GROOVE was built on. The DDP-224 was a 24-bit transistor machine that used magnetic core memory to store data and program instructions. Its disk storage also made it possible to write libraries of programming routines, which allowed users to create customized logic patterns. A composition could be tweaked, adjusted and mixed in real time on the knobs, controls, and keys. In this manner a piece could be reviewed as a whole or in sections and then replayed from the stored data.
When the system was first demonstrated in Stockholm at the 1970 conference on Music and Technology organized by UNESCO, music by Bartok and Bach was played. A few years later Laurie Spiegel would grasp the unique compositional possibilities of the system and take it to the max.
In the meantime Max himself was a guy in demand. IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in France brought him on board as a scientific advisor as they built their own state-of-the-art sound laboratory and studios between 1974 and 1980.
In 1987 Max left his position at Bell Labs to become a Professor of Music (Research) at Stanford University. There he continued to work on musical software and hardware, with a focus on using the technology in a live setting. “Starting with the GROOVE program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the Radio-Baton, plus a program, the Conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Today the Music I software Max wrote through many versions lives on in the software suite Max/MSP. Named in honor of Max Mathews, it is a powerful visual programming language for multimedia performance that has grown out of its musical core. The program has been alive, well and growing for more than thirty years and has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The software is designed and maintained by the company Cycling ’74.
Building off the gains in musical software developed by Mathews, Miller Smith Puckette (MSP) started to work on a program originally called The Patcher at IRCAM in 1985. This first version for Macintosh had a graphical interface that allowed users to create interactive scores. It wasn’t yet powerful enough to do real time synthesis. Instead it used MIDI and similar protocols to send commands to external sound hardware.
Four years later Max/FTS (Faster Than Sound) was developed at IRCAM. This version could be ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT computer system. This time around it could do real time synthesis using an internal hardware digital signal processor (DSP), making it a forerunner to the MSP extensions that would later be added to Max. 1989 was also the year the software was licensed to Opcode, which promptly launched a commercial version at the beginning of the next decade.
Opcode held onto the program until 1997. During those years a talented console jockey named David Zicarelli further extended and developed the promise of Max. Yet Opcode wanted to end its run with the software. Zicarelli knew it had even further potential, so he acquired the rights and started his own company, Cycling ’74. His timing proved fortuitous, as Gibson Guitar ended up buying Opcode and, after owning it for a year, shut it down. Such is the fabulous world of silicon corporate buyouts.
Miller Smith Puckette had in the meantime released the independent and open-source composition tool Pure Data (Pd). It was a fully redesigned tool that still fell within the same tradition as his earlier program for IRCAM. Zicarelli, sensing that a fruitful fusion could be made manifest, released Max/MSP in 1997, the MSP portion being derived from Puckette’s work on Pure Data. The two have been inseparable ever since.
The achievement meant that Max was now capable of real time manipulation of digital audio signals sans dedicated DSP hardware. The reworked version of the program was also something that could work on a home computer or laptop. Now composers could use this powerful tool to work in their home studios. The musical composition software that had begun on extensive and expensive mainframes was now available to those who were willing to pay the entry fee. You didn’t need the cultural connections it took to work at places like Bell Labs or IRCAM. And if you had a computer but couldn’t afford the commercial Max/MSP you could still download Pd for free. The same is true today.
Extension packs were now being written by other companies, contributing to the ecology around Max. In 1999 the Netochka Nezvanova collective released a suite of externals that added extensive real-time video control to Max. This made the program a great resource for multimedia artists. Various other groups and companies continued to tinker and add things on.
It got to the point where Max Mathews himself, well into his golden years, was learning how to use the program named after him. Mathews received many accolades and appointments for his work. He was a member of the IEEE, the Audio Engineering Society, the Acoustical Society of America, the National Academy of Sciences, the National Academy of Engineering and a fellow of the American Academy of Arts and Sciences. He held a Silver Medal in Musical Acoustics from the Acoustical Society of America, and was a Chevalier de l’Ordre des Arts et des Lettres of the République Française.
Mathews died of complications from pneumonia on April 21, 2011, in San Francisco. He was 84. He was survived by his wife, Marjorie, his three sons and six grandchildren.
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University
One of the worst symphony orchestras ever to have existed in the world now gets the respect it is due in a retrospective book published by Soberscove Press, collecting the memories, memorabilia and photographs of its talented members. The World’s Worst: A Guide to the Portsmouth Sinfonia, edited by Christopher M. Reeves and Aaron Walker, though long overdue, has arrived just in time.
For those unfamiliar with the Portsmouth Sinfonia, here is the CliffsNotes version: founded by a group of students at the Portsmouth School of Art in England in 1970, this “scratch” orchestra was generally open to anyone who wanted to play. It ended up drawing art students who liked music but had no musical training or, if they were actual musicians, they had to choose and play an instrument that was entirely new to them. One of the other limits or rules they set up was to only play compositions that would be recognizable even to those who weren’t classical music buffs, the William Tell Overture being one example, Beethoven’s Fifth Symphony and Also Sprach Zarathustra being others. Their job was to play the popular classics, and to do it as amateurs. English composer Gavin Bryars was one of their founding members. The Sinfonia started off as a tongue-in-cheek performance art ensemble but quickly took on a life of its own, becoming a cultural touchstone over the decade of its existence, with concerts, albums, and a hit single on the charts.
The book has arrived just in time because one of the lenses the work of the Portsmouth Sinfonia can be viewed through is that of populism; and now, when the people and politics on this planet have seen a resurgence of populist movements, the music of the Portsmouth Sinfonia can be recalled, reviewed, reassessed and their accomplishments given a wider renown.
One way to think of populism is as the opposite and antithesis of elitism. I have to say I agree with noted essayist John Michael Greer and his frequent tagline that “the opposite of one bad idea is usually another bad idea”. Populism may not be the answer to the world’s struggle against elitism, yet it is a reaction, knee-jerk as it may be. Anyone who hasn’t been blindsided by the bourgeois will know the soi-disant elite have long looked down on those they deem lesser with an upturned nose and sneer. Many of those sneering people have season tickets to their local symphony orchestra. They may not go because they are music lovers, but because it is a signifier of their class and social status. As much as the harmonious chords played under the guidance of the conductor’s swiftly moving baton induce in the listener a state of beatific rapture, there is, on the other hand, the very idea that attending an orchestral concert puts one at the height of snobbery. After all, orchestral music is not for everyone, as ticket prices ensure.
The Portsmouth Sinfonia was a remedy to all that. It put classical music back into the hands, and mouthpieces, of the people. It brought a sense of lightheartedness and irreverence into the stuffy halls that were so often filled with dour, serious people listening in such a serious way to such serious music. The Portsmouth Sinfonia made the symphony fun again, and showed that the canon of the classics shouldn’t just be left to the experts. Musical virtue wasn’t just for virtuosos, but could be celebrated by anyone who was sincere in their love of play.
Still, the Sinfonia was also more than that. It was an incubator for creative musicians and a doorway from which they could launch and explore what composer Alvin Curran has called the “new common practice”, that grab bag of twentieth century compositional tools, tricks, and approaches, from the seriality of Schoenberg to the madcap tomfoolery of Fluxus. This book shows some of these explorations through the voices of the members of the Sinfonia as they recollect their ten-year experiment at playing, and being playful with, the classical hits of the ages.
As Brian Eno noted in the liner notes to Portsmouth Sinfonia Plays the Popular Classics, essential reading that is provided in the book, “many of the more significant contributions to rock music, and to a lesser extent avant-garde music, have been made by enthusiastic amateurs and dabblers. Their strength is that they are able to approach the task of music making without previously acquired solutions and without a too firm concept of what is and what is not musically possible.” Thus they have not been brainwashed, I mean trained, to the strict standards and world view of the classical career musician.
Gavin Bryars, another founding member of the orchestra, speaks to this in an interview with Michael Nyman, also included in the book. He said, “Musical training is geared to seeing your output in the light of music history.” Such training is what can make the job of the classical musician stressful and stifling. Stressful because of the degree of perfection players are required to achieve, and stifling because deviation, creative or otherwise, is disavowed and disallowed. I’m reminded of how Karlheinz Stockhausen, when exploring improvisation and intuitive music, had to work really hard at un-training his classically trained ensemble of musicians in the matter of being freed from the score.
The amateurs in the Portsmouth Sinfonia were free from the weight of musical history. If a wrong note was played, and many were, they could just get on with it, and let it be. This created performances full of humor and happy accidents even as they tried to render the music correctly as notated.
Training and discipline in music can give a kind of perfectionist’s freedom as it relates to playing with total accuracy, but they take that freedom away when it comes to experimenting and exploration. Under the strictures of the conductor’s baton, playing in the symphony seems to be more about taking marching orders from a dictator than playing equally with a group of fellow musicians. John Farley, who took on the role of conductor within the Sinfonia, held his baton lightly. He wasn’t so much telling the other musicians how to play, or even keeping time, but acting out the part of what an audience expects of a conductor, serving as something of a foil for the musicians he was collaborating with in the performance.
One of the essential texts included in this book is “Collaborative Work at Portsmouth,” written by Jeffrey Steele in 1976. His piece shows how the Sinfonia really grew out of social concerns and a search for new ways to work together. Steele’s essay allies itself from the start with the constructivist movement in art, which he had been involved with as a painter. Constructivism was more concerned with the use of art in practical and social contexts. Associated with socialism and the Russian avant-garde, it took a steely-eyed look at the mysticism and spiritual content so often found in painting and music, on the one hand, and the academicism music can degenerate into on the other. The Portsmouth Sinfonia coalesced in a dialectical resolution between these two tendencies. Again, the opposite of one bad idea is usually another. The Sinfonia bypassed these binary oppositions to create a third pole.
A version of Steele’s essay was originally supposed to be included in an issue of Christopher Hobbs’s Experimental Music Catalogue (EMC). A “Portsmouth Anthology” had been planned as an issue of the Catalogue, and a dummy of the publication even made, but that edition of EMC never came out. It has been rescued here in this book. Other rescued bits include a selection of correspondence.
Besides the populist implications, and the permission given to enthusiastic amateurs to take center stage, the book explores the ideas, philosophies and development of the various artists and musicians who made up the Sinfonia. In the recollections section, Ian Southwood, David Saunders, Suzette Worden, Robin Mortimore and the group’s manager and publicist Martin Lewis all reflect on their time as members. Reading these you get the sense that the whole thing was a real community effort, a collaboration where everyone had a role and took initiative in whatever ways they could.
A long essay by Christopher M. Reeves, one of the editors of the book, puts the whole project into historical and critical context. Reeves writes that their “transition from intellectual deconstruction to punchline symphony is a trajectory in art that has little precedent, and points to a more general tendency in the arts throughout the 1970s, in the move from commenting or critiquing dominant culture, to becoming subordinate to it.” His essay goes from the group’s origins as a cross-disciplinary adventure to their eventual appropriation by the mainstream as a kind of novelty music you might hear on an episode of Dr. Demento’s radio show.
Just how seriously was the Sinfonia supposed to be taken?
Reeves puts it thus: “It is within this question that the Sinfonia found a sandbox, muddying up the distinctions between seriousness and goofing off, intellectual exercises and pithy one liners.” The Sinfonia’s last album was titled Classical Muddly. The waters left behind by them are still full of silt and only partially clear. This book does a good job of straining their efforts through a sieve and presenting the reader with the material and textual ephemera the group left behind, all in a beautifully made tome that is itself a showcase of the collaborative spirit found in the Portsmouth Sinfonia.
Robin Mortimore told Melody Maker’s Steve Lake in 1974, “The Sinfonia came about partly as a reaction against Cardew [and his similar Scratch Orchestra]. He had the classical training and his audience was very elitist. But he wasn’t achieving anything. We listened, thought, ‘well, why don’t we have a go, it can’t be all that difficult. Y’know if Benjamin Britten and Sir Adrian Boult can do it, why can’t we?’”
In this time when so many artistic and musical institutions are underfunded, the Portsmouth Sinfonia can serve as a model. By having trained musicians play instruments they did not originally know how to play, and by having untrained musicians pick an instrument and be a part of an ensemble, they showed that with diligence anyone can bring the western canon of classical music to life, and often do it with much more humor and life than can be heard in contemporary concert halls.
Just maybe people are tired of being told how to think and what to do. Or how to play an instrument, and what “good” music should be played on that instrument. The World’s Worst is a reminder of the inspiring example of the Portsmouth Sinfonia, and of what can be accomplished when amateurs and inexperts take to the world’s stage and have fun making a raid on the western classical canon, wrong notes and all.
The World’s Worst: A Guide to the Portsmouth Sinfonia, edited by Christopher M. Reeves and Aaron Walker, is available from Soberscove Press.
Just as the folks inside the Sound-House of the BBC’s Radiophonic Workshop continued to refine their approach and techniques to electronic music, another older sound house back across the pond in America continued to research new “means to convey sounds in trunks and pipes, in strange lines and distances.” Where the BBC Radiophonic Workshop used budget-friendly musique concrète techniques to create their otherworldly incidental music, the pure research conducted at Bell Laboratories was widely diffused, and the electronic music systems that arose out of those investigations were incidental and secondary byproducts. The voder and vocoder were just the first of these byproducts.
Hal Alles was a researcher in digital telephony. The fact that he is remembered as the creator of what some consider the first digital additive synthesizer is a quirk of history. Other additive synthesizers had been made at Bell Labs, but these were software programs written for their supersized computers.
Alles needed to sell his digital designs within and without a company that had been the lords of analog, and his pitch needed to be interesting. The synthesizer he came up with was his way of demonstrating the company’s digital prowess while entertaining his internal and external clients at the same time. The result was called the Bell Labs Digital Synthesizer, sometimes the Alles Machine or ALICE.
It should be noted that Hal bears no relation to the computer in 2001: A Space Odyssey. The engineer recalls those heady days in the late sixties and 1970s: “As a research organization (Bell Labs), we had no product responsibility. As a technology research organization, our research product had a very short shelf life. To have impact, we had to create ‘demonstrations’. We were selling digital design within a company with a 100 year history of analog design. I got pretty good at 30 minute demonstrations of the real time capabilities of the digital hardware I was designing and building. I was typically doing several demonstrations a week to Bell Labs people responsible for product development. I had developed one of the first programmable digital filters that could be dynamically reconfigured to do all of the end telephone office filtering and tone generation. It could also be configured to play digitally synthesized music in real time. I developed a demo of the telephone applications (technically impressive but boring to most people), and ended the demo with synthesized music. The music application was almost universally appreciated, and eventually a lot of people came to just hear the music.”
Max Mathews was one of the people who got to see one of these demos, where the telephonic equipment received a musical treatment. Mathews was the creator of the MUSIC-N series of computer synthesis programming languages. He was excited by what Alles was doing and saw its potential. He encouraged the engineer to develop a digital music instrument.
“The goal was to have recording studio sound quality and mixing/processing capabilities, orchestra versatility, and a multitude of proportional human controls such as position sensitive keyboard, slides, knobs, joysticks, etc,” Mathews said. “It also needed a general purpose computer to configure, control and record everything. The goal included making it self-contained and ‘portable’. I proposed this project to my boss while walking back from lunch. He approved it before we got to our offices.”
Harmonic additive synthesis had already been used back in the 1950s by linguistics researchers who were working on speech synthesis, and Bell Labs was certainly in on the game. Additive synthesis at its most basic works by adding sine waves together to create a timbre. The more common technique until that time had been subtractive synthesis, which uses filters to remove or attenuate frequencies from a harmonically rich source, shaping its timbre by taking away rather than building up.
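The basic idea of additive synthesis can be sketched in a few lines of Python. This is an illustration of the general technique, not ALICE's actual implementation; the harmonic amplitudes below are made up for the example.

```python
import math

def additive_tone(freq, harmonics, sr=8000, dur=0.25):
    """Build a timbre by summing sine-wave partials.

    harmonics maps harmonic number -> amplitude, so {1: 1.0, 2: 0.5}
    means a fundamental plus a half-strength octave partial.
    """
    n = int(sr * dur)
    return [
        sum(a * math.sin(2 * math.pi * freq * h * t / sr)
            for h, a in harmonics.items())
        for t in range(n)
    ]

# A crude organ-like timbre: strong fundamental, decaying upper partials.
tone = additive_tone(220.0, {1: 1.0, 2: 0.5, 3: 0.25, 4: 0.125})
```

Changing the amplitude recipe changes the timbre, which is exactly the flexibility that made a hardware machine with 64 oscillators so appealing.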
Computers were able to do additive synthesis with wavetables that had been pre-computed, but it could also be done by mixing the output of multiple sine wave generators. This is basically what Karlheinz Stockhausen did with Studie II, though he achieved the effect by building up layers of pure sine waves on tape rather than with a pre-configured synth or computer setup.
That method is laborious. A machine that can do it for you goes a long way towards being able to labor at other things while making music.
ALICE was a hybrid machine in that it used a mini-computer to control a complex bank of sound generating oscillators. The mini-computer was an LSI-11 from the Digital Equipment Corporation, a cost-reduced version of their PDP-11, a line that stayed in production for twenty years after its 1970 debut. This controlled the 64 oscillators, whose output was then mixed to create a number of distinct sounds and voices. It had programmable sound generating functions and the ability to accept a number of different input devices.
The unit was outfitted with two 8-inch floppy drives supplied by Heathkit, who made their own version of the LSI-11 and sold it as the H11. AT&T rigged it out with one of their color video monitors. A custom converter sampled the analog inputs and turned them into 7-bit digital values 250 times a second. A number of inputs were used to work with ALICE in real time: two 61-key piano keyboards, 72 sliders alongside various switches, and four analog joysticks, just to make sure the user was having fun. These inputs were interpreted by the computer, which in turn sent them to the sound generators as parameters. The CPU could handle around 1,000 parameter changes per second before it got bogged down.
The sound generators themselves were quite complex. Some 1,400 integrated circuits were used in their design. Of the 64 oscillators, the first bank of 32 were used as master signals, meaning ALICE could achieve 32-note polyphony. The second set was slaved to the masters and generated a series of harmonics. If this wasn’t enough sound to play around with, ALICE was also equipped with 32 programmable filters and 32 amplitude multipliers. With the added bank of 256 envelope generators, ALICE had a lot of sound potential and sound paths that could be explored through her circuitry. All of those sounds could be mixed in many different ways into the 192 accumulators she was also equipped with. Each accumulator was then sent to one of the four 16-bit output channels and reconverted from digital back to analog at the audio output.
Waveforms were generated by looking up the amplitude for a given time in a 64k-word ROM table. Alles programmed a number of tricks into the table to reduce the calculations the CPU needed to run. 255 timers outfitted with 16 FIFO stacks controlled the whole shebang; the user put events into a timestamp-sorted queue that fed them all into the generator.
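The table-lookup technique is still the standard way digital oscillators work. Here is a minimal sketch of the idea in Python: a phase accumulator steps through a precomputed sine table at a rate set by the desired frequency. The table size and sample rate are arbitrary choices for the example, not ALICE's actual figures.

```python
import math

TABLE_SIZE = 1024
# Precompute one cycle of a sine wave, standing in for the ROM table.
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(freq, sr=8000, n_samples=100):
    """Phase-accumulator oscillator reading from the precomputed table."""
    phase = 0.0
    step = freq * TABLE_SIZE / sr  # table entries to advance per sample
    out = []
    for _ in range(n_samples):
        out.append(SINE_TABLE[int(phase) % TABLE_SIZE])
        phase += step
    return out

samples = oscillator(440.0)
```

The payoff is that each sample costs one table read and one addition instead of a trigonometric calculation, which is why a 1970s CPU could keep dozens of these running in real time.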
Though the designers claimed the thing was portable, all the equipment made it weigh in at a hefty 300 pounds, making it an unlikely option for touring musicians. As the world’s first true digital additive synthesizer, it was quite the boat anchor.
Completed in 1976, the machine had only one full-length composition recorded for it, though a number of musicians, including Laurie Spiegel, whose work will be explored later, played the instrument in various capacities. For the most part, though, the Alles Synth was brushed aside; even if the scientists and engineers at Bell Labs were tasked to engage in pure research, they still had business to answer to. A marketing use was found for Hal’s invention once again in 1977.
In that year the Motion Picture Academy was celebrating the 50th anniversary of the talkies. The sound work for The Jazz Singer, the first talking picture, had been done by Western Electric, with their Vitaphone system technology. The successful marriage of moving image and sound first seen and heard in that movie wouldn’t have been possible without the technology developed by the AT&T subsidiary and Ma Bell was still keen to be in on the commemoration of the film. ALICE is what they chose to use as the centerpiece for the event.
A Bell Labs software junkie by the name of Doug Bayer was brought in to improve the operating system of the synth and try to make the human interface a bit more user friendly. The instrument was flown to Hollywood at considerable risk. The machine was finicky enough without transporting it. Taking it on a plane, where one bump could whack out its components and potentially send it into meltdown mode, was not out of the question.
So they hired musician and composer Laurie Spiegel, who’d already been working at the Labs without pay, to be filmed playing ALICE. The footage would be shown in the event that the musician hired to play it live, Roger Powell, was unable to do so due to malfunction. This film is the only known recording of the instrument in performance.
Yet to hear how the Bell Labs Digital Synthesizer sounds, look no further than Don Slepian’s album Sea of Bliss. Max Mathews had hired Slepian to work with the synth as an artist in residence between 1979 and 1982. Don had been born into a scientific family. From an early age he demonstrated technical talent and musical ability. He had begun making music in 1968, programming his own computers, soldering together his own musical circuits, and experimenting with tape techniques. As a member of the Defense Advanced Research Projects Agency (DARPA) Don worked as a tester on an early iteration of the internet, and for a time he lived in Hawaii and played as a synthesizer soloist with the Honolulu Symphony. All of this made him a perfect fit as artist in residence at Bell Labs.
The results of his work are on the album: epic-length cuts of deep ambient music bringing relaxation and joy to the listener. It’s the audio version of taking Valium. Listen to it and feel the stress of life melt away.
Don Slepian described his 1980 masterpiece for the online Ambient Music Guide. “It’s stochastic sequential permutations (the high bell tones), lots of real time algorithmic work, but who cares? It's pretty music: babies have been born to it, people have died to it, some folks have played it for days continuously. No sequels, no formulas. It was handmade computer music."
The Bell Labs Digital Synthesizer was soon to leave its birthplace after Don had done his magic with the machine. In 1981 ALICE was disassembled and donated to the TIMARA Laboratories at the Oberlin Conservatory of Music.
Oberlin, and by extension TIMARA (Technology in Music and Related Arts) has a history that reaches back to the very beginning of electronic music, in the mid-19th century. None other than Elisha Gray was an adjunct physics professor at the college. He is considered by some as the father of the synthesizer due to his invention of the musical telegraph and his seventy plus patents for inventions that were critical in the development of telecommunications, electronic music and other fields. If it had not been for Gray’s electromechanical oscillator, Thaddeus Cahill would never have been able to create that power hungry beast of an instrument, the Telharmonium.
The Music Conservatory at Oberlin dates back to 1865 and they joined the ranks of those radio and television stations who built electronic music studios with the opening of TIMARA in 1967. The department was founded by Olly Wilson as a response to the demand for classes in electronics from composition students. It became the first of a number of departments in the American higher education scene to create a space for experimentation in analog synthesis and mixed media arts.
Though ALICE is now enshrined in one of the many sound laboratories at TIMARA, her influence continued to be felt long after she was sequestered there. A number of commercial synthesizers based on the Alles design were produced in the 1980s.
The Atari AMY sound chip is a case in point, and was the smallest of the products to be designed. Its name stood for Additive Music sYnthesis. It still had 64 oscillators, but they were reduced to a single-IC sound chip, one that had numerous design issues. Additive synthesis could now be done with less, though it never really got into the hands of users. It was scheduled to be used on a new generation of 16-bit Atari computers, for the next line of game consoles, and by their arcade division. AMY never saw the light of day in any configuration. Even after Atari was sold in 1984, she remained waiting in the dark to get used on a project, but was cut from new products after many rounds at the committee table, where so many dreams wind up dead.
Still other folks in the electronic music industry made use of the principles first demonstrated by ALICE. The Italian company Crumar and Music Technology of New York entered a partnership to create Digital Keyboards. Like Atari, they wanted to shrink the Alles Machine down. They came up with a two-part invention using a Z-80 microcomputer and a single keyboard with limited controls, gave it the unimaginative name Crumar General Development System, and sold it in 1980 for $30,000 buckaroos. Since it was out of the price range of your average musician, they marketed the product to music studios. Wendy Carlos got her hands on one, and the results can be heard on the soundtrack to Tron.
Other companies got into the game and tried to produce something similar at lower cost, but none of these really managed to find a home in the market due to the attached price tag. When Yamaha released the DX7 in 1983 for $2,000, the demand for additive synths tanked. The DX7 implemented FM synthesis, which enabled it to achieve many of the same effects as ALICE with as few as two oscillators. FM synthesis and its relationship to FM radio modulation will be looked at in detail in another article.
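The economy of FM is easy to see in code. Where additive synthesis needs one oscillator per partial, a two-operator FM patch generates a whole spectrum of sidebands by letting one oscillator modulate the phase of another. This sketch uses the classic Chowning formulation; the specific frequencies and modulation index here are invented for the example, not a DX7 patch.

```python
import math

def fm_tone(fc, fm, index, sr=8000, dur=0.1):
    """Two-operator FM: a modulator oscillator varies the carrier's phase.

    fc    -- carrier frequency in Hz
    fm    -- modulator frequency in Hz
    index -- modulation index; higher values spread energy into sidebands
    """
    n = int(sr * dur)
    return [
        math.sin(2 * math.pi * fc * t / sr
                 + index * math.sin(2 * math.pi * fm * t / sr))
        for t in range(n)
    ]

# An inharmonic carrier/modulator ratio gives a bell-like timbre.
bell = fm_tone(fc=200.0, fm=280.0, index=3.0)
```

Two oscillators and one multiply per sample stand in for what would take dozens of additive partials, which is exactly why the DX7 could undercut ALICE's descendants on price.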
It had all started out as a way for Hal Alles to look at potential problems in digital communications, such as switching, distortion, and echo. It ended up becoming a tool for extending human creativity.
Read the other articles in the Radiophonic Laboratory series.
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.