Stockhausen picked up his interest in information theory by way of Werner Meyer-Eppler during his time as a student at the University of Bonn. Meyer-Eppler was something of a scientific polymath, having studied mathematics, chemistry, and physics at the University of Cologne in the late 1930s before going to Bonn, where he became a scientific assistant in the physics department and then a lecturer in experimental physics. After WWII ended his attention turned with laser focus to phonetics and speech synthesis. In 1947 Paul Menzerath brought him onto the faculty of the Phonetic Institute of the University of Bonn. It was during this period that Meyer-Eppler started publishing essays on the Voder, the Vocoder, and the Visible Speech machine. One of his contributions to the field that is still in use today was his work on the development of the electrolarynx.
Information theory made contributions to many fields. Linguistics was one of them, where it was influential in studying how frequently words were used, word length, and the speed at which words could be read. Shannon had tested the information-theoretic principle of redundancy, the amount of wasted space used to transmit a message, by having his wife predict the letters in a crime novel he pulled at random off his bookshelf. Sometimes redundancy is better when it comes to getting a message across: redundancy is added while communicating over noisy channels as a method of error correction.
Shannon had the insight that this was a baked-in purpose behind the repetition of letters in English. He also showed that he could use stochastic processes to build something that resembled the English language from scratch.
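Shannon's English-from-scratch experiment can be sketched in a few lines. The code below is a hypothetical illustration rather than Shannon's exact procedure: a character-level Markov chain that records which characters follow each n-gram in a source text, then random-walks through those statistics to emit new, English-flavored gibberish.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Record which characters follow each n-gram in the source text."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, length=80, seed=None):
    """Random-walk the statistics: each next character is drawn from the
    characters actually observed after the current n-gram, so frequent
    continuations are chosen proportionally more often."""
    order = len(next(iter(model)))
    out = seed or random.choice(list(model.keys()))
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:  # dead end: this n-gram occurs only at the text's end
            break
        out += random.choice(choices)
    return out

sample = "the quick brown fox jumps over the lazy dog " * 5
print(generate(build_model(sample), 40))
```

Raise `order` and the output drifts from noise toward recognizable words, exactly the progression Shannon demonstrated with his n-gram approximations of English.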
Meyer-Eppler had been following these developments in information theory with special attention to their applications in linguistics and speech. Later in the 1950s he became concerned with how statistics and probability, core tools of information theory, might be applied to creating electronic music, as explored in his book Statistic and Psychologic Problems of Sound. In this work Meyer-Eppler introduced the word “aleatoric” into the musical lexicon. According to his definition, “a process is said to be aleatoric ... if its course is determined in general but depends on chance in detail”.
Aleatoric music is made when some element of the composition is left to chance or when a significant portion of how the composition is realized is left up to the performer or performers. Aleatoric composition has a precedent in the dice games of the 18th century. The word itself comes from the Latin alea, meaning dice.
There are many methods for applying aleatoric processes to music. One of the ways Stockhausen tackled it was by using a polyvalent structure, writing a piece that was open to a number of different interpretations. Klavierstück XI, written for piano, is an example of such a piece.
The piece is made up of 19 fragments printed on a very large piece of paper; there is no turning of the sheet music. The pianist may start with any fragment they wish and from there continue on to any other fragment they wish to play. It is polyvalent because each performance could begin and end in new places. There is no set musical narrative; it is more like reading a choose-your-own-adventure book, or wandering through a maze or labyrinth which the pianist enters, circumnavigates, and then exits. Each time the pianist may enter the labyrinth from a new entrance and, likewise, reemerge in a different place.
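A toy simulation gives a feel for how many shapes one polyvalent score contains. Stockhausen's actual performance instructions are richer than this (tempo, dynamics, and attack for the next fragment are read at the end of each one), but a commonly cited rule of the piece is that the performance ends once any fragment has been reached a third time. A hedged sketch:

```python
import random

def klavierstueck_xi(n_fragments=19, rng=None):
    """One simulated traversal of the score: the pianist's glance lands on
    fragments at random, and reaching any fragment for the third time
    ends the performance."""
    rng = rng or random.Random()
    counts = [0] * n_fragments
    path = []
    while True:
        frag = rng.randrange(n_fragments) + 1  # fragments numbered 1-19
        counts[frag - 1] += 1
        path.append(frag)
        if counts[frag - 1] == 3:
            return path

print(klavierstueck_xi(rng=random.Random(0)))
```

Every run yields a different path through the labyrinth, with a different entrance, a different length, and a different exit.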
The pianist shares responsibility with the composer for the eventual shape of any given performance. The possible permutations are vast, yet even in different interpretations it may be heard as the same piece of music, its essential characteristics remaining the same no matter the order in which the fragments are played.
Commenting on the piece the composer said, "Piano Piece XI is nothing but a sound in which certain partials, components, are behaving statistically... If I make a whole piece similar to the ways in which (a complex noise) is organized, then naturally the individual components of this piece could also be exchanged, permutated, without changing its basic quality."
Considered as a whole, Piece XI will sound the same even though every time it is played it will sound different. It is a system unto itself, and as a system, even when the order of its component parts is rearranged, it is still the same system and will still sound like itself. Listened to statistically, the musical values remain the same.
Stockhausen would go on to use polyvalent form again and again. In his percussion piece Zyklus (Cycle) from 1959 the score is printed as a spiral and the performer may start anywhere within the spiral they choose. Furthermore, they may play the piece from left to right or right to left. The piece is finished when the player reaches the original starting point. In the performance space the cycle is mirrored visually, with the percussion instruments laid out in a circle and the performer moving around them in the manner determined by the chosen starting point.
Zyklus also shows the amazing diversity of possible interpretations demonstrated before in Piece XI. The interpretation of the score is, however, a bit more closed. On one side of the spiral the music becomes increasingly aleatoric, giving more freedom to the player in how it is interpreted. On the other side the composition is exactly fixed and predetermined. Played one way it moves from fixed to open, and in the other direction from open to fixed.
Stockhausen was obsessed with cycles, specifically cycles of time. His mid-seventies composition Tierkreis (Zodiac) consists of twelve melodies, one for each of the twelve zodiac signs. Originally written for custom-made music boxes, Tierkreis can be played on any melody instrument and performed in a number of different ways. In one realization, a complete performance begins with the melody for the zodiac sign of the day on which the performance is held. For instance, if the performance were held on August 22 the performers would begin with the Leo melody and proceed through Virgo, Libra, and the rest until they return to the starting melody of Leo. Each melody is played at least three times and may be improvised upon, which gives considerable variation to individual performances. Further variations are specified by the composer.
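The date-to-melody rule described above is easy to make concrete. The sketch below is a hypothetical illustration, not anything from Stockhausen's score: it uses conventional tropical-zodiac boundary dates (sources differ by a day or so) to pick the opening melody and then cycles through all twelve, returning to the start.

```python
from datetime import date

# Inclusive end date of each sign, in calendar order. Boundaries are the
# commonly published ones and vary slightly between sources.
BOUNDS = [
    ((1, 19), "Capricorn"), ((2, 18), "Aquarius"), ((3, 20), "Pisces"),
    ((4, 19), "Aries"), ((5, 20), "Taurus"), ((6, 20), "Gemini"),
    ((7, 22), "Cancer"), ((8, 22), "Leo"), ((9, 22), "Virgo"),
    ((10, 22), "Libra"), ((11, 21), "Scorpio"), ((12, 21), "Sagittarius"),
    ((12, 31), "Capricorn"),
]

ORDER = ["Capricorn", "Aquarius", "Pisces", "Aries", "Taurus", "Gemini",
         "Cancer", "Leo", "Virgo", "Libra", "Scorpio", "Sagittarius"]

def sign_for(d):
    """Zodiac sign for a calendar date."""
    for (m, day), name in BOUNDS:
        if (d.month, d.day) <= (m, day):
            return name

def tierkreis_order(d):
    """Melody order for a performance on date d: begin with the day's
    sign, cycle through all twelve, and return to the start."""
    i = ORDER.index(sign_for(d))
    return ORDER[i:] + ORDER[:i] + [ORDER[i]]

print(tierkreis_order(date(2024, 8, 22))[:4])
```

An August 22 performance opens with Leo and proceeds through Virgo and Libra, just as described above.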
In his chamber opera Sirius written a few years later the Tierkreis melodies are employed again in a section of the piece called The Wheel. Here the music may be heard in four different ways, depending on the season it is performed. If played in the Winter the section starts with the melody for Capricorn, if in the Spring with Aries, Summer starts with Cancer, and Autumn with Libra.
In all of these cyclical works an echo of the tape loop may be heard. Stockhausen had worked with tape loops extensively in his piece Kontakte, using them to show relationships between pitch, timbre, and the way musical events can be perceived in time and space through the process of speeding them up or slowing them down. I wonder whether, besides the strong grounding Stockhausen had in religion, philosophy, and science, the eternal return and recurrence of the tape loop at all framed his cosmic conception of the vast cycles of time.
The cycles continued in his magnum opus LICHT (Light): Die sieben Tage der Woche (The Seven Days of the Week). Written between 1977 and 2003 it is a cycle of seven operas, one for each of the seven days of the week. Stockhausen described the work as an “eternal spiral”, considering there to be “neither end nor beginning to the week.” Clocking in at a total duration of 29 hours, deft intricacies exist within the piece on a micro and macro scale and many volumes have already been and will continue to be written about it. Within the broad palette afforded by an opera cycle longer than Wagner’s Ring, Stockhausen was able to play the role of a Magister Ludi, or master of the Glass Bead Game. LICHT is a system, and within that system Stockhausen playfully and masterfully displayed with pyrotechnic virtuosity a comprehensive knowledge of combinatorial and permutative arts as applied to music.
These arts of combination were a central component of the Glass Bead Game as played in the novel.
To show how all of these interlocking parts fit together the basic structure of the opera must be examined. And to understand LICHT as a system a slight change of lanes onto the parallel track of Norbert Wiener and his theory of cybernetics is in order.
Read the rest of the Radiophonic Laboratory Series.
From the ice cold farms and fields of Michigan to the halls of MIT and then onwards to Bell Labs at Murray Hill, Claude Shannon was a mathematical maverick and inveterate tinkerer. In the 1920s, in those places where the phone company had not deigned to bring their network, around three million farmers built their own by connecting telegraph keys to the barbed wire fences that stretched between properties. As a young boy Shannon rigged up one of these “farm networks” so he and a friend who lived half a mile away could talk to each other at night in Morse code. He was also the local kid people in town would bring their radios to when they needed repair, and he got them working again. He had the knack.
He also had an aptitude for the more abstract side of math, and his mind could handle complex equations with ease. At the age of seventeen he was already in college at the University of Michigan and had published his first work in an academic journal, a solution to a math problem presented in the pages of American Mathematical Monthly. He did a double major and graduated with degrees in electrical engineering and mathematics, then headed off to MIT for his master’s.
While there he came under the wing of Vannevar Bush. Bush had followed in the footsteps of Lord Kelvin, who had created one of the world’s first analog computers, the harmonic analyzer, used to measure the ebb and flow of the tides. Bush’s differential analyzer was a huge electromechanical computer the size of a room. It solved differential equations by integration, using wheel-and-disc mechanisms to perform the integration.
At school he was also introduced to the work of mathematician George Boole, whose 1854 book on algebraic logic The Laws of Thought laid down some of the essential foundations for the creation of computers. George Boole had in turn taken up the system of logic developed by Gottfried Wilhelm Leibniz. Might Boole have also been familiar with Leibniz’s book De Arte Combinatoria? In this book Leibniz proposed an alphabet of human thought, and was himself inspired by the Ars Magna of Ramon Lull. Leibniz wanted to take the Ars Magna, or “ultimate general art” developed by Lull as a debating tool that helped speakers combine ideas through a compilation of lists, and bring it closer to mathematics and turn it into a kind of calculus. Shannon became the inheritor of these strands of thought, through their development in the mathematics and formal logic that became Boolean algebra.
Between working with Bush’s differential analyzer and his study of Boolean algebra, Shannon was able to design switching circuits. This became the subject of his 1937 master’s thesis, A Symbolic Analysis of Relay and Switching Circuits.
Shannon was able to prove his switching circuits could be used to simplify the complex and baroque system of electromechanical relays used in AT&T’s routing switches. He then expanded his concept and showed that his circuits could solve any Boolean algebra problem. He finalized the work with a series of circuit diagrams.
In writing his paper Shannon took George Boole’s algebraic insights and made them practical. Electrical switches could now implement logic. It was a watershed moment that established the integral concept behind all electronic digital computers. Digital circuit design was born.
Next he had to get his PhD. It took him three more years, and his subject matter showed the first signs of the multidisciplinary inclination that would later become a dominant feature of information theory. Vannevar Bush compelled him to go to Cold Spring Harbor Laboratory to work on his dissertation in the field of genetics. For Bush the logic was that if Shannon’s algebra could work on electrical relays it might also prove to be of value in the study of Mendelian heredity. His research in this area resulted in the work An Algebra for Theoretical Genetics, for which he received his PhD in 1940.
The work proved to be too abstract to be useful and during his time at Cold Spring Harbor he was often distracted. In a letter to his mentor he wrote, “I’ve been working on three different ideas simultaneously, and strangely enough it seems a more productive method than sticking to one problem… Off and on I have been working on an analysis of some of the fundamental properties of general systems for the transmission of intelligence, including telephony, radio, television, telegraphy, etc…”
With a doctorate under his belt Shannon went on to the Institute for Advanced Study in Princeton, New Jersey, where his mind was able to wander across disciplines and where he rubbed elbows with other great minds, including, on occasion, Albert Einstein and Kurt Gödel. He discussed science, math, and engineering with Hermann Weyl and John von Neumann. All of these encounters fed his mind.
It wasn’t long before Shannon went elsewhere in New Jersey, to Bell Labs. There he got to rub elbows with other great minds such as Thornton Fry and Alan Turing. His prodigious talents were also being put to work for the war effort.
It started with a study of noise. During WWII Shannon had worked on the SIGSALY system that was used for encrypting voice conversations between Franklin D. Roosevelt and Winston Churchill. It worked by sampling the voice signal fifty times a second, digitizing it, and then masking it with a random key that sounded like the circuit noise so familiar to electrical engineers.
Shannon hadn’t designed the system, but he had been tasked with trying to break it, like a hacker, to see what its weak spots were, to find out if it was an impenetrable fortress that could withstand the attempts of an enemy assault.
Alan Turing was also working at Bell Labs on SIGSALY. The British had sent him over to also make sure the system was secure. If Churchill was to be communicating on it, it needed to be uncrackable. During the war effort Turing got to know Claude. The two weren’t allowed to talk about their top secret projects, cryptography, or anything related to their efforts against the Axis powers but they had plenty of other stuff to talk about, and they explored their shared passions, namely, math and the idea that machines might one day be able to learn and think.
Are all numbers computable? This was a question Turing asked in his famous 1936 paper On Computable Numbers, which he had shown to Shannon. In it Turing defined calculation as a mechanical procedure, or algorithm.
This paper got the pistons in Shannon’s mind firing. Alan had said, “It is always possible to use sequences of symbols in the place of single symbols.” Shannon was already thinking of the way information gets transmitted from one place to the next. Turing used statistical analysis as part of his arsenal when breaking the Enigma ciphers. Information theory in turn ended up being based on statistics and probability theory.
The meeting of these two preeminent minds was just one catalyst for the creation of the large field and sandbox of information theory. Important legwork had already been done by other investigators who had made brief excursions into the territory later mapped out by Shannon.
Telecommunications in general already contained many ideas that would later become part of the theory’s core. Starting with telegraphy and Morse code in the 1830s, common letters were expressed with the shortest codes, as in E, one dot. Letters not used as often were given longer expressions, such as B, a dash and three dots. The whole idea of lossless data compression is embedded as a seed pattern within this system of encoding information.
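The modern descendant of Morse's intuition is Huffman coding, which provably assigns the shortest codes to the most frequent symbols. A minimal sketch, using rough illustrative letter frequencies rather than any authoritative table:

```python
import heapq

def huffman_lengths(freqs):
    """Build a Huffman code and return each symbol's code length in bits:
    frequent symbols get short codes, rare symbols long ones, the same
    intuition behind Morse's short E and long B."""
    # Heap entries: (total frequency, tiebreaker, symbols under this node)
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        for s in a + b:  # every symbol under the merged node gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (fa + fb, tiebreak, a + b))
        tiebreak += 1
    return lengths

# Rough English-like frequencies: E is common, B is rare
print(huffman_lengths({"E": 12.7, "T": 9.1, "A": 8.2, "O": 7.5, "B": 1.5}))
```

Run it and E receives a shorter code than B, recovering in two centuries of hindsight what Morse and Vail built into their alphabet by counting the type in a printer's case.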
In 1924 Harry Nyquist published Certain Factors Affecting Telegraph Speed in the Bell System Technical Journal. Nyquist’s research was focused on increasing the speed of a telegraph circuit. One of the first things an engineer runs into when working on this problem is how to transmit the maximum amount of intelligence on a given range of frequencies without causing interference in the circuit or in others it might be connected to. In other words, how do you increase the speed and amount of intelligence without adding distortion, noise, or spurious signals?
In 1928 Ralph Hartley, also at Bell Labs, wrote his paper Transmission of Information. He made it explicit that information was a measurable quantity: information could only reflect the ability of the receiver to distinguish that one sequence of symbols had been intended by the sender rather than any other, that the letter A means A and not E.
Jump forward another decade to the invention of the vocoder. It was designed to use less bandwidth, compressing the voice of the speaker into less space. Now that same technology is used in cellphones as codecs that compress the voice so more lines of communication can fit into the phone companies’ allocated frequencies.
WWII had a way of producing scientific side effects, discoveries that would break on through to affect civilian life after the war. While Shannon worked on SIGSALY and other cryptographic work he continued to tinker on other projects. His developing theory of communication was one of those side projects, and it had profound effects. Twenty years after Hartley addressed the way information is transmitted, Shannon stated it this way: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
In addition to the idea of clear communication across a channel, information theory also brought the following ideas into play:
- The Bit, or binary digit. One bit is the information entropy of a binary random variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known.
- The Shannon Limit: a formula for channel capacity. This is the speed limit for a given communication channel.
- Within that limit there must always be techniques for error correction that can overcome the noise level on a given channel. A transmitter may have to send more bits to a receiver at a slower rate, but eventually the message will get there.
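The Shannon limit itself is a one-line formula, C = B log2(1 + S/N), where B is the channel bandwidth in hertz and S/N is the signal-to-noise power ratio. A quick sketch:

```python
import math

def shannon_capacity(bandwidth_hz, signal_power, noise_power):
    """Shannon-Hartley limit: the maximum error-free bit rate of a noisy
    channel, C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# Toy example: a 3 kHz voice-grade line with a 30 dB signal-to-noise
# ratio (a power ratio of 1000).
snr = 10 ** (30 / 10)
print(round(shannon_capacity(3000, snr, 1)))  # about 29,902 bits per second
```

No coding scheme, however clever, can push data through that channel faster without errors; error-correcting codes are what let real systems approach the limit from below.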
His theory was a strange attractor in a chaotic system of noisy information. Noise itself tends to bring diverse disciplinary approaches together, interfering in their constitution and their dynamics. Information theory, in transmitting its own intelligence, has in its own way, interfered with other circuits of knowledge it has come in contact with.
A few years later psychologist and computer scientist J.C. R. Licklider said, “It is probably dangerous to use this theory of information in fields for which it was not designed, but I think the danger will not keep people from using it.”
Information theory encompasses every other field it can get its hands on. It’s like a black hole, and everything in its gravitational path gets sucked in. Formed at the spoked crossroads of cryptography, mathematics, statistics, computer science, thermal physics, neurobiology, information engineering, and electrical engineering it has been applied to even more fields of study and practice: statistical inference, natural language processing, the evolution and function of molecular codes (bioinformatics), model selection in statistics, quantum computing, linguistics, plagiarism detection. It is the source code behind pattern recognition and anomaly detection, two human skills in great demand in the 21st century.
I wonder if Shannon knew when he wrote ‘A Mathematical Theory of Communication’ for the 1948 issue of the Bell System Technical Journal that his theory would go on to unify, fragment, and spin off into multiple disciplines and fields of human endeavor, music just one among a plethora.
Yet music is a form of information. It is always in formation. And information can be sonified and used to make music. Raw data becomes audio dada. Music is communication, and one way of listening to it is as a transmission of information. The principles Shannon elucidated are a form of noise in the systems of world knowledge, and highlight one way of connecting different fields of study together. As information theory exploded it was quickly picked up as a tool by the more adventurous music composers.
Information theory could be at the heart of making the fictional Glass Bead Game of Hermann Hesse a reality. Hesse dropped several hints and clues in his work connecting it with the same thinkers whose work served as a link to Boolean algebra, namely Athanasius Kircher, Lull, and Leibniz, who were all practitioners and advocates of the mnemonic and combinatorial arts. Like its predecessors, information theory is well suited to connecting the spaces between different fields. In Hesse’s masterpiece the game was created by a musician as a way of “represent[ing] with beads musical quotations or invented themes, could alter, transpose, and develop them, change them and set them in counterpoint to one another.” After some time passed the game was taken up by mathematicians. “…the Game was so far developed it was capable of expressing mathematical processes by special symbols and abbreviations. The players, mutually elaborating these processes, threw these abstract formulas at one another, displaying the sequences and possibilities of their science.”
Hesse goes on to explain, “At various times the Game was taken up and imitated by nearly all the scientific and scholarly disciplines, that is, adapted to the special fields. There is documented evidence for its application to the fields of classical philology and logic. The analytical study had led to the reduction of musical events to physical and mathematical formulas. Soon after philology borrowed this method and began to measure linguistic configurations as physics measured processes in nature. The visual arts soon followed suit, architecture having already led the way in establishing the links between visual art and mathematics. Thereafter more and more new relations, analogies, and correspondences were discovered among the abstract formulas obtained this way.”
In the next sections I will explore the way information theory was used and applied in the music of Karlheinz Stockhausen.
A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy Soni and Rob Goodman, Simon & Schuster, 2018
The Information: A History, a Theory, a Flood by James Gleick, Pantheon, 2011
The Glass Bead Game by Hermann Hesse, translated by Clara and Richard Winston, Holt, Rinehart and Winston, 1990
Information Theory and Music by Joel Cohen, Behavioral Science, 7:2
Information Theory and the Digital Age by Aftab, Cheung, Kim, Thakkar, Yeddanapudi
Logic and the art of memory: the quest for a universal language, by Paolo Rossi, The Athlone Press, University of Chicago, 2000.
“There is more in man and in music than in mathematics, but music includes all that is in mathematics.”—Peter Hoffman
Infotainment is usually thought of as light entertainment peppered with superficial “facts” and forgettable news. Yet another kind of infotainment exists, a musical kind that is based on mathematical algorithms. It is true entertainment that is filled with true information and though it is mathematically modeled none of it is fake.
In the twentieth century interest in the multidisciplinary fields of Information Theory and Cybernetics led to dizzying bursts of creativity when their ideas were applied to making new music. These disciplines applied rigorous math to the study of communication systems and how a signal transmitted from one person can cut through the noise of other spurious signals to be received by another person. They also made explicit the role of feedback inside of a system, how signals can amplify themselves and trigger new signals. All of this was studied with complex equations and formulas.
Yet there is nothing new about the relationship between music and math.
Algorithmic music has been made for centuries. It can be traced all the way back to Pythagoras, who thought of music and math as inseparable. If music can be formalized in terms of numbers, music can also be formalized as information or data. The “data” the ancients used to drive their compositions was the movement of the stars. Ptolemy is known to us mostly for his geocentric view of the cosmos and the ordered spheres the celestial bodies traveled on. Besides being an astronomer Ptolemy was also a systematic musical theorist. He believed that math was the basis for musical intervals, and he saw those same intervals at play in the spacing of heavenly bodies, each planet and body corresponding to certain modes and notes.
Ptolemy was just one of many who believed in the reality of the music of the spheres. Out of these ancient Greek investigations into the nature of music and the cosmos came the first musical systems. The musician who used them was thus a mediator between the cosmic forces of the heavens above and the life of humanity here below.
Western music went through myriad changes across the intervening centuries after Ptolemy. World powers rose and fell, new religions came into being. Out of the mystical monophonic plainchant uttered by Christian monks in candlelit monasteries polyphony arose, and it called for new rules and laws to govern how the multiple voices were to sing together. This was called “canonic” composition. A composer in this era (the 15th century) would write a line for a single voice. The canonic rule gave the additional singers and voices the necessary instructions. For instance, one rule would be for a second voice to start singing the melody begun by the first voice after a set amount of time. Other rules would denote inversions, retrograde movement, or other practices as applied to the music.
From this basis the rules, voices, and number of instruments were enlarged through the Renaissance until the era of “Common Practice”, roughly 1650 to 1900. This period encompassed baroque music and the classical, romantic, and impressionist movements. The 20th and 21st centuries are now giving birth to what Alvin Curran has called the New Common Practice.
In the Common Practice era tonal harmony and counterpoint reigned supreme, and a suite of rhythmic and durational patterns gave form to the music. These were the “algorithmic” sandboxes composers could play in.
The New Common Practice, according to Curran encompasses, “the direct unmediated embracing of sound, all and any sound, as well as the connecting links between sounds, regardless of their origins, histories or specific meanings; by extension, it is the self guided compositional structuring of any number of sound objects of whatever kind sequentially and/or simultaneously in time and in space with any available means.” I’ve begun to think of this New Common Practice as embracing the entire gamut of 20th and 21st century musical practices: serialism, atonality, musique concrete, electronics, solo and collective improvisation, text pieces, and the rest of it.
One vital facet of the New Common Practice is chance operations, the use of randomizing procedures to create compositions. Chance operations have a direct relation to information theory, but this approach could already be seen making cultural inroads in the 18th century, when games of chance had a brief period of popularity among composers and the musically and mathematically literate. These are a direct precursor to the deeper algorithmic musical investigations that flourished in the 20th century.
Much of this original algorithmic music work was done the old school way, with pencil, sheets of paper, and tables of numbers. This was the way composers plotted voice-leading in Western counterpoint. Chance operations have also been used as one way of making algorithmic music, such as the Musikalisches Würfelspiel or musical dice game, a system that used dice to randomly generate music from tables of pre-composed options. These games were quite popular throughout Western Europe in the 18th century and a number of different versions were devised. Some didn’t use dice but just worked on the basis of choosing random numbers.
In his paper on the subject Stephen Hedges wrote how the middle class in Western Europe were at the time enamored with mathematics, a pursuit as much at home in the parlors of the people as in the classroom of professors. "In this atmosphere of investigation and cataloguing, a systematic device that would seem to make it possible for anyone to write music was practically guaranteed popularity.”
The earliest known example was created by Johann Philipp Kirnberger with his "The Ever-Ready Minuet and Polonaise Composer" in 1757. C. P. E. Bach came out with his musical dice game, "A method for making six bars of double counterpoint at the octave without knowing the rules", the following year in 1758. In 1780 Maximilian Stadler published "A table for composing minuets and trios to infinity, by playing with two dice". Mozart was even thought to have gotten in on the dice game in 1792, when an unattributed version made an appearance from his music publisher a year after the composer’s death. This has not been authenticated to be by the maestro’s hand, but as with all games of possibility, there is a chance.
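The mechanics of these games are simple enough to sketch. In the version attributed to Mozart, two dice are thrown for each bar and the sum (2 through 12) selects one of eleven pre-composed measures from that bar's lookup table. The code below uses placeholder measure numbers rather than real music:

```python
import random

def wuerfelspiel(n_bars=16, rng=None):
    """Sketch of a Musikalisches Wuerfelspiel: for each bar, roll two dice;
    the sum (2-12) indexes one of eleven pre-composed measures in that
    bar's table. Here the 'measure' is just the roll itself; the real
    games keyed each roll to a numbered measure of actual music."""
    rng = rng or random.Random()
    piece = []
    for bar in range(n_bars):
        roll = rng.randint(1, 6) + rng.randint(1, 6)
        piece.append((bar + 1, roll))  # (bar number, chosen measure 2-12)
    return piece

print(wuerfelspiel(rng=random.Random(1)))
```

With eleven choices per bar, a sixteen-bar minuet table allows 11^16, roughly 4.6 × 10^16, distinct pieces, which is why the games could promise minuets "to infinity."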
These games may have been one of the many inspirations behind The Glass Bead Game by Hermann Hesse. This novel was one of the primary literary inspirations and touchstones for the young Karlheinz Stockhausen. The Glass Bead Game portrays a far-future culture devoted to a mystical understanding of music, centered on Castalia, that fictional province or state devoted to the pursuit of pure knowledge.
As Robin Maconie put it the Glass Bead Game itself appears to be “an elusive amalgam of plainchant, rosary, abacus, staff notation, medieval disputation, astronomy, chess, and a vague premonition of computer machine code… In terms suggesting more than a passing acquaintance with Alan Turing’s 1936 paper ‘On Computable Numbers’, the author described a game played in England and Germany, invented at the Musical Academy of Cologne, representing the quintessence of intellectuality and art, and also known as ‘Magic Theater’.”
Hesse wrote his book between 1931 and 1943. The interdisciplinary game at the heart of the book prefigures Claude Shannon’s explosive Information Theory which was established in his 1948 paper A Mathematical Theory of Communication. His paper in turn bears a debt to Alan Turing, whom Shannon met in 1942. Norbert Wiener also published his work on Cybernetics the same year as Shannon. All of these ideas were bubbling up together out of the minds of the leading intellectuals of the day. Ideas about computable numbers, the transmission of information, communication, and thinking in systems, all of which would give artists practical tools for connecting one field to another as Hesse showed was possible in the fictional world of Castalia.
Robin Maconie again had the insight to see the connection in the way Alan Turing visualized “a universal computing machine as an endless tape on which calculations were expressed as a sequence of filled or vacant spaces, not unlike beads on a string”.
As the Common Practice era of Western music came to an end at the close of the 19th century, the mathematically inclined serialism came into its own, and as the decades wore on games of chance made a resurgence, defining much of the music of the 20th century. With the advent of computers the paper-and-pencil method has taken a temporary backseat in favor of methods that introduce programmed chance operations.
Composers like John Cage took to the I Ching with as much tenacity as the character Elder Brother did in Hesse’s book. Karlheinz Stockhausen meanwhile used his music as a means to make connections between myriad subjects and to create his own unique ‘Magic Theater’. Cybernetics and information theory each contributed to the thinking of these and other composers.
“Dice Music in the Eighteenth Century,” Music and Letters 59, pp. 180–87 (see pp. 184–185).
Lawrence M. Zbikowski, Conceptualizing Music: Cognitive Structure, Theory and Analysis, Oxford University Press, 2002.
Alvin Curran, “The New Common Practice.”
Robin Maconie, Other Planets: The Complete Works of Karlheinz Stockhausen 1950–2007, Rowman & Littlefield, 2016.
A set of musician’s dice has been made that offers up numerous possibilities for the practicing musician. Using random processes doesn’t have to be just for avant-garde composers anymore!
"The Musician’s Dice are patented, glossy black 12-sided dice, engraved in silver with the chromatic scale. They can be used in any number of ways – they bring the element of chance into the musical process. They're great for composing Aleatory and 12 tone-music, and as a basis for improvisation – they’re really fun in a jam session. They also make an effective study tool: they can be used as “musical flash cards” when learning harmony, and their randomness makes for fresh and challenging exercise in sight-singing and ear training. Plus, they look really cool on the coffee table, and give you a chance to throw around words like "aleatory.""
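The chance operations these dice invite are easy to sketch in code. What follows is a minimal, hypothetical Python sketch (not anything from the dice's makers): a uniform roll of the 12-sided chromatic die gives an aleatoric melody, while shuffling all twelve pitch classes gives a 12-tone row for serial writing.

```python
import random

# The twelve pitch classes engraved on a chromatic musician's die.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F",
             "F#", "G", "G#", "A", "A#", "B"]

def roll_melody(num_rolls, rng=random.Random()):
    """Roll the 12-sided die repeatedly: a chance melody, repeats allowed."""
    return [rng.choice(CHROMATIC) for _ in range(num_rolls)]

def roll_tone_row(rng=random.Random()):
    """Shuffle all twelve pitch classes into a 12-tone row, no repeats."""
    row = CHROMATIC[:]
    rng.shuffle(row)
    return row

melody = roll_melody(8)   # e.g. eight dice throws for a phrase
row = roll_tone_row()     # a complete row for 12-tone writing
```

The two functions mirror the two uses the blurb mentions: pure aleatory on one hand, twelve-tone material on the other.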
Below, two musicians play around with these dice.
Read the rest of the Radiophonic Laboratory series.
One of the key researchers and musicians exploring the new frontiers of science and music at Bell Labs was Laurie Spiegel. She was already an accomplished musician when, in 1973 at the age of twenty-eight, she started working with interactive compositional software on the computers at Bell.
Laurie brought her restless curiosity and ceaseless inquiry with her to Bell Labs. She was the kind of person who could see the creative potential in the new tools the facility was creating and make something timeless. Her skill and ability in doing so was something she had prepared herself for through a scholar’s devotion to musical practice and study.
She was interested in stringed instruments, the ones you strum and pluck. She picked up guitar, banjo, and mandolin for starters and learned to play them all by ear in her teens. She excelled in high school, graduated early, and got a jump start on a more refined education. Shimer College had an early entrance program and she made the cut. With Shimer as a springboard she got into their study abroad program and left her native Chicago to join the scholars at Oxford University. While pursuing her degree in social sciences she decided she had better teach herself Western music notation; it was essential if she was to start writing down her own compositions. She managed to stay on at Oxford for an additional year after completing her undergraduate degree. In between classes she would commute to London for lessons with composer and guitarist John W. Duarte, who fleshed out her music theory and composition.
She was no slacker.
Her devotion to music continued to flourish when she came back to the States. In New York she worked briefly on documentary films in the field of social science, but the drive to compose music pushed her back onto the path of continuing education. So she headed back to school again, at Juilliard, going for a master’s in composition. Hall Overton, Emmanuel Ghent and Vincent Persichetti were among her teachers between 1969 and 1972. Jacob Druckman was another; she became his assistant and ended up following him to Brooklyn College. While there she also managed to find some time to research early American music under H. Wiley Hitchcock before completing her MA in 1975.
Laurie was no stranger to work, and to making the necessary sacrifices so she could achieve her aims and full artistic potential. Laurie’s thinking is multidimensional, and her art multidisciplinary. Working with moving images was a natural extension of her musicality. She supported herself in the 70s in part through soundtrack composition at Spectra Films, Valkhn Films, and the Experimental TV Lab at WNET (PBS). TV Lab provided artists with equipment to produce video pieces through an artist-in-residence program. Laurie held that position in 1976 and composed series music for the TV Lab's weekly "VTR—Video and Television Review". She also did the audio sound effects for director David Loxton’s SF film The Lathe of Heaven, based on the novel by Ursula K. Le Guin, and produced for PBS by WNET.
Speaking of the Experimental TV Lab she said, "They had video artists doing really amazing stuff with abstract video and image processing. It was totally different from conventional animation of the hand-drawn or stop-motion action kind. Video was much more fluid and musical as a form."
Going to school and scoring for film and television wasn’t enough to satisfy Laurie’s endless curiosity. Besides playing guitar, she’d been working with analog modular instruments by Buchla, Electrocomp, Moog and Ionic/Putney. After a few years of experimentation she outgrew these synths and started seeking something with the control of logic and a larger capacity for memory. This led Laurie to the work being done with computers and music at Bell Labs in Murray Hill. At first she was a resident visitor at Bell Labs, someone who got the privilege of working and researching there, but not the privilege of being on Ma Bell’s payroll.
Laurie had already been playing the ALICE machine when the Bell Telephone Company needed to film someone playing it for the 50th anniversary of the Jazz Singer. She had already become something of a fixture at Murray Hill, so the company hired her as a musician. Not that the engineers at Bell who created the musical instruments were unmusical, but they were engineers. With her background as a composer and her interest in how technology could open up musical expression, she was the perfect fit.
In 1973 while still working on her Masters she started getting her GROOVE on at Bell Labs, using the system developed by Max Mathews and Richard Moore.
GROOVE proved the perfect vehicle for expressing Spiegel’s creative ideas. While Max Mathews was bouncing around between a dozen different departments, Laurie was settling in at Murray Hill.
In the liner notes to the reissue of her Expanding Universe album created with GROOVE she wrote, “Realtime interaction with sound and interactive sonic processes were major factors that I had fallen in love with in electronic music (as well as the sounds themselves of course), so non-realtime computer music didn’t attract me. The digital audio medium had both of the characteristics I so much wanted. But it was not yet possible to do much at all in real time with digital sound. People using Max’s Music V were inputting their data, leaving the computer running over the weekend, and coming back Monday to get their 30 seconds of audio out of the buffer. I just didn’t want to work that way.
But GROOVE was different. It was exactly what I was looking for. Instead of calculating actual audio signal, GROOVE calculated only control voltage data, a much lighter computational load. That the computer was not responsible for creating the audio signal made it possible for a person to interact with arbitrarily complex computer-software-based logic in real time while listening to the actual musical output. And it was possible to save both the software and the computed time functions to disk and resume work where we left off, instead of having to start all over from scratch every time or being limited to analog tape editing techniques ex post facto of creating the sounds in a locked state on tape.”
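Spiegel's point about computational load can be illustrated with a toy sketch. The numbers and the envelope function below are hypothetical, not GROOVE's actual code; the idea is simply that a control function updated a hundred times a second is orders of magnitude cheaper to compute than an audio signal that needs tens of thousands of samples a second.

```python
# Toy illustration of control-rate vs. audio-rate computation.
# Both rates here are assumptions for the sake of the arithmetic.
AUDIO_RATE = 48_000     # samples/sec needed to synthesize audio directly
CONTROL_RATE = 100      # updates/sec for a control-voltage function

def linear_envelope(t, attack=0.5, total=2.0):
    """A simple control function: ramp up over `attack` seconds,
    then ramp back down to zero by `total` seconds. Returns 0..1."""
    if t < attack:
        return t / attack
    return max(0.0, 1.0 - (t - attack) / (total - attack))

seconds = 2.0
# The computer only has to emit this low-rate control curve...
control_values = [linear_envelope(i / CONTROL_RATE)
                  for i in range(int(seconds * CONTROL_RATE))]

# ...while the analog synthesizer does the audio-rate work,
# a factor of AUDIO_RATE / CONTROL_RATE fewer values to compute.
savings = AUDIO_RATE / CONTROL_RATE
```

With these assumed rates the computer computes 480 times fewer values than it would synthesizing the waveform itself, which is why real-time interaction was feasible on 1970s hardware.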
RECORD IN A BOTTLE
Laurie’s most famous work is also the one most likely to be heard by space aliens. It was a realization of Johannes Kepler’s Harmonices Mundi using the GROOVE system, and it was the first track featured on the golden phonograph records placed aboard the Voyager spacecraft launched in 1977. The records contain sounds and images intended to portray the vast diversity of life and culture on planet Earth. They form a kind of time capsule, a message in a bottle sent off into interstellar space.
Carl Sagan chaired the committee that determined what contents should be put on the record. He said “The spacecraft will be encountered and the record played only if there are advanced space-faring civilizations in interstellar space, but the launching of this 'bottle' into the cosmic 'ocean' says something very hopeful about life on this planet."
A message in a bottle isn’t the most efficient way of communicating if your purpose is to reach a specific person in a short amount of time. If, however, you trust in fate or providence and the natural waves of the ocean to guide the message to whomever it is meant to be received by, it can be oracular.
Like many musicians before her Laurie had been fascinated by the Pythagorean dream of a music of the spheres. When she set about to realize Kepler’s 17th century speculative composition, she had no idea her music would actually be traveling through the spheres. Kepler’s Harmonices Mundi was based on the varying speeds of orbit of the planets around the sun. He wanted to be able to hear “the celestial music that only God could hear” as Spiegel said.
"Kepler had written down his instructions but it had not been possible to actually turn it into sound at that time. But now we had the technology. So I programmed the astronomical data into the computer, told it how to play it, and it just ran."
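We don't have Spiegel's actual GROOVE program, but the kind of mapping she describes can be sketched. By Kepler's second law a planet moves fastest at perihelion and slowest at aphelion, and Kepler assigned each planet a musical interval from the ratio of those extreme angular speeds, which for an orbit of eccentricity e works out to ((1+e)/(1-e))². The eccentricities below are real; the conversion to equal-tempered semitones is an illustrative choice of mine, not hers.

```python
import math

# Orbital eccentricities of the six planets Kepler worked with.
PLANETS = {
    "Mercury": 0.206, "Venus": 0.007, "Earth": 0.017,
    "Mars": 0.093, "Jupiter": 0.048, "Saturn": 0.054,
}

def speed_ratio(eccentricity):
    """Ratio of angular speed at perihelion to aphelion.

    Angular speed goes as 1/r^2 (conservation of angular momentum),
    and r ranges from a(1-e) to a(1+e), giving ((1+e)/(1-e))**2."""
    return ((1 + eccentricity) / (1 - eccentricity)) ** 2

def to_semitones(ratio):
    """Express a frequency ratio as a span in equal-tempered semitones."""
    return 12 * math.log2(ratio)

# Each planet's "song" spans this interval as it sweeps its orbit.
intervals = {name: to_semitones(speed_ratio(e))
             for name, e in PLANETS.items()}
```

Run this and eccentric Mercury spans well over an octave while near-circular Venus barely wavers, which matches Kepler's famous observation that Venus sings almost a single unwavering tone.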
The resulting sounds aren’t the kind of thing you’d typically put on your turntable after getting home from a hectic day to relax. The sounds are actually kind of agitating. Yet if you listen to the piece as the product of a mathematical and philosophical exercise it can still be enjoyable.
Other sounds that can be heard on the Voyager Golden Records include spoken greetings from Earth-people in fifty-five languages, Johnny B. Goode by Chuck Berry, Melancholy Blues by Louis Armstrong, and music from all around the world, from folk to classical. Each record is encased in a protective aluminum jacket and includes a cartridge and a needle for the aliens. Symbolic instructions, kind of like those for building a piece of furniture from Ikea, show the origin of the spacecraft and indicate how the record is to be played. In addition to the music and sounds, 115 images are encoded in analog form.
Laurie was in Woodstock, New York when she received a phone call requesting the use of her music for the record. “I was sitting with some friends in Woodstock when a telephone call was forwarded to me from someone who claimed to be from NASA, and who wanted to use a piece of my music to contact extraterrestrial life. I said, 'C'mon, if you're for real you better send the request to me through the mail on official NASA letterhead!'”
It turned out to be the real deal and not just a prank on a musician.
In 2012 Voyager 1 entered interstellar space. And it’s still out there running, sending back information. Laurie says, “It's extremely heartening to think that our species, with all its faults, is capable of that level of technical operation. We're talking Apple II level technology, but nobody's had to go out there and reboot them once!"
AN EXPANDING UNIVERSE
Laurie explored many other ideas within the structure of the highly adaptable GROOVE system, taking naps in the Bell Labs anechoic chamber when she needed a rest during the frequent all-nighters she pulled to get her work out into the world.
But getting those pieces into a fashion fit for a golden record, or more common earthbound vinyl, was not easy. The results, however, were worth the effort of working with a system that took up space in multiple rooms.
“Down a long hallway from the computer room …was the analog room, Max Mathews’s lab, room 2D-562. That room was connected to the computer room by a group of trunk cables, each about 300 feet long, that carried the digital output of the computer to the analog equipment to control it and returned the analog sounds to the computer room so we could hear what we were doing in real time. The analog room contained 3 reel-to-reel 1/4” two-track tape recorders, a set of analog synthesizer modules including voltage-controllable lab oscillators (each about the size of a freestanding shoe box), and various oscillators and filters and voltage-controllable amplifiers that Max Mathews had built or acquired. There was also an anechoic sound booth, meant for recording, but we often took naps there during all-nighters. Max’s workbench would invariably have projects he was working on on it, a new audio filter, a 4-dimensional joystick, experimental circuits for his latest electric violin project, that kind of stuff.
Because of the distance between the 2 rooms that comprised the GROOVE digital-analog-hybrid system, it was never possible to have hands-on access to any analog synthesis equipment while running the computer and interacting with its input devices. The computer sent data for 14 control voltages down to the analog lab over 14 of the long trunk lines. After running it through 14 digital-to-analog converters (which we each somehow chose to calibrate differently), we would set up a patch in the analog room’s patch bay, then go back to the computer room and the software we wrote would send data down the cables to the analog room to be used in the analog patch. Many many long walks between those two rooms were typically part of the process of developing a new patch that integrated well with the controlling computer software we were writing.
So how was it possible to record a piece with those rooms so far apart? We were able to store the time functions we computed on an incredibly state-of-the-art washing-machine-sized disk drive that could hold up to a whopping 2,400,000 words of computer data, and to store even more data on a 75 ips computer tape drive. When ready to record, we could walk down and disconnect the sampling rate oscillator at the analog lab end, walk back and start the playback of the time functions in the computer room, then go back to the analog lab, get our reel-to-reel deck physically patched in, threaded or rewound, put into record mode and started running. Then we’d reconnect the sampling rate oscillator, which would start the time functions actually playing back from the disk drive in the other room, and then the piece would be recorded onto audio tape.”
Every piece on her album, The Expanding Universe, was recorded at Bell Labs. She computed in real time the envelopes for individual notes, how they were placed in the stereo field and their pitches. “Above the level of mere parameters of sound were more abstract variables, probability curves, number sequence generators, ordered arrays, specified period function generators, and other such musical parameters as were not, at the time, available to composers on any other means of making music in real time.”
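One of those "more abstract variables", a probability curve over pitches, can be sketched in a few lines. This is an illustrative example, not Spiegel's code; the pitch set and the bell-shaped weights are invented for the demonstration.

```python
import random

# Hypothetical pitch set (MIDI note numbers, a C major scale) and a
# made-up bell-like probability curve biasing choices toward the middle.
PITCHES = [60, 62, 64, 65, 67, 69, 71, 72]
WEIGHTS = [1, 2, 4, 8, 8, 4, 2, 1]

def draw_notes(n, rng=random.Random(0)):
    """Draw n pitches, each chosen according to the probability curve,
    rather than being specified note by note."""
    return rng.choices(PITCHES, weights=WEIGHTS, k=n)

notes = draw_notes(12)
```

Composing then means shaping the curve, not dictating the notes: raise a weight and that region of the scale becomes more likely, without any individual note being fixed in advance.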
Computer musicians today who are used to working with programs like Reaktor, Pure Data, Max/MSP, Ableton, SuperCollider and a slew of others take for granted the ability to manipulate the sound as it is being made, on the fly, and with a laptop. Back then it was state of the art to be able to do these things, but doing it required huge efforts, and took up a lot of space.
During the height of the progressive rock music era, making music with computers was also risky business on the level of personal politics. Computers weren’t seen in a positive light. They were the tool of the Establishment, man. Used for calculating the path of nuclear missiles and storing your data in an Orwellian nightmare. Musicians who chose to work with technology were often despised at this time. There was an attitude that you were ceding your creative humanity to a cold dead machine. “Back then we were most commonly accused of attempting to completely dehumanize the arts,” she said. This macho prog rock tenor haunted Laurie, despite her being an accomplished classical guitarist, and capable of shredding endless riffs on an electrified axe if she chose to.
She also took risks in her compositions inside the avant-garde circles she frequented. Her music is full of harmony when dissonance was all the rage. “It wasn’t really considered cool to write tonal music,” she said, speaking of the power structures at play in music school. All I know is that it’s a good thing she listened to the music she had inside of her.
Between 1974 and 1979 Laurie pursued the idea that GROOVE could be used to create video art with just a little tweaking of the system. Unlike the hours of music released on her Expanding Universe album, her video work at Bell didn’t get the documentation it deserved. This was in part due to the system’s early demise; hardware changes at the lab prevented many records and tracings from being left behind.
VAMPIRE, however, is still worth mentioning. It stands for Video And Music Program for Interactive Realtime Exploration/Experimentation. Laurie was able to turn GROOVE into a VAMPIRE with the help of computer graphics pioneer Ken Knowlton. Ken was also an artist and a researcher in the field of evolutionary algorithms, something else Laurie would later take up and apply to music. In the 60’s Knowlton had created BEFLIX (Bell Flicks), a programming language for bitmap computer-produced movies. After Laurie got to know him they soon started collaborating. It was another avenue for her to pursue her ideas for making musical structures visible.
Laurie had reasoned that if computer logic and languages had made it possible to interact with sound in real time, then the GROOVE system should be powerful enough to handle real-time manipulation of graphics and imagery. She started testing this theory using a program called RTV (Real Time Video) and a routine given to her by Ken. She wrote a drawing program, similar to what would now be called Paint. It became the basis on which VAMPIRE was built.
With Ken she worked out a routine for a palette of 64 definable bitmap textures. These could be used as brushes, alphabet letters, or other images. The textures were entered on a box with 10 columns, each column having 12 buttons representing a bit that could be on or off. This is how she entered the visual patterns.
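The button-box scheme maps naturally onto bit manipulation. Here is a hypothetical sketch, assuming (as the description suggests) that each of the 10 columns of 12 on/off buttons packs into a single 12-bit integer, so a whole pattern is just ten small numbers.

```python
# Hypothetical encoding of the 10-column, 12-button pattern box:
# one bit per button, one integer per column.
N_COLUMNS, N_BITS = 10, 12

def column_to_bits(buttons):
    """Pack a column of 12 on/off button states into one integer."""
    assert len(buttons) == N_BITS
    value = 0
    for i, pressed in enumerate(buttons):
        if pressed:
            value |= 1 << i
    return value

def bits_to_column(value):
    """Unpack a 12-bit integer back into 12 button states."""
    return [bool(value >> i & 1) for i in range(N_BITS)]

# A checkerboard-like test pattern: alternate buttons per column.
pattern = [column_to_bits([bool((r + c) % 2) for r in range(N_BITS)])
           for c in range(N_COLUMNS)]
```

The round trip between buttons and integers is lossless, which is what makes it plausible to store, shuffle and re-enter such patterns, whether from a button box or from punched cards.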
In addition to weaving strands of sound Laurie was also a hand weaver. Cards with small holes in them have often been used over the years as one approach to the art form. Card weaving is a way to create patterned woven bands, both beautiful and sturdy. Some may think the cards are a simple tool, but they can produce weavings of infinite design and complexity. Hand weaving cards are made out of cardboard or cardstock, with holes in them for the threads, very similar to the Hollerith punch cards used for programming computers. She struck upon the idea that she could create punch cards to enter batches of patterns via the card reader on the computer. After she consulted some of her weaving books she made a large deck of the cards to be able to shuffle and input into the system.
Laurie quickly found that she enjoyed playing the drawing parameters just like someone would play a musical instrument. Instead of changing pitch, duration and timbre, she could change the size, color and texture of an image as she drew it in real time with switches and knobs, making it appear on the monitor. Her skills as a guitarist translated directly to this ability. One hand would do the drawing, perhaps the same hand that did the strumming and plucking of the strings. The other hand would change the parameters of the image using a joystick and the other tools, just as it might change chords on one of her lutes, banjos or mandolins.
She saw the objects on the screen as melodies, but it was just one line of music. She wanted more lines, as counterpoint was her favorite musical form. She wanted to be able to weave multiple strands of images together. So she wrote into the program another realtime device to interact with: a square box of 16 buttons for typical contrapuntal operations as applied to images. This gave her a considerable expansion of options and variables to play with.
After all this work she eventually hit a wall of what she could achieve with VAMPIRE in terms of improvisation. “The capabilities available to me had gotten to be more than I could sensitively and intelligently control in realtime in one pass to anywhere near the limits of what I felt was their aesthetic potential.” It had reached the point where she needed to think of composition.
Ken Knowlton’s work with algorithms was beginning to rub off on her and she started to think of how “powerful evolutionary parameters in sonic composing, and the idea of organic or other visual growth processes algorithmically described and controlled with realtime interactive input, and of composing temporal structures that could be stored, replayed, edited, added to (‘overdubbed’ or ‘multitracked’), refined, and realized in either audio or video output modalities, based on a single set of processes or composed functions, made an interface of the drawing system with GROOVE's compositional and function-oriented software an almost inevitable and irresistible path to take. It would be possible to compose a single set of functions of time that could be manifest in the human sensory world interchangeably as amplitudes, pitches, stereo sound placements, et cetera, or as image size, location, color, or texture (et cetera), or (conceivably, ultimately) in both sensory modalities at once.”
Ever the night owl Laurie said of her work with the system, “Like any other vampire, this one consistently got most of its nourishment out of me in the middle of the night, especially just before dawn. It did so from 1974 through 1979, at which time its CORE was dismantled, which was the digital equivalent of having a stake driven through its art.”
ECHOES OF THE BELL
The echoes of Laurie’s time spent at Bell Laboratories can be found in the work she has done since then, even as she was devastated by the death of GROOVE and VAMPIRE.
She went on to write the Music Mouse software in 1986 for Macintosh, Amiga and Atari computers, and founded the New York University Computer Music Studio. She has continued to write about music for many journals and publications and to compose, applying her knowledge of algorithmic composition and information theory in her work.
Now the tools for making computer music can be owned by many people and used in their own home studios, but the echo of the Bell is still heard.
This article only scratches the surface of Laurie's life and work. A whole book could be written about her, and I hope someone will.
The liner notes to the 2012 reissue of Expanding Universe
At Bell Labs Max Mathews was the granddaddy of all its music makers. If you use a computer to make or record music, he is your granddaddy too. In 1957 Max wrote a program for a digital computer called Music I. It was a landmark demonstration of the ability to write code commanding a machine to synthesize music. Computers can do things and play things that humans alone cannot, and Music I opened up a world of new timbral and acoustic possibilities. This was a perfect line of inquiry for the director of Bell Laboratories’ Behavioral and Acoustic Research Center, where Mathews explored a spectrum of ideas and technologies between 1955 and 1987. Fresh out of MIT, where he had received an Sc.D. in electrical engineering, Mathews was ready to get to work, and Music I was only the beginning of a long creative push in technology and the arts.
Max’s corner of the sprawling laboratory in Murray Hill, New Jersey carried out research in speech communication, speech synthesis, human learning and memory, programmed instruction, the analysis of subjective opinions, physical acoustics, industrial robotics and music.
Max followed the Music I program with II, III, IV and V, each iteration extending its capabilities and widening its parameters. These programs carried him through a decade of work and achievement. As noted in the chapter on the synthesis of speech, Max had created the musical accompaniment to “Daisy Bell (A Bicycle Built for Two)”, later made famous by the fictional computer HAL in Stanley Kubrick’s 2001: A Space Odyssey.
In 1970 he began working with Richard Moore to create the GROOVE system, intended to be a “musician-friendly” computer environment. The earlier programs broke incredible new ground, but using them leaned more toward those who could program computers and write code in their esoteric languages than toward the average musician or composer of the time. GROOVE, a hybrid digital-analog system whose name stood for Generating Realtime Operations On Voltage-controlled Equipment, was the next step in bringing computer music to its potential users.
Max notes, “Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines–they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
But Chowning’s FM synthesis had yet to make its mark at the time GROOVE was being created. It was still the 70’s, and affordable computers and synthesizers had yet to make it into homes outside of the most devoted hobbyists’. GROOVE was a first step toward making computer music in real time. The setup included an analog synth with a computer and monitor. The computer’s memory made it appealing to musicians, who could store their manipulations of the interface for later recall. It was a clever workaround for the limitations of each technology: the computer stored the musical parameters while the synth created the timbres and textures without relying on digital programming. This setup allowed creators to play with the system and fine-tune what they wanted it to do, for later re-creation.
Bell Labs had acquired a Honeywell DDP-224 computer from MIT to use specifically for sound research; this is what GROOVE was built on. The DDP-224 was a 24-bit transistor machine that used magnetic core memory to store data and program instructions. Its disk storage also meant libraries of programming routines could be written, allowing users to create customized logic patterns. A composition could be tweaked, adjusted and mixed in real time on the knobs, controls, and keys. In this manner a piece could be reviewed as a whole or in sections and then replayed from the stored data.
When the system was first demonstrated in Stockholm at the 1970 conference on Music and Technology organized by UNESCO, music by Bartók and Bach was played. A few years later Laurie Spiegel would grasp the unique compositional possibilities of the system and take it to the max.
In the meantime Max himself was a guy in demand. IRCAM (Institut de Recherche et Coordination Acoustique/Musique) brought him on board as a scientific advisor as it built its own state-of-the-art sound laboratory and studios in France between 1974 and 1980.
In 1987 Max left his position at Bell Labs to become a Professor of Music (Research) at Stanford University. There he continued to work on musical software and hardware, with a focus on using the technology in a live setting. “Starting with the GROOVE program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the Radio-Baton, plus a program, the Conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
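Mathews's improvisational algorithms were written in C, and we don't have his code; as an illustration only, here is a tiny Python sketch of the kind of algorithm he describes, mixing a precomposed sequence with a random function. The pitch set, probability and function names are all hypothetical.

```python
import random

# A precomposed sequence (MIDI note numbers, a C major scale run).
PRECOMPOSED = [60, 62, 64, 65, 67, 69, 71, 72]

def perform(length, chance=0.25, rng=random.Random(7)):
    """Walk through the precomposed sequence, but with probability
    `chance` substitute a randomly chosen note from the same pitch
    set -- a minimal blend of fixed material and random function."""
    out = []
    for i in range(length):
        note = PRECOMPOSED[i % len(PRECOMPOSED)]
        if rng.random() < chance:
            note = rng.choice(PRECOMPOSED)
        out.append(note)
    return out

phrase = perform(16)
```

A live-performance gesture, the third ingredient Mathews mentions, would simply be another input here, e.g. a controller value scaling `chance` in real time.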
Today the lineage of the MUSIC software Max wrote through many versions lives on in Max/MSP. Named in honor of Max Mathews, the software is a powerful visual programming language for multimedia performance that has grown out of its musical core. The program has been alive, well and growing for more than thirty years and has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The software is designed and maintained by the company Cycling ’74.
Building off the gains in musical software developed by Mathews, Miller Smith Puckette (MSP) started to work on a program originally called The Patcher at IRCAM in 1985. This first version for Macintosh had a graphical interface that allowed users to create interactive scores. It wasn’t yet powerful enough to do real time synthesis. Instead it used MIDI and similar protocols to send commands to external sound hardware.
Four years later Max/FTS (Faster Than Sound) was developed at IRCAM. This version was ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT computer. This time around it could do real-time synthesis using an internal hardware digital signal processor (DSP), making it a forerunner to the MSP extensions that would later be added to Max. 1989 was also the year the software was licensed to Opcode, which launched a commercial version at the beginning of the next decade.
Opcode held onto the program until 1997. During those years a talented console jockey named David Zicarelli further extended and developed the promise of Max. Yet Opcode wanted to cancel their run with the software. Zicarelli knew it had even further potential, so he acquired the rights and started his own company, Cycling ’74. His timing proved fortuitous, as Gibson Guitar ended up buying Opcode and, after owning it for a year, shut it down. Such is the fabulous world of silicon corporate buyouts.
Miller Smith Puckette had in the meantime released the independent and open-source composition tool Pure Data (Pd). It was a fully redesigned tool that still fell within the same tradition as his earlier program for IRCAM. Zicarelli, sensing that a fruitful fusion could be made, released Max/MSP in 1997, the MSP portion being derived from Puckette’s work on Pure Data. The two have been inseparable ever since.
The achievement meant that Max was now capable of real time manipulation of digital audio signals sans dedicated DSP hardware. The reworked version of the program was also something that could work on a home computer or laptop. Now composers could use this powerful tool to work in their home studios. The musical composition software that had begun on extensive and expensive mainframes was now available to those who were willing to pay the entry fee. You didn’t need the cultural connections it took to work at places like Bell Labs or IRCAM. And if you had a computer but couldn’t afford the commercial Max/MSP you could still download Pd for free. The same is true today.
Extension packs were now being written by other companies, contributing to the ecology around Max. In 1999 the Netochka Nezvanova collective released a suite of externals that added extensive real-time video control to Max. This made the program a great resource for multimedia artists. Various other groups and companies continued to tinker and add things on.
It got to the point where Max Mathews himself, well into his golden years, was learning how to use the program named after him. Mathews received many accolades and appointments for his work. He was a member of the IEEE, the Audio Engineering Society, the Acoustical Society of America, the National Academy of Sciences, and the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences. He held a Silver Medal in Musical Acoustics from the Acoustical Society of America and was a Chevalier de l'Ordre des Arts et des Lettres, République Française.
Mathews died of complications from pneumonia on April 21, 2011, in San Francisco. He was 84. He was survived by his wife, Marjorie, his three sons and six grandchildren.
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University
Read the rest of the Radiophonic Laboratory series.
One of the worst symphony orchestras ever to have existed in the world now gets the respect it is due in a retrospective book published by Soberscove Press, collecting the memories, memorabilia and photographs of its talented members. The World’s Worst: A Guide to the Portsmouth Sinfonia, edited by Christopher M. Reeves and Aaron Walker, though long overdue, has arrived just in time.
For those unfamiliar with the Portsmouth Sinfonia, here is the CliffsNotes version: founded by a group of students at the Portsmouth School of Art in England in 1970, this “scratch” orchestra was generally open to anyone who wanted to play. It ended up drawing art students who liked music but had no musical training or, if they were actual musicians, they had to choose and play an instrument that was entirely new to them. One of the other limits or rules they set up was to play only compositions that would be recognizable even to those who weren’t classical music buffs: the William Tell Overture being one example, Beethoven’s Fifth Symphony and Also Sprach Zarathustra being others. Their job was to play the popular classics, and to do it as amateurs. English composer Gavin Bryars was one of their founding members. The Sinfonia started off as a tongue-in-cheek performance art ensemble but quickly took on a life of its own, becoming a cultural touchstone over the decade of its existence, with concerts, albums, and a hit single on the charts.
The book has arrived just in time because one of the lenses through which the work of the Portsmouth Sinfonia can be viewed is that of populism; and now, when politics across the planet has seen a resurgence of populist movements, the music of the Portsmouth Sinfonia can be recalled, reviewed, reassessed, and their accomplishments given a wider renown.
One way to think of populism is as the antithesis of elitism. I have to say I agree with noted essayist John Michael Greer and his frequent tagline that “the opposite of one bad idea is usually another bad idea”. Populism may not be the answer to the world’s struggle against elitism, yet it is a reaction, knee-jerk as it may be. Anyone who hasn’t been blindsided by the bourgeois will know the soi-disant elite have long looked down on those they deem lesser with an upturned nose and a sneer. Many of those sneering people have season tickets to their local symphony orchestra. They may not go because they are music lovers, but because it is a signifier of their class and social status. As much as the harmonious chords played under the guidance of the conductor’s swiftly moving baton induce in the listener a state of beatific rapture, there is, on the other hand, the very idea that attending an orchestral concert puts one at the height of snobbery. After all, orchestral music is not for everyone, as ticket prices ensure.
The Portsmouth Sinfonia was a remedy to all that. It put classical music back into the hands, and mouthpieces, of the people. It brought a sense of lightheartedness and irreverence into the stuffy halls that were so often filled with dour, serious people listening in such a serious way to such serious music. The Portsmouth Sinfonia made the symphony fun again, and showed that the canon of the classics shouldn’t just be left to the experts. Musical virtue wasn’t just for virtuosos, but could be celebrated by anyone who was sincere in their love of play.
Still, the Sinfonia was also more than that. It was an incubator for creative musicians and a doorway from which they could launch and explore what composer Alvin Curran has called the “new common practice”, that grab bag of twentieth-century compositional tools, tricks, and approaches, from the seriality of Schoenberg to the madcap tomfoolery of Fluxus. This book shows some of these explorations through the voices of the members of the Sinfonia as they recollect their ten-year experiment at playing, and being playful with, the classical hits of the ages.
As Brian Eno noted in the liner notes to Portsmouth Sinfonia Plays the Popular Classics, essential reading that is provided in the book, “many of the more significant contributions to rock music, and to a lesser extent avant-garde music, have been made by enthusiastic amateurs and dabblers. Their strength is that they are able to approach the task of music making without previously acquired solutions and without a too firm concept of what is and what is not musically possible.” Thus they have not been brainwashed, I mean trained, to the strict standards and world view of the classical career musician.
Gavin Bryars, another founding member of the orchestra, speaks to this in an interview with Michael Nyman, also included in the book. He said, “Musical training is geared to seeing your output in the light of music history.” Such training is what can make the job of the classical musician stressful and stifling. Stressful because of the degree of perfection players are required to achieve, and stifling because deviation, creative or otherwise, is disavowed and disallowed. I’m reminded of how Karlheinz Stockhausen, when exploring improvisation and intuitive music, had to work hard at un-training his classically trained ensemble of musicians in the matter of being freed from the score.
The amateurs in the Portsmouth Sinfonia were free from the weight of musical history. If a wrong note was played, and many were, they could just get on with it, and let it be. This created performances full of humor and happy accidents even as they tried to render the music correctly as notated.
Training and discipline in music can give a kind of perfectionist’s freedom when it comes to playing with total accuracy, but take that freedom away when it comes to experimenting and exploring. Under the strictures of the conductor’s baton, playing in the symphony seems to be more about taking marching orders from a dictator than playing equally with a group of fellow musicians. John Farley, who took on the role of conductor within the Sinfonia, held his baton lightly. He wasn’t so much telling the other musicians how to play, or even keeping time, but acting out the part of what an audience expects of a conductor, serving as something of a foil for the musicians he was collaborating with in the performance.
One of the essential texts included in this book is “Collaborative Work at Portsmouth,” written by Jeffrey Steele in 1976. His piece shows how the Sinfonia really grew out of social concerns and a search for new ways of working together. Steele’s essay allies itself from the start with the constructivist movement in art, which he had been involved with as a painter. Constructivism was more concerned with the use of art in practical and social contexts. Associated with socialism and the Russian avant-garde, it took a steely-eyed look at the mysticism and spiritual content so often found in painting and music, on the one hand, and the academicism music can degenerate into on the other. The Portsmouth Sinfonia coalesced in a dialectical resolution between these two tendencies. Again, the opposite of one bad idea is usually another. The Sinfonia bypassed these binary oppositions to create a third pole.
A version of Steele’s essay was originally supposed to be included in an issue of Christopher Hobbs’s Experimental Music Catalogue (EMC). A “Portsmouth Anthology” had been planned as an issue of the Catalogue, and a dummy of the publication even made, but that edition of EMC never came out. It has been rescued here in this book. Other rescued bits include a selection of correspondence.
Besides the populist implications, and the permission given to enthusiastic amateurs to take center stage, the book explores the ideas, philosophies and development of the various artists and musicians who made up the Sinfonia itself. In the recollections section, Ian Southwood, David Saunders, Suzette Worden, Robin Mortimore and the group’s manager and publicist Martin Lewis all reflect on their time as members. Reading these you get the sense that the whole thing was a real community effort, a collaboration in which everyone had a role and took initiative in whatever ways they could.
A long essay by Christopher M. Reeves, one of the editors of the book, puts the whole project into historical and critical context. Reeves writes that their “transition from intellectual deconstruction to punchline symphony is a trajectory in art that has little precedent, and points to a more general tendency in the arts throughout the 1970s, in the move from commenting or critiquing dominant culture, to becoming subordinate to it.” His essay traces the group’s path from its origins as a cross-disciplinary adventure to its eventual appropriation by the mainstream as a kind of novelty music you might hear on an episode of Dr. Demento’s radio show.
Just how seriously was the Sinfonia supposed to be taken?
Reeves puts it thus: “It is within this question that the Sinfonia found a sandbox, muddying up the distinctions between seriousness and goofing off, intellectual exercises and pithy one-liners.” The Sinfonia’s last album was titled Classical Muddly. The waters left behind by them are still full of silt and only partially clear. This book does a good job of straining their efforts through a sieve and presenting the reader with the material and textual ephemera the group left behind, all in a beautifully made tome that is itself a showcase of the collaborative spirit found in the Portsmouth Sinfonia.
Robin Mortimore had told Melody Maker’s Steve Lake in 1974, “The Sinfonia came about partly as a reaction against Cardew [and his similar Scratch Orchestra]. He had the classical training and his audience was very elitist. But he wasn’t achieving anything. We listened, thought, ‘well, why don’t we have a go, it can’t be all that difficult. Y’know if Benjamin Britten and Sir Adrian Boult can do it, why can’t we?’”
In this time when so many artistic and musical institutions are underfunded, the Portsmouth Sinfonia can serve as a model. By having trained musicians play instruments they did not originally know how to play, and by having untrained musicians pick an instrument and be a part of an ensemble, they showed that with diligence anyone can bring the western canon of classical music to life, and often do it with much more humor and life than can be heard in contemporary concert halls.
Just maybe people are tired of being told how to think and what to do. Or how to play an instrument, and what “good” music should be played on that instrument. The World’s Worst is a reminder of the inspiring example of the Portsmouth Sinfonia, and of the accomplishments that can be made when amateurs and non-experts take to the world’s stage and have fun making a raid on the western classical canon, wrong notes and all.
The World’s Worst: A Guide to the Portsmouth Sinfonia, edited by Christopher M. Reeves and Aaron Walker, is available from Soberscove Press.
Just as the folks inside the Sound-House of the BBC’s Radiophonic Workshop continued to refine their approach and techniques to electronic music, another, older sound house back across the pond in America continued to research new “means to convey sounds in trunks and pipes, in strange lines and distances”. Where the BBC Radiophonic Workshop used budget-friendly musique concrete techniques to create its otherworldly incidental music, the pure research conducted at Bell Laboratories ranged widely, and the electronic music systems that arose out of those investigations were incidental, secondary byproducts. The voder and vocoder were just the first of these byproducts.
Hal Alles was a researcher in digital telephony. The fact that he is remembered as the creator of what some consider the first digital additive synthesizer is a quirk of history. Other additive synthesizers had been made at Bell Labs, but these were software programs written for their supersized computers.
Alles needed to sell his digital designs within and outside a company that had long been the lord of analog, and the pitch needed to be interesting. The synthesizer he came up with was his way of demonstrating the company’s digital prowess while entertaining his internal and external clients at the same time. The machine was called the Bell Labs Digital Synthesizer, or sometimes the Alles Machine or ALICE.
It should be noted that Hal bears no relation to the computer in 2001: A Space Odyssey. The engineer recalls those heady days in the late sixties and 1970s. “As a research organization (Bell labs), we had no product responsibility. As a technology research organization, our research product had a very short shelf life. To have impact, we had to create ‘demonstrations’. We were selling digital design within a company with a 100 year history of analog design. I got pretty good at 30 minute demonstrations of the real time capabilities of the digital hardware I was designing and building. I was typically doing several demonstrations a week to Bell Labs people responsible for product development. I had developed one of the first programmable digital filters that could be dynamically reconfigured to do all of the end telephone office filtering and tone generation. It could also be configured to play digitally synthesized music in real time. I developed a demo of the telephone applications (technically impressive but boring to most people), and ended the demo with synthesized music. The music application was almost universally appreciated, and eventually a lot of people came to just hear the music.”
Max Mathews was one of the people who got to see one of these demos, where the telephonic equipment received a musical treatment. Mathews was the creator of the MUSIC-N series of computer synthesis programming languages. He was excited by what Alles was doing and saw its potential. He encouraged the engineer to develop a digital music instrument.
“The goal was to have recording studio sound quality and mixing/processing capabilities, orchestra versatility, and a multitude of proportional human controls such as position sensitive keyboard, slides, knobs, joysticks, etc,” Mathews said. “It also needed a general purpose computer to configure, control and record everything. The goal included making it self-contained and ‘portable’. I proposed this project to my boss while walking back from lunch. He approved it before we got to our offices.”
Harmonic additive synthesis had already been used back in the 1950s by linguistics researchers working on speech synthesis, and Bell Labs was certainly in on the game. Additive synthesis at its most basic works by adding sine waves together to create timbre. The more common technique until that time had been subtractive synthesis, which uses filters to remove or attenuate frequencies from a harmonically rich source in order to shape its timbre.
Computers were able to do additive synthesis with wavetables that had been pre-computed, but it could also be done by mixing the output of multiple sine wave generators. This is essentially what Karlheinz Stockhausen did with Studie II, though he achieved the effect by building up layers of pure sine waves on tape rather than with a pre-configured synth or computer setup.
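The idea of summing sine waves into a timbre is simple enough to sketch in a few lines of code. Below is a minimal illustration in Python, not any actual Bell Labs or ALICE code; the sample rate and partial amplitudes are arbitrary values chosen for the example:

```python
import math

SAMPLE_RATE = 8000  # samples per second; an arbitrary rate for illustration

def additive_tone(fundamental_hz, partial_amps, duration_s):
    """Build a tone by summing sine waves at integer multiples
    (harmonics) of the fundamental -- additive synthesis at its
    most basic. partial_amps[k] is the amplitude of harmonic k+1."""
    n_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE
        value = sum(
            amp * math.sin(2 * math.pi * fundamental_hz * (k + 1) * t)
            for k, amp in enumerate(partial_amps)
        )
        samples.append(value)
    return samples

# A 440 Hz tone with three partials of decreasing amplitude:
tone = additive_tone(440.0, [1.0, 0.5, 0.25], 0.01)
```

Each extra entry in the amplitude list adds one more sine wave to the mix, which is why a machine like ALICE needed a whole bank of oscillators to get rich, evolving timbres in real time.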
That method is laborious. A machine that can do it for you goes a long way towards being able to labor at other things while making music.
ALICE was a hybrid machine in that it used a mini-computer to control a complex bank of sound-generating oscillators. The mini-computer was an LSI-11 by the Digital Equipment Corporation, a cost-reduced version of their PDP-11, a line in production for twenty years starting in 1970. This controlled the 64 oscillators, whose output was then mixed to create a number of distinct sounds and voices. It had programmable sound-generating functions and the ability to accept a number of different input devices.
The unit was outfitted with two 8-inch floppy drives supplied by Heathkit, who made their own version of the LSI-11 and sold it as the H11. AT&T rigged it out with one of their color video monitors. A custom converter sampled the analog inputs and converted them to 7-bit digital values 250 times a second. A number of inputs were used to work with ALICE in real time: two 61-key piano keyboards, 72 sliders alongside various switches, and four analog joysticks, just to make sure the user was having fun. These inputs were interpreted by the computer, which in turn passed them to the sound generators as parameters. The CPU could handle around 1,000 parameter changes per second before it got bogged down.
The sound generators themselves were quite complex. A mere 1,400 integrated circuits were used in their design. Out of the 64 oscillators, the first bank of 32 were used as master signals. This meant ALICE could achieve 32-note polyphony. The second set was slaved to the masters and generated a series of harmonics. If this wasn’t enough sound to play around with, ALICE was also equipped with 32 programmable filters and 32 amplitude multipliers. With the added bank of 256 envelope generators, ALICE had a lot of sound potential and many signal paths that could be explored through her circuitry. All of those sounds could be mixed in many different ways into the 192 accumulators she was also equipped with. Each accumulator was then sent to one of the four 16-bit output channels and reconverted from digital back into analog at the audio output.
Waveforms were generated by looking up the amplitude for a given time in a 64k-word ROM table. There were a number of tricks Alles programmed into the table to reduce the number of calculations the CPU needed to run. 255 timers outfitted with 16 FIFO stacks controlled the whole shebang. The user put events into a timestamp-sorted queue that fed it all into the generator.
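The table-lookup trick is the heart of most digital oscillators: rather than computing a sine for every sample, you precompute one cycle and step through it with a phase accumulator. Here is a minimal sketch of that general technique in Python; the table size and sample rate are hypothetical stand-ins, far smaller than ALICE's 64k-word ROM:

```python
import math

TABLE_SIZE = 1024   # hypothetical table length, standing in for the ROM
SAMPLE_RATE = 8000  # arbitrary sample rate for illustration

# Precompute one full cycle of a sine wave, as a ROM table would hold.
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(freq_hz, n_samples):
    """Read out a waveform by stepping a phase accumulator through the
    table -- each sample is a cheap lookup instead of a sin() call."""
    phase = 0.0
    step = freq_hz * TABLE_SIZE / SAMPLE_RATE  # table entries to advance per sample
    out = []
    for _ in range(n_samples):
        out.append(SINE_TABLE[int(phase) % TABLE_SIZE])
        phase += step
    return out

samples = oscillator(440.0, 16)
```

Changing the step size changes the pitch without recomputing the table, which is why one ROM could serve a whole bank of oscillators running at different frequencies.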
Though the designers claimed the thing was portable, all the equipment made it weigh in at a hefty 300 pounds, making it an unlikely option for touring musicians. As the world’s first true digital additive synthesizer, it was quite the boat anchor.
Completed in 1976, the machine had only one full-length composition recorded for it, though a number of musicians, including Laurie Spiegel, whose work will be explored later, played the instrument in various capacities. For the most part, though, the Alles synth was brushed aside; even if the scientists and engineers at Bell Labs were tasked to engage in pure research, they still had a business to answer to. A marketing use was found for Hal’s invention once again in 1977.
In that year the Motion Picture Academy was celebrating the 50th anniversary of the talkies. The sound work for The Jazz Singer, the first talking picture, had been done by Western Electric, with their Vitaphone system technology. The successful marriage of moving image and sound first seen and heard in that movie wouldn’t have been possible without the technology developed by the AT&T subsidiary and Ma Bell was still keen to be in on the commemoration of the film. ALICE is what they chose to use as the centerpiece for the event.
A Bell Labs software junkie by the name of Doug Bayer was brought in to improve the operating system of the synth and try to make the human interface a bit more user-friendly. The instrument was flown to Hollywood at considerable risk. The machine was finicky enough without transporting it. Taking it on a plane, where a single bump could whack out its components and potentially send it into meltdown mode, was not out of the question.
So they hired musician and composer Laurie Spiegel, who’d already been working at the Labs without pay, to be filmed playing ALICE. This would be shown in the event that the musician they hired to play it live, Roger Powell, wasn’t able to do so due to malfunction. This film is the only known recording of the instrument in performance.
Yet to hear how the Bell Labs Digital Synthesizer sounds, look no further than Don Slepian’s album Sea of Bliss. Max Mathews had hired Slepian to work with the synth as an artist in residence between 1979 and 1982. Don had been born into a scientific family. From an early age he demonstrated technical talent and musical ability. He had begun making music in 1968, programming his own computers, soldering together his own musical circuits, and experimenting with tape techniques. Working with the Defense Advanced Research Projects Agency (DARPA), Don served as a tester on an early iteration of the internet, and for a time he lived in Hawaii and played as a synthesizer soloist with the Honolulu Symphony. All of this made him a perfect fit as artist in residence at Bell Labs.
The results of his work are on the album: epic-length cuts of deep ambient music bringing relaxation and joy to the listener. It’s the audio version of taking Valium. Listen to it and feel the stress of life melt away.
Don Slepian described his 1980 masterpiece for the online Ambient Music Guide. “It’s stochastic sequential permutations (the high bell tones), lots of real time algorithmic work, but who cares? It's pretty music: babies have been born to it, people have died to it, some folks have played it for days continuously. No sequels, no formulas. It was handmade computer music."
The Bell Labs Digital Synthesizer was soon to leave its birthplace after Don had done his magic with the machine. In 1981 ALICE was disassembled and donated to the TIMARA Laboratories at the Oberlin Conservatory of Music.
Oberlin, and by extension TIMARA (Technology in Music and Related Arts), has a history that reaches back to the very beginnings of electronic music in the mid-19th century. None other than Elisha Gray was an adjunct physics professor at the college. He is considered by some the father of the synthesizer due to his invention of the musical telegraph and his seventy-plus patents for inventions that were critical in the development of telecommunications, electronic music and other fields. If it had not been for Gray’s electromechanical oscillator, Thaddeus Cahill would never have been able to create that power-hungry beast of an instrument, the Telharmonium.
The Music Conservatory at Oberlin dates back to 1865, and the school joined the ranks of the radio and television stations that built electronic music studios with the opening of TIMARA in 1967. The department was founded by Olly Wilson in response to composition students’ demand for classes in electronics. It became the first of a number of departments in American higher education to create a space for experimentation in analog synthesis and mixed media arts.
Though ALICE is now enshrined in one of the many sound laboratories at TIMARA, her influence continued to be felt not long after she was sequestered there. A number of commercial synthesizers based on the Alles design were produced in the 1980s.
The Atari AMY sound chip is a case in point, and was the smallest of the products to be designed. The name stood for Additive Music sYnthesis. It still had 64 oscillators, but they were reduced to a single-IC sound chip, one that had numerous design issues. Additive synthesis could now be done with less, though it never really got into the hands of users. It was scheduled to be used on a new generation of 16-bit Atari computers, on the next line of game consoles, and by their arcade division. AMY never saw the light of day in any configuration. Even after Atari was sold in 1984, she remained waiting in the dark to be used on a project, but was cut from inclusion in new products after many rounds at the committee table, where so many dreams wind up dead.
Still other folks in the electronic music industry made use of the principles first demonstrated by ALICE. The Italian company Crumar and Music Technologies of New York got into a partnership to create Digital Keyboards. Like Atari, they wanted to resize the Alles Machine, to make it smaller. They came up with a two-part invention using a Z-80 microcomputer and a single keyboard with limited controls. They gave it the unimaginative name Crumar General Development System, and it sold in 1980 for a cool $30,000. Since it was out of the price range of your average musician, they marketed the product to music studios. Wendy Carlos got her hands on one, and the results can be heard on the soundtrack to Tron.
Other companies got into the game and tried to produce something similar at lower cost, but none of these really managed to find a good home in the market due to the attached price tag. When Yamaha released the DX7 in 1983 for $2,000, demand for additive synths tanked. The DX7 implemented FM synthesis, which enabled it to achieve many of the same effects as ALICE with as few as two oscillators. FM synthesis and its relationship to FM radio modulation will be looked at in detail in another article.
It had all started out as a way for Hal Alles to look at potential problems in digital communications, such as switching, distortion, and echo. It ended up becoming a tool for extending human creativity.
Read the other articles in the Radiophonic Laboratory series.
Just as Daphne Oram was stepping out of the BBC Radiophonic Workshop, another lady was stepping in. Though Delia Derbyshire may not be a household name, the sound of her music is certainly embedded in the brains of several generations of science fiction fans, as she realized the iconic score for the Doctor Who theme song in the Workshop studios. With the original Doctor Who series lasting for twenty-six continuous seasons from 1963 to 1989, the song has touched the lives of millions of people around the world. I credit my own love of electronic music to being a fan of Doctor Who since I was ten years old.
I remember the first time I watched, catching a rerun of an episode late one Saturday night on the local PBS station, while my parents and grandparents visited at my great-grandparents’ house and all those adults talked and played Scrabble around the kitchen table. The show was like a revelation. It was the fifth Doctor, played by Peter Davison. Not only was the storyline a subject of fascination, but the sounds, and the way they melded with the visuals, transported my imagination. I became a fan at that moment, and ever since, Doctor Who has been my favorite TV show. Though my first love remains the original series, and my first Doctor, the first few seasons of the 2005 Doctor Who revival exceeded my expectations and I continue to tune in.
There is one area where I am a Doctor Who purist, though, and that is where the theme song is concerned. Each new regeneration of the travelling Time Lord saw the producers of the show making slight adjustments to the song. Eventually it came to a point where, though the theme was the same, they no longer used the original version as recorded, and essentially created, by Delia Derbyshire. It’s quite a shame, because there was magic in that mix.
The original tune was written by Ron Grainer, but he had little to do with the production, with how it was made. The project of realizing it and arranging it for electronics was given to Delia.
But how did she end up at the BBC in the first place?
She had been a bright girl, learning to read and write at an early age, and starting training on the piano at age eight, but like many of us who have grown up as part of the working or middle class it was radio that opened up her world. Delia said “the radio was my education”. Being involved with radio also ended up being her fate. After graduating from Barr’s Hill Grammar School in 1956 she was accepted by both Oxford and Cambridge. This was “quite something for a working class girl in the 'fifties, where only one in 10 were female,” she said. She ended up going to Girton College, Cambridge, because of a mathematics scholarship she had received.
Despite some success with the mathematical theory of electricity, she claimed to have not done so well in school at the time. So she switched her focus to include music, specializing in medieval and modern music history, while graduating with a BA in mathematics. She also received a diploma, or what the British call a licentiate, from the Royal Academy of Music in the study of pianoforte.
While in school she had developed an interest in the musical possibilities of everyday objects. This would later find its full expression in the musique concrete she would make and master at the BBC. While still a student, in 1958, she also had the opportunity to visit the World’s Fair in Brussels, where she experienced Edgard Varèse's Poème Électronique installed in Le Corbusier's pavilion. Varèse's work was a touchstone for the new generation of electronic musicians; Daphne had also experienced it at the Fair.
Upon finishing her schooling she approached the university career office for advice. The pieces had been arranged on the board of her life but she needed help with making her next move. She told the counselor she had an interest in “sound, music and acoustics, to which they recommended a career in either deaf aids or depth sounding.” With their advice wanting, she made a move on her own and tried to get a gig at Decca Records, but was told no. No women were employed in the recording studio of the label.
In lieu of a job with Decca she scored a position with the UN in Geneva as a piano and math teacher to the children of various consuls and diplomats. Later she worked as an aide to Gerald G. Gross, who worked in diplomatic functions and oversaw conferences for the International Telecommunications Union. Eventually she moved back home to Coventry, where she taught at a primary school. This was followed by a brief stint in the promotions department at Boosey & Hawkes, a music publisher.
The following year, in 1960, she stepped into the BBC as a trainee assistant studio manager. Her first job there was working on Record Review, a program where hoity-toity critics gave their highfalutin opinions on classical music recordings. Just like Daphne Oram, she had a well-developed sense of where to drop the needle on any given platter. Delia said, "some people thought I had a kind of second sight. One of the music critics would say, ‘I don't know where it is, but it's where the trombones come in’ and I'd hold it up to the light and see the trombones and put the needle down exactly where it was. And they thought it was magic."
Of this time period she further elaborated: “It was very exciting, especially on the music shows. All the records had to be spun in by hand and split second timing was essential. When tapes came in I used to mark them with yellow markers to ensure that one followed another, and that there were no embarrassing gaps in between.”
Not long after she had started working on the Record Review she heard about the Sound-House Daphne Oram had helped create, the Radiophonic Workshop. She knew she wanted to be in the Sound-House, developing and working in the new field of electronic and electro-acoustic music, exploring the widest parameters of musical research.
When she approached the heads of Central Programme Operation with her wish to work in the Radiophonic Workshop, they were baffled. The Workshop wasn’t a place most people sought to work in; it was a place people were assigned to, no doubt with grumbling resentment. It was a place only the eccentric, or visionary, would choose to go.
“I had done some composing but I had a running battle with the B.B.C. to let me specialise in this field. Eventually they gave me three months to prove I was good -- and I'm still here,” she noted in a newspaper article.
In 1962 Delia got her wish and was assigned to the Radiophonic Workshop in Maida Vale. For the next eleven years she gave the BBC a herculean effort in the creation of sound and music for about 200 radio and television shows.
“I have to sense the mood which the producer is trying to achieve. He may want something abstract, or it may be a piece with changing moods which have to correspond to specific cues in either dialogue or graphic designs.”
The next year, 1963, Doctor Who came to broadcast. Its theme song was one of the first on television to be made entirely with electronics.
Brian Hodgson, who worked with Delia at the Workshop, and also produced a lot of incidental music for Doctor Who commented on her work on the theme. “It was a world without synthesisers, samplers and multi-track tape recorders; Delia, assisted by her engineer Dick Mills, had to create each sound from scratch. She used concrete sources and sine- and square-wave oscillators, tuning the results, filtering and treating, cutting so that the joins were seamless, combining sound on individual tape recorders, re-recording the results, and repeating the process, over and over again.”
Interviewed about the theme on a 1964 episode of the radio show Information Please she said, “the music was constructed note by note without the use of any live instrumentalists at all,” and went on to demonstrate the use of various oscillators, including the Workshop's famous wobbulator, which she said was “simply an oscillator which wobbles”.
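In modern terms a wobbulator (or sweep oscillator) is a frequency-modulated oscillator: a tone whose pitch is swept up and down by a second, slower oscillator. The sketch below is only an illustration of that idea, not a model of the BBC's actual hardware:

```python
import math

def wobbulator(base_freq, wobble_freq, wobble_depth, sample_rate=44100, seconds=1.0):
    """Sine oscillator whose frequency 'wobbles' around base_freq.

    wobble_depth is the peak frequency deviation in Hz. Phase is
    accumulated sample by sample so the sweep stays continuous.
    """
    samples = []
    phase = 0.0
    for i in range(int(sample_rate * seconds)):
        t = i / sample_rate
        # instantaneous frequency swings around the base frequency
        freq = base_freq + wobble_depth * math.sin(2 * math.pi * wobble_freq * t)
        phase += 2 * math.pi * freq / sample_rate
        samples.append(math.sin(phase))
    return samples

tone = wobbulator(440.0, 5.0, 30.0)  # A440 wobbling +/-30 Hz, five times a second
```

Written to a sound file, this produces the characteristic queasy warble the Workshop put to such good use.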
It was a laborious process and the Radiophonic Workshop had become the perfect laboratory for the great works of sonic separation, granulation, elaboration and final distillation of the musical substance.
To create the Doctor Who theme each note was individually recorded, cut, and spliced. Some of the base materials used for the process included a single plucked string, white noise, and the harmonic waveforms of test-tone oscillators. The bass line was the single plucked string. The pattern for the bass was made by splicing it, in versions that had been sped up or slowed down to reach the right pitch, over and over again. The swoop of the lower bass layer was made through careful and calculated tweaking of the oscillator's pitch. The melody was played on a keyboard attached to a rack of oscillators, while the bubbling hiss and fry of some etheric vapor was made by filtering white noise and then arranging it in time on tape. Some of the notes were also redubbed at varying volumes to create the necessary dynamics heard in the song.
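The speeding up and slowing down described here is varispeed: playing tape back faster raises the pitch by the same factor, a factor of 2 per octave, or 2^(1/12) per equal-tempered semitone. A rough digital sketch of the same trick, using crude nearest-neighbour resampling for brevity:

```python
def varispeed(samples, speed):
    """Play samples back at a different speed, as a tape machine would.

    speed > 1.0 shortens the sound and raises its pitch by the same
    factor; speed < 1.0 lengthens and lowers it.
    """
    out = []
    pos = 0.0
    while pos < len(samples):
        out.append(samples[int(pos)])  # nearest-neighbour read of the 'tape'
        pos += speed
    return out

SEMITONE = 2 ** (1 / 12)  # equal-temperament semitone ratio
up_a_fifth = varispeed(list(range(100)), SEMITONE ** 7)  # +7 semitones, ~1.498x faster
```

Splicing together many such re-speeded copies of one plucked string is, in essence, how the theme's bass pattern was assembled.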
With all the basic materia in the laboratory now prepared, ready with the proper pitch and volume, it all needed to be conjoined. The first step involved taking a line of music (the bass, the melody, or the vaporous bubbles of white noise) and trimming each note to length by cutting the tape and sticking the pieces together in the right order. Next, further rectifications were required, distilling these elements down further and further until a final mix was completed.
At the time, there were no multitrack tape machines to ease the process, so a method to mix it all together had to be improvised. Each separate portion of the song, on its own reel of tape, was played on a separate tape machine with the outputs mixed together. Getting it all to synchronize was just one of the obstacles, as not all tape machines play back at exactly the same speed, and not all of them stay in sync once started. A number of submixes, or distillations, were created, and these in turn were synced together before the music could finally be said to be finished.
When Ron Grainer first heard Delia's realization of his score he was more than delighted and asked, "Did I really write this?"
Delia replied, "Most of it."
Grainer made a valiant effort to give Delia credit as a co-composer of the theme. His attempt was blocked by the bureaucrats at the BBC who had the official policy of keeping the members of the Workshop anonymous and only giving credit to the group as a whole. Delia was not credited on screen for her work until the 50th anniversary special of Doctor Who.
Even so, her tenure in the Workshop was off to a grand start and she continued to produce music for radio, television and beyond.
Between 1964 and 1965 Delia got to expand her palette of sound across the canvas of radio in collaboration with playwright Barry Bermange in a series of four pieces called Inventions for Radio. These pieces were broadcast on the BBC’s Third Programme and involved interviews with people on the street on such heavy subjects as dreams and the existence of God, collaged against a background of electronic soundscapes and strange noises. It was a new form of documentary radio art.
Working with Bermange, Delia edited the voices of the interviewees in a non-linear way, creating insightful juxtapositions. For the episode on dreams she used one of her favorite musical sources, a green metal lightbulb shade being struck. The sound, as always, was later manipulated in the studio.
And even though her work for the Workshop continued to remain anonymous, her reputation as a musician and electronic composer started to spread to some of the senior officials at the communications behemoth. Martin Esslin, the Head of Radio Drama, sent a memo to Desmond Briscoe, then head of the Workshop, noting his regret that Delia Derbyshire and her co-worker John Harrison were not able to receive credit for the work they had done on a production of “The Tower”.
He wrote, “I have just been listening to the playback of the completed version of ‘The Tower’ and should like to express my deep appreciation for the excellent work done on this production by Delia Derbyshire and John Harrison. This play set them an extremely difficult task and they rose to the challenge with a degree of imaginative intuition and technical mastery which deserves the highest admiration and which will inevitably earn a lion's share of any success the production may eventually achieve. I only wish that it were possible for the names of contributors of this calibre to be mentioned in the credits in the Radio Times and on the air. But failing this I should like to register the fact that I regard their contribution to this production as being at least of equal importance to that of the producer himself.”
UNIT DELTA PLUS, KALEIDOPHON & WHITE NOISE
As Delia’s reputation grew, she began work on other projects outside the umbrella of the BBC. She joined forces with her friend and fellow Radiophonic Workshop member Brian Hodgson, along with Peter Zinovieff, founder of the synthesizer company EMS, to establish Unit Delta Plus. The purpose of this organization was to promote and create electronic music. A studio Zinovieff had built in a shed behind his townhouse at 49 Deodar Road in Putney served as their operational headquarters.
Zinovieff had followed the research of Max Mathews and Jean-Claude Risset at Bell Labs. He had also read David Alan Luce's 1963 MIT thesis, “Physical Correlates of Nonpercussive Musical Instrument Tones.” You know, the kind of thing you read on a rainy day. These were some of the influences on his own work. The three were quite the trio.
They participated in a few experimental and electronic music festivals. In early 1967 they demonstrated their electronic prowess at The Million Volt Light and Sound Rave. This was the same event for which The Beatles had been commissioned to create an avant-garde sound piece; the piece they came up with, Carnival of Light, had its only public playing there.
Though there were intervening projects, the next major one outside of the BBC was to mark another landmark in the history of electronic music. It all got sparked when Derbyshire and Hodgson met David Vorhaus.
Vorhaus recalls, “I met Brian Hodgson and Delia Derbyshire, who were then in a band called Unit Delta Plus. I was on my way to an orchestral gig when the conductor told me that there was a lecture next door on the subject of electronic music. The lecture was fantastic and we got on like a house on fire, starting the Kaleidophon studio about a week later!"
Vorhaus was a classical musician, trained as a bass player. He also happened to be a physics graduate and electronic engineer. The three were an electrical storm of creative energy. Together they created the Kaleidophon studio at 281-283 Camden High Street, where they made music and sound for a variety of London theatres. They also made library music, contributing many tracks to the Standard Music Library, a firm set up in collaboration with London Weekend Television (ITV) and Bucks Music Group in 1968 to provide the music for hit TV shows. These recordings were done under pseudonyms. Derbyshire’s compositions were credited to Li De La Russe, something of an anagram, with a reference to her auburn hair to boot. A number of these songs made it onto the ITV shows The Tomorrow People and Timeslip, which rivaled Doctor Who.
When not working on a commission they worked on their first album as the band White Noise, titled An Electric Storm.
The album is a masterpiece, spanning genres from giddy electro-pop to more austere and serious sonorities. It covers a deep emotional gamut and is a dizzying listen from start to finish. Released on the Island label, it was something of a sleeper, or what some call a perennial seller: it did modestly when first released but has sold steadily ever since, and is now a continual best seller. Considering the difficulties the band had in even getting it onto a label, their achievement is all the more remarkable.
Though the name White Noise lives on with David Vorhaus, Hodgson and Delia left the project and the studio after the first album.
MUSIC OF THE SPHERES AND I.E.E. 100
A number of other commissions, recordings and events took place as the last years of the sixties unspooled. She made music for a film by Yoko Ono, contributed to Guy Woolfenden’s electronic score for Macbeth produced by the Royal Shakespeare Company and collaborated with Anthony Newley for a demo song called Moogies Bloogies that has never seen an official release.
In 1970 Delia worked on an episode of the TV series Biography that detailed the life of Johannes Kepler, the Renaissance astronomer who showed that planets orbit the sun in ellipses, not perfect circles. The episode was titled I Measured the Skies, taken from his epitaph, which read:
I measured the skies, now the shadows I measure,
Sky-bound was the mind, earth-bound the body rests.
In his 1619 book Harmonices Mundi, Kepler explored the relationships between musical harmony, congruence in geometrical forms, and physical phenomena, and stated his third law of planetary motion.
Medieval philosophers had spoken of the music of the spheres as metaphor. Kepler discovered actual physical harmonies in planetary motion, finding harmonic proportions in the differences between the maximum and minimum angular speeds of a planet in its orbit.
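Kepler's harmonies can be made concrete: by conservation of angular momentum a planet's angular speed varies as 1/r², so the ratio of its fastest (perihelion) to slowest (aphelion) angular speed is ((1+e)/(1-e))² for orbital eccentricity e. A quick check of two of the intervals Kepler matched, using modern eccentricity values (Kepler's own figures differed slightly):

```python
def extreme_angular_speed_ratio(e):
    """Ratio of perihelion to aphelion angular speed for eccentricity e.

    Conservation of angular momentum gives omega proportional to 1/r^2,
    with r ranging from a(1-e) at perihelion to a(1+e) at aphelion.
    """
    return ((1 + e) / (1 - e)) ** 2

# Kepler heard Mars as roughly a fifth (3:2) and Earth as a semitone (16:15).
print(round(extreme_angular_speed_ratio(0.0934), 3))  # Mars  -> 1.455, near 3/2
print(round(extreme_angular_speed_ratio(0.0167), 3))  # Earth -> 1.069, near 16/15
```

The near-misses are part of the point: Kepler treated the planets as a slightly out-of-tune choir.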
A newspaper article by Christine Edge that came out around the time explained, “Kepler had interpreted the sounds made by the planets into scale notes, and Delia subjected them to her own gliding scale of electronic sounds.” A few years later she revisited the music of the spheres, this time producing a piece for a segment on Kepler in Jacob Bronowski's 1973 TV series The Ascent of Man. Her short piece accompanies a simple computer graphic being shown on the screen.
Delia was in her own sphere and orbit, and as her velocity accelerated the people around her started to notice the wobble.
In 1971 the Institution of Electrical Engineers turned 100. The BBC commemorated the anniversary with the Radiophonic Workshop in Concert event on 19 May. Looking to radio and the history of electrical engineering for inspiration, Delia composed the piece I.E.E. 100 for the program, though the tape almost didn’t survive.
She said, “I began by interpreting the actual letters, I.E.E. one hundred, in two different ways. The first one in a morse code version using the morse for I.E.E.100. This I found extremely dull, rhythmically, and so I decided to use the full stops in between the I and the two E's because full stop has a nice sound to it: it goes di-dah di-dah di-dah.
I wanted, as well as a rhythmic motive, to have a musical motive running throughout the whole piece, and so I interpreted the letters again into musical terms. 'I' becomes B, the 'E' remains, and 100 I've used in the Roman form of C."
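Her Morse reading checks out against standard International Morse code, where the full stop is indeed "di-dah" three times over. A small lookup-table sketch (an illustration only, nothing from the Workshop itself):

```python
# International Morse code for the characters in "I.E.E." and "100".
MORSE = {
    "I": "..", "E": ".", ".": ".-.-.-",
    "1": ".----", "0": "-----",
}

def to_morse(text):
    """Spell text out in Morse, one code group per character."""
    return " ".join(MORSE[ch] for ch in text)

print(to_morse("I.E.E."))  # .. .-.-.- . .-.-.- . .-.-.-
print(to_morse("100"))     # .---- ----- -----
```

The full stops (.-.-.-) supply exactly the alternating di-dah rhythm she describes.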
Further elements of the piece drew on touchstones from the history of telecommunications, from the earliest uses of electricity in communication to the Americans landing on the Moon. She sampled the voice of Mr Gladstone congratulating Mr Edison on inventing the phonograph; the voice of Lord Reith, the first general manager of the BBC, opening and closing down Savoy Hill, where the BBC had its initial recording studios; and Neil Armstrong speaking as he stepped onto the surface of the Moon.
“The powerful punch of Delia's rocket take-off threatened the very fabric of the Festival Hall,” Desmond Briscoe wrote.
This was one of the events where Delia’s chronic perfectionism began to show itself, having a deleterious effect on her ability to finish work despite her being a professional who had tackled numerous large projects. She was working on the piece up to the last minute the night before the event, making edits, trying to make it live up to the rigorous standards she set for herself. Brian Hodgson was in charge of directing the program, and he was aware that Delia might have a breakdown and do something to the tape, so he called upon one of the Workshop’s engineers to secretly make a second copy of the final version of the work and give it to him.
Hodgson’s intuition and assessment of the matter was quite correct. He said of the incident, “I said to Richard [the engineer] ‘Run another set in Room 12, don't tell Delia you're doing it, and that copy bring to me in the morning, because I have an awful feeling she was going to destroy the tape.’ And he did that. And she came in the next morning in tears, around 11 o'clock. And said, ‘I've destroyed the tape, what are we going to do?’ I don't think she ever forgave me for that.”
Two years later she would leave the BBC, fed up. In an interview on Radio Scotland she said, “Something serious happened around '72, '73, '74: the world went out of tune with itself and the BBC went out of tune with itself... I think, probably, when they had an accountant as director general. I didn't like the music business.”
She spent a brief time working at Brian Hodgson’s Electrophon Studio, before quitting that too. It was hard for her to quit radio though, as it is for many who’ve been hooked and tried to give it up. She got a gig working as a radio operator. She says of the time, “Crazy, crazy, crazy! I was the best radio operator Laing Pipelines ever had! I answered a job in the paper for a French speaking radio operator. I just had to sleep - everything was out of tune, so I went to the north of Cumbria. It was twelve miles south of the border. I had a lovely house built from stones from Hadrian's Wall. I was in charge of three transmitters in a disused quarry. I did not want to get involved in a big organisation again. I'd fled the BBC and I thought - oh, Laing's... a local family firm! Then I found this huge consortium between Laing's and these two French companies.”
By 1975 she’d stopped producing music for public consumption. According to Clive Blackburn, “in private, she never stopped writing music either. She simply refused to compromise her integrity in any way. And ultimately, she couldn't cope. She just burnt herself out. An obsessive need for perfection destroyed her."
Yet in the 1990s she started to see the electronic music she had championed come into its own. Pete Kember, a member of the psychedelic noise rock band Spacemen 3, sought Delia out and befriended her. Kember had amassed a collection of synthesizers and electronic music gear as part of his musical research and interests. He was embarking on a new project called Spectrum, making the kind of music she had been at the forefront of in previous decades.
Delia’s life had become chaotic, though. The ravages of alcohol abuse were catching up with her body. Just as she started to work on public music again with Kember in 2001, she died of renal failure. A short 55-second collaboration they had made, called Synchrodipidity Machine (Taken from an Unfinished Dream), was released after she had departed and was dedicated to her memory. Kember credited her with "liquid paper sounds generated using fourier synthesis of sound based on photo/pixel info (B2wav - bitmap to sound programme)."
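Kember's note describes a bitmap-to-sound technique: each pixel row of an image drives one sine partial and each column is a time slice, a Fourier-style additive resynthesis of picture data. The B2wav program itself was never published, so the following is only a toy illustration of the general idea, not its actual algorithm:

```python
import math

def bitmap_to_samples(bitmap, base_freq=110.0, col_seconds=0.05, sample_rate=8000):
    """Render a bitmap as sound by additive synthesis.

    bitmap is a list of rows of brightness values in 0..1; row k sounds
    as a sine at (k+1) * base_freq, with brightness as its amplitude.
    Each column of pixels lasts col_seconds.
    """
    n_rows = len(bitmap)
    n_per_col = int(sample_rate * col_seconds)
    out = []
    for col in range(len(bitmap[0])):
        for i in range(n_per_col):
            t = (col * n_per_col + i) / sample_rate
            s = sum(bitmap[row][col] * math.sin(2 * math.pi * (row + 1) * base_freq * t)
                    for row in range(n_rows))
            out.append(s / n_rows)  # crude normalisation to keep |s| <= 1
    return out

# A two-row, two-column "image": fundamental alone, then fundamental plus octave.
audio = bitmap_to_samples([[1.0, 1.0], [0.0, 1.0]])
```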
After she died 267 reel-to-reel tapes and a box of a thousand papers were found in her attic. These were entrusted to Mark Ayres of the BBC and in 2007 were given on permanent loan to the University of Manchester. Almost all the tapes were digitised in 2007 by Louis Niebur and David Butler, but none of the music has been published due to copyright complications.
Her life was an unfinished dream, and it is a shame she did not stick around long enough to see the credit that was later bestowed on her for her generous contributions to electronic music.
The BBC Radiophonic Workshop: The First 25 Years by Desmond Briscoe, BBC 1983
Special Sound: The Creation and Legacy of the BBC Radiophonic Workshop by Louis Niebur, Oxford, 2010
If you liked this article check out the rest in the Radiophonic Laboratory series.
As co-founder of the BBC Radiophonic Workshop, the unit created in 1958 that produced sound effects, incidental sounds and music for radio and television, Daphne Oram held a key place in the history of electronic music. Alongside F.C. Judd she was one of the first proponents of musique concrète in the UK. Her development of the Oramics system, a drawn-sound technique that involves inscribing waveforms and shapes directly onto 35mm film stock, also made her an innovative, if arcane, inventor of new musical technology. Daphne also gets the credit for being among the first women to design and construct an electronic musical instrument, and the first to set up an independent personal electronic music studio.
Oram was born to James and Ida Oram on 31 December 1925 in Wiltshire, England. She was taught music at an early age, starting with piano and organ before moving on to composition. Her father was a coal merchant's manager but also an amateur archaeologist, and during the 1950s was president of the Wiltshire Archaeological Society. Her childhood home was within 10 miles of the stone circle of Avebury and 20 miles from Stonehenge. Her mother was an amateur artist. It seems that her parents' interests in history and the arts lent themselves to Daphne's blossoming in the field of music and technology.
At the age of seventeen the young Daphne was offered a place at the Royal College of Music but chose instead to take a Junior Studio Engineer position at the BBC. Part of her work was behind the scenes during live concerts at the Albert Hall, 'shadowing' the musicians, ready to play a pre-recorded version of the music for broadcast in the event the transmission was disrupted by German attack, not an unlikely fear just a year after the Blitz.
Graham Wrench was just a lad at the time but got to know Daphne through his father, who was a musician in the London Symphony Orchestra. Many years later he worked with Daphne as an engineer on her Oramics system. He said of her work for the BBC at the time, "Daphne's job involved more than just setting the levels. She had a stack of records, and the printed scores of whatever pieces the orchestra was due to play. If anything went wrong in the auditorium she was expected to switch over seamlessly from the live orchestra to exactly the right part of the record!”
Her other duties included the creation of sound effects for radio shows as well as keeping the broadcast levels of sound balanced and mixed. It was during this time period that she started to become aware of new developments in synthesized sound and started to make her own experiments with tape recorders late into the night, staying to work in the BBC studios long after her co-workers and colleagues had popped off to the pub or gone home for the evening. Cutting, splicing, playing backwards, looping, speeding up and slowing down, were all tape techniques she learned and became expert at.
In the 1940s she also composed an orchestral work that is now considered by some to be the first electro-acoustic composition. The piece was titled Still Point and involved the use of turntables, a double orchestra, and five microphones. The BBC rejected the piece from their programming schedule and it remained unheard for seventy years. It was resurrected by Shiva Feshareki, who performed it with the London Contemporary Orchestra for the first time on June 24, 2016. A revised version was performed again by Feshareki and the LCO alongside James Bulley, following Oram’s composition notes.
We Also Have Sound-Houses
Despite the rejection of her innovative score, the BBC promoted her to music studio manager in the 1950s. It was around this time she travelled to the RTF studios in Paris, where Pierre Schaeffer had been hard at work developing musique concrète. Daphne began a crusade for a studio at the BBC dedicated to creating electronic music and musique concrète for use in radio and television programs. She demonstrated her vision of what this music could be when she was commissioned to compose music for the play Amphitryon 38 in 1957, producing the BBC’s first entirely electronic score. It was made using a sine wave oscillator, self-designed filters, and a tape recorder.
The production and piece were a success, and these led to further commissions for electronic music. Her colleague, the electronic musician Desmond Briscoe, also started to receive commissions for a number of other productions. One of the most significant was a request for electronic music to accompany Samuel Beckett’s All That Fall, also produced in 1957. The demand for electronic music was there, and the BBC finally gave in, giving Oram and Briscoe the go-ahead, and the budget, to establish the BBC Radiophonic Workshop.
The focus of the Workshop was to provide sound effects and theme music for all of the corporation's output, including the science fiction serial Quatermass and the Pit (1958–59) and "Major Bloodnok's Stomach" for the radio comedy series The Goon Show.
One of Daphne’s guiding stars at the Workshop came from a passage in the unfinished utopian and proto-science-fiction novel The New Atlantis, penned by Sir Francis Bacon and published in 1626. The novel depicts the crew of a European ship lost at sea somewhere in the Pacific west of Peru. Eventually they reach a mythical island called Bensalem. There isn’t much plot in the book, but the setup allowed Bacon to reveal his vision of an age of religious tolerance, scientific inquiry, and technological progress. In The New Atlantis, Solomon’s House is a state-sponsored scientific institution that teases out the secrets of nature and investigates all phenomena, including music and acoustics. The book went on to influence the establishment of the Royal Society. Daphne found one passage to be both prophetic and something of a mission statement, and posted it on the door of the Radiophonic Workshop:
“We have also sound-houses, where we practice and demonstrate all sounds and their generation. We have harmonies, which you have not, of quarter-sounds and lesser slides of sounds. Divers instruments of music likewise to you unknown, some sweeter than any you have, together with bells and rings that are dainty and sweet. We represent small sounds as great and deep, likewise great sounds extenuate and sharp; we make divers tremblings and warblings of sounds, which in their original are entire. We represent and imitate all articulate sounds and letters, and the voices and notes of beasts and birds. We have certain helps which set to the ear do further the hearing greatly. We also have divers strange and artificial echoes, reflecting the voice many times, and as it were tossing it, and some that give back the voice louder than it came, some shriller and some deeper; yea, some rendering the voice differing in the letters or articulate sound from that they receive. We have also means to convey sounds in trunks and pipes, in strange lines and distances.”
Yet even before a year was out, her ambition for the sound-house she had worked so hard to establish was at loggerheads with the station executives. The inciting incident seems to have been her attendance at the Brussels World’s Fair and its Journées Internationales de Musique Expérimentale exhibition. It was there that she heard Edgard Varèse’s demonstration of his groundbreaking Poème électronique, along with other electronic music that was pushing the boundaries of the possible further.
This exalting experience created a deep dissatisfaction in her when she returned to work and the music department refused to put electronic music at the forefront of their activities and agenda. The realm of the possible had smacked up against the wall of the permissible. So Daphne resigned from the workshop with the hope of establishing her own studio.
In the hindsight of an outsider this move may not seem the most strategic. Yet it did give her the freedom to develop her own electronic music instrument, Oramics, ill-fated as it was on a practical level.
Immediately after leaving the BBC in 1959, Oram began setting up her Oramics Studios for Electronic Composition in Tower Folly, in a former oasthouse (a building designed for drying hops prior to brewing) near Wrotham, Kent. The technique she created there involved the innovative use of 35mm film stock. Shapes drawn or etched onto the film strips could be read by photo-electric cells and transformed into sounds.
According to Oram, "Every nuance, every subtlety of phrasing, every tone gradation or pitch inflection must be possible just by a change in the written form."
While innovative, the Oramics technique was also expensive, and Daphne met the financial pressure of having her own studio by opening it up and working as a commercial composer. Being director of the studio gave her complete control and freedom to experiment, but it also meant dealing with the stress of keeping it economically viable. For the first few years she made music for commercial films, sound installations and exhibits, as well as material for television and radio. She made the electronic sounds featured in Jack Clayton’s 1961 psychological horror film The Innocents. She also collaborated with opera singers and created material for concert works.
These pressures eased in 1962 when she was given a grant of £3,550 (equivalent to £76,000 in today’s money). She was able to put more effort into building her drawn sound instrument.
In 1965 she reconnected with Graham Wrench, a few years after she had bumped into him at the IBC recording studio where she had brought in some tape loops for a commercial. She was in need of an engineer and technician and asked Wrench if he wanted the job, so he drove down with his wife to check things out.
Graham said of the visit, “on a board covering a billiard table in an adjoining reception room was displayed the electronics for Oramics. There wasn't very much of it! She had an oscilloscope and an oscillator that were both unusable, and a few other bits and pieces — some old GPO relays, I remember. Daphne didn't seem to be very technical, but she explained that she wanted to build a new system for making electronic music: one that allowed the musician to become much more involved in the production of the sound. She knew about optical recording, as used for film projectors, and she wanted to be able to control her system by drawing directly onto strips of film. Daphne admitted the project had been started some years before, but no progress had been made in the last 12 months. I said I knew how to make it work, so she took me on. I left my job with the Medical Research Council and started as soon as I could.”
Graham was able to help her build the system up, drawing on his experience as a radar specialist in the RAF. He started by designing a time-base for the waveform generator. For this he needed photo-transistors, which were too expensive to buy commercially, so he made his own by scraping the paint off regular transistors, themselves still pricey at the time as they had only been on the market a few years.
The waveform-generator itself worked in the same fashion as an oscilloscope, but in reverse. It used a “six‑inch CRT [Cathode Ray Tube] mounted inside a lightproof box, with a 5x4‑inch photographic slide carrier fixed to the front of its screen. Mounted some distance in front of the CRT was a photomultiplier tube, arranged so as to detect light from anywhere on the screen. In the slide carrier was placed a transparency with an image of the required waveform; but this was not, as generally believed, simply a line drawing. The shape was filled in with solid black below the line and was left transparent above it, looking rather like the silhouette of a mountain range.
Across the bottom of the CRT screen a dot of light was made to trace a horizontal line by scanning repeatedly from left to right along the 'X' axis. If the beam happened to be obscured by the lower, opaque part of the drawn waveform, no light would be detected by the photomultiplier tube. If so, the beam was told to move higher up the screen until the photomultiplier could see it. In this way the moving dot of light was forced to follow exactly whatever profile was drawn on the transparency. Altering the voltage of the CRT's Y‑axis deflection plates controlled the up and down movement of the dot. The charge on these plates is very high — usually several hundred Volts. But if fluctuations in the Y‑axis voltage were scaled down to within just a Volt or so, it could be connected to an audio amplifier… And that is exactly how the Oramics machine generated its sound: the audio output was tapped off the Y-axis voltage of the CRT.
Whatever shape was placed in front of the screen became just one cycle of a repeating waveform. The speed at which the dot of light travelled across the screen on the 'X' axis was controlled by the time‑base unit, and was adjustable over a very large range so that the speed of the scan dictated the frequency of the sound it produced. If the beam travelled across the screen 440 times every second, it would scan the drawn waveform 440 times, producing a pitch of 440 Hertz, or the 'Concert A' above middle C.”
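Read in modern terms, the scanner is a wavetable oscillator: the drawn shape is one cycle of a waveform, and the scan rate sets the pitch. A minimal digital sketch of the same idea (an illustration only, not the machine's actual circuitry):

```python
def scan_wavetable(table, freq, sample_rate=44100, n_samples=1000):
    """Scan one drawn 'cycle' (a list of amplitude values) at a given frequency.

    Scanning the table freq times per second yields a tone at freq Hz,
    just as the Oramics dot traced the drawn profile 440 times a second
    to sound concert A.
    """
    out = []
    phase = 0.0
    step = freq * len(table) / sample_rate  # table positions advanced per sample
    for _ in range(n_samples):
        out.append(table[int(phase) % len(table)])
        phase += step
    return out

# A crudely "drawn" rising ramp as the single-cycle shape, scanned at 440 Hz.
ramp = [i / 99 for i in range(100)]
tone = scan_wavetable(ramp, 440.0)
```

Changing `freq` here is the digital analogue of speeding up the dot's left-to-right sweep on the CRT.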
He had also created a digital control system by dividing the film into four usable tracks, “each of which can be set to on or off by putting a spot of paint in the appropriate place on the film, to be read by a photo‑cell. Remember how the binary system works? Well, if each strip of film has four tracks, we can use them as four places of binary digits. The track on the lower edge of the film does nought or one; the next one up does nought and two; the next does nought and four; the top‑most track does nought and two again: hence, weighted binary. So it's very simple to 'program' each strip of film with a number — it only has to be between nought and nine — just by painting up to four spots on the film.
"Imagine that you've put a waveform picture in the scanner. If you'd like that sound to play at a frequency of 440 Hertz, then you go first to the strip of film that programs the hundreds of cycles per second. There are four available film‑strips of four tracks each, so just put a spot on the third track of the third film from the bottom (the hundreds). Then go to the film strip that programs the tens of cycles per second, and do the same. That's it — you've programmed 440 Hertz! When the film is run, those two spots of paint will be read by the photo‑cells, which in turn, control latching relays that switch in banks of resistors and make the time‑base run at whatever frequency. So you see, it is digitally controlled — but not how you'd imagine it! I know it seems a strange way to play a tune, but with a bit of practice it becomes quite intuitive.”
He also developed a means of controlling volume: an optical system in which a lamp is faded up and down, changing the audio level via a photo‑resistor. He likewise figured out how to create tremolo and vibrato. The system had become very flexible in the sonics it was able to produce; being able to draw a sound gave amazing freedom in creating rich envelopes of music.
Sadly Graham, who had done so much to develop the system, was let go by Daphne following an illness she suffered that some believed had been a brain hemorrhage, but which was never fully diagnosed. Graham believed it was a nervous breakdown caused by her long working hours, and perhaps by the 5 Hz subharmonic frequencies produced by the Oramics machine, which he later eliminated by adding a high-pass filter to remove the subsonics. The reason for his dismissal was never made clear. It was a real shame, because Graham had done so much of the work to realize the system as she had envisioned it.
Other engineers and technicians came in and built on what he had done to expand the Oramics system while Daphne continued to compose, research, and think about the implications of electronic music from a philosophical perspective. She turned her attention to the subtle nuances of sound that composers using traditional instruments had never been able to control before. She applied this research to the study of perception itself, and how the human ear influences the way the brain apprehends the world. Oramics came to encompass a study of vibrational phenomena, and she divided her system into two distinct parts: the commercial and the mystical.
In her detailed notebooks Daphne defined Oramics as "the study of sound and its relationship to life."
Throughout her career, Oram lectured on electronic music and studio techniques. In the early seventies she was commissioned to write a book on electronic music. She didn’t want it to become a how-to book, so she instead took a philosophical and meditative approach to the subject. An Individual Note of Music, Sound and Electronics was published in 1972 and reissued in 2016.
Later in the 1970s Oram began a second book, which never saw print but survives as a manuscript. Titled The Sound of the Past - A Resonating Speculation, the work shows the influence of her father's interest in archaeology. In it she muses on archaeological acoustics and proposes a theory, backed by her own research, that Neolithic chambered mounds and ancient sites like Stonehenge and the Great Pyramid in Egypt were used as resonators that could amplify sound. Her research suggested that ancient peoples, through their knowledge of sound and acoustics, may have used these places for long-distance communication.
By the time the 1980s rolled around she was engaged by Acorn Computers to work on a software version of Oramics for their Archimedes machine, receiving a grant from the Ralph Vaughan Williams Trust. She had wished to continue the mystical side of her sound research, but the continuing financial struggles of such a project left that dream mostly unfulfilled.
In the 1990s Oram suffered two strokes that eventually led her away from her work and into a nursing home. She died in 2003.
In her book Daphne wrote, "We will be entering a strange world where composers will be mingling with capacitors, computers will be controlling crotchets and, maybe, memory, music and magnetism will lead us towards metaphysics."
It is true we are living in that strange world, where computers control an Internet of Things and smart fabrics are woven by machines. It remains to be seen whether the philosophers and spiritually minded musicians of today will marry their love of all things electrical and electromagnetic with the long memory necessary for us to understand the fundamental nature of reality.
An archive of her recordings can be listened to free here: http://www.ubu.com/sound/oram.html
A contemporary reinterpretation of her music from the BBC archives can be found here: https://ecstaticrecordings.bandcamp.com/album/sound-houses
IS THERE ANY ESCAPE FROM NOISE?
In our machine-dominated age there is hardly any escape from noise. Even in the most remote wilderness outpost, planes will fly overhead to disrupt the sound of the wind in the trees and the birds in the air. In the city noise is so much part of the background that we have to tune in to it in order to notice it, because we’ve become adept at tuning it out. Roaring motors, the incessant hum of the computer fan, the refrigerator coolant, metal grinding at the light industrial factory down the street, the roar of traffic on I-75, the beep of a truck backing up: these and many other noises are all part of our daily soundscape.
Throughout human history musicians have sought to mimic the sounds around them. The gentle drone of the tanpura, a stringed instrument that accompanies sitar, flute, voice and other instruments in classical Indian music, was said to mimic the gentle murmur of rivers and streams. Should it be a surprise, then, that in the nineteenth and twentieth centuries musicians and composers started to mimic the sounds of the machines around them? In bluegrass and jazz there is a whole slew of songs that copy the entrancing rhythms of the train. As more and more machines filled up the cities, is it any wonder that the beginnings of a new genre of music – noise music – started to emerge? Is it any wonder that, as acoustic and sound technology progressed, our music-making practices also came to be dominated by machines?
THE ART OF NOISES
And just what is music anyway? There are many definitions from across the span of time and human culture, each made to fit a particular type, style, and practice of music.
In his 1913 manifesto The Art of Noises the Italian Futurist thinker Luigi Russolo argues that the human ear has become accustomed to the speed, energy, and noise of the urban industrial soundscape. In reaction to those new conditions he thought there should be a new approach to composition and musical instrumentation. He traced the history of Western music back to Greek musical theory, which was based on the mathematical tetrachord of Pythagoras and did not allow for harmony. This changed during the Middle Ages, first with plainchant in Christian monastic communities. Plainchant employs the modal system, used to work out the relative pitches of each line on the staff, and its notation was the first revival of musical notation after knowledge of the ancient Greek system was lost. In the late 9th century plainsong began to evolve into organum, which led to the development of polyphony. Until then the chord, as such, did not exist.
Russolo thought that the chord was the "complete sound." He noted that chords developed slowly over time, first moving from the "consonant triad to the consistent and complicated dissonances that characterize contemporary music." He pointed out that early music tried to create sounds that were sweet and pure, and that it then evolved to become more and more complex. By the time of Schoenberg and the twelve-tone revolution of serial music, musicians sought to create new and more dissonant chords. These dissonant chords brought music ever closer to his idea of "noise-sound."
With the relative quiet of nature and pre-industrial cities disturbed, Russolo thought a new sonic palette was required. He proposed that electronics and other technology would allow futurist musicians to move beyond the limited variety of timbres available in the traditional orchestra. His view was that we must "break out of this limited circle of sound and conquer the infinite variety of noise-sounds." This would be done with new technology that would allow us to manipulate noises in ways that never could have been done with earlier instruments. In that, he was quite correct.
Russolo wasn’t the only one thinking of the aesthetics of noise, or seeking new definitions of music. French Modernist composer Edgard Varèse said that “music is organized sound.” It was a statement he used as a guidepost for his aesthetic vision of "sound as living matter" and of "musical space as open rather than bounded". Varèse thought that "to stubbornly conditioned ears, anything new in music has always been called noise", and he posed the question, "what is music but organized noises?" An open view of music allows new elements to come into the development of musical traditions, where a bounded view would try to keep out those things that did not fit the preexisting definition.
Out of this current of noise music, initiated in part by Russolo and Varèse, a new class of musician would emerge: the musician of sounds.
MUSICIAN OF SOUNDS
Fellow Frenchman Pierre Schaeffer developed his theory and practice of musique concrète during the 1930s and ‘40s and saw it spread in the ‘50s to people such as Karlheinz Stockhausen, the founders of the BBC Radiophonic Workshop, F.C. Judd, and many others. Musique concrète was a practical application of Russolo’s idea of “noise-sound” and an exploration of the expanded timbres made possible by then-new studio techniques. It was also a way of making music according to the “organized sound” definition, and was distinct from previous methods in being the first type of music completely dependent on recording and broadcast studios.
In musique concrète sounds are sampled and modified through the application of audio effects and tape manipulation techniques, then reassembled into a form of montage or collage. It can feature sounds derived from recordings of musical instruments, the human voice, field recordings of the natural and man-made environment, or sounds created in the studio. Schaeffer was an experimental audio researcher who combined his work in the field of radio communications with a love for electro-acoustics. Because Schaeffer was the first to use and develop these studio music-making methods he is considered a pioneer of electronic music, and one of the most influential musicians of the 20th century. The recording and sampling techniques he pioneered are now part of the standard operating procedures used by nearly all record production companies around the world. Schaeffer’s efforts and influence in this area earned him the title “Musician of Sounds.”
Schaeffer, born in 1910, had a wide variety of interests throughout his eighty-five years on this planet. He worked variously across the fields of composing, writing, broadcasting, and engineering, and as a musicologist and acoustician. His work was innovative in science and art. It was after World War II that he developed musique concrète, all while continuing to write essays, short novels, biographies and pieces for the radio. Much of his writing was geared towards the philosophy and theory of music, which he then later demonstrated in his compositions.
It is interesting to think of the influences on him as a person. Both his parents were musicians, his father a violinist and his mother a singer, but they discouraged him from pursuing a career in music and instead pushed him into engineering. He studied at the École Polytechnique, where he received a diploma in radio broadcasting. He brought the perspective and approach of an engineer, together with his inborn musicality, to bear on his various activities.
Schaeffer got his first telecommunications gig in 1934 in Strasbourg. The next year he got married, and the couple had their first child before moving to Paris, where he began work at Radiodiffusion Française (later Radiodiffusion-Télévision Française, RTF). As he worked in broadcasting he started to drift away from his initial interests in telecommunications towards music. When these two sides met he really began to excel.
After convincing the management at the radio station of the alternate possibilities inherent in the audio and broadcast equipment, as well as the possibility of using records and phonographs as a means of making new music, he started to experiment. He would record sounds to phonograph discs and speed them up, slow them down, play them backwards, run them through other audio processing devices, and mix sounds together. While all this is just par for the course in today’s studios, it was the bleeding edge of innovation at the time.
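Those phonograph (and later tape) manipulations, reversal and speed change, map directly onto simple operations on sampled audio. The sketch below is a modern software analogy rather than anything Schaeffer used; the function names and the toy sample loop are invented for illustration, and the speed change uses plain linear interpolation.

```python
def reverse(samples):
    """Play the recording backwards -- the equivalent of spinning
    the disc (or running the tape) in the opposite direction."""
    return samples[::-1]

def change_speed(samples, factor):
    """Vary playback speed: factor 2.0 doubles the speed (raising
    the pitch an octave), 0.5 halves it. Values between source
    samples are filled in by linear interpolation."""
    out = []
    for i in range(int(len(samples) / factor)):
        pos = i * factor                       # position in the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# a toy eight-sample loop standing in for a recorded disc groove
loop = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
backwards = reverse(loop)
double_speed = change_speed(loop, 2.0)  # half as many samples, octave up
```

On a turntable, speed and pitch are inseparable in exactly this way, which is part of what gave early musique concrète its characteristic swoops and growls.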
With these techniques mastered he started to work with people he met via the RTF. All this experimentation had as a natural outgrowth a style that lent itself to the avant-garde of the day. The sounds he produced challenged the way music had been thought of and heard. With his own and his colleagues' engineering acumen, new electronic instruments were made to expand on the initial processes in the audio lab, which eventually became formalized as the Club d’Essai, or Test Club.
In 1942 Pierre founded the Studio d'Essai, later dubbed the Club d'Essai, at RTF. It started as an outgrowth of Schaeffer’s radiophonic explorations, became active in the French Resistance during World War II, and later grew into a center of musical activity. It was responsible for the first broadcasts in liberated Paris in August 1944. He was joined in the leadership of the Club by Jacques Copeau, the theatre director, producer, actor, and dramatist.
It was at the Club where many of Schaeffer’s ideas were put to the test. After the war Schaeffer wrote a paper that discussed how sound recording creates a transformation in the perception of time, owing to the ability to slow down and speed up sounds. The essay showed his grasp of sound manipulation techniques, which were also demonstrated in his compositions.
In 1948 Schaeffer initiated a formal “research into noises” at the Club d'Essai, and on October 5th of that year presented the results of his experimentation at a concert given in Paris. Five works for phonograph (known collectively as Cinq études de bruits, Five Studies of Noises), including Étude violette (Study in Purple) and Étude aux chemins de fer (Study of the Railroads), were presented. This was the first flowering of the musique concrète style, and from the Club d’Essai another research group was born.
GRM: Groupe de Recherche de Musique Concrète
In 1949 another key figure in the development of musique concrète stepped onto the stage. By the time Pierre Henry met Pierre Schaeffer via the Club d’Essai, the twenty-one-year-old percussionist-composer had already been experimenting with sounds produced by various objects for six years. He was obsessed with the idea of integrating noise into music, and had studied with the likes of Olivier Messiaen, Nadia Boulanger, and Félix Passerone at the Paris Conservatoire from 1938 to 1948.
For the next nine years he worked at the Club d'Essai studio at RTF. In 1950 he collaborated with Schaeffer on the piece Symphonie pour un homme seul. Two years later he scored the first musique concrète to appear in a commercial film, Astrologie ou le miroir de la vie. Henry remained a very active composer and scored for a number of other films and ballets.
Together the two Pierres were quite a pair and founded the Groupe de Recherche de Musique Concrète (GRMC) in 1951. This gave Schaeffer a new studio, which included a tape recorder. This was a significant development for him as he previously only worked with phonographs and turntables to produce music. This sped up the work process, and also added a new dimension with the ability to cut up and splice tape in new arrangements, something not possible on a phonograph. Schaeffer is generally acknowledged as being the first composer to make music using magnetic tape.
Eventually Schaeffer had enough experimentation and material under his belt to publish À la Recherche d'une Musique Concrète ("In Search of a Concrete Music") in 1952, which was a summation of his working methods up to that point.
Schaeffer remained active in other aspects of music and radio throughout the ‘50s. In 1954 he co-founded Ocora, the “Office de Coopération Radiophonique”, a music label and facility for training broadcast technicians. The purpose of the label was to preserve, via recordings, rural African soundscapes, work that also put Schaeffer at the forefront of field recording and the preservation of traditional music. The training side of the operation prepared technicians to work with the African national broadcasting services.
His last electronic noise etude, Étude aux objets (Study of Objects), was realized in 1959.
For Pierre Henry’s part, two years after leaving the RTF he founded, with Jean Baronnet, the first private electronic studio in France, the Apsone-Cabasse Studio. Henry later paid tribute to Schaeffer by composing his Écho d'Orphée.
A CONCRETE LEGACY
Musique remains concrète. Schaeffer had known of the “noise orchestras” of his predecessor Luigi Russolo, but took the concept of noise music and developed it further by making it clear that any and all sounds had a part to play in the vocabulary of music. He created the toolkit later experimenters took as a starting point. He was the original sampler. In all his work he emphasized the role of play, or jeu, in making music. His idea of jeu came from the French verb jouer, which shares the same dual meaning as the English word play: to make pleasing sounds or songs on a musical instrument, and to engage with things as a way of enjoyment and recreation. Taking sounds and manipulating them, seeing what certain processes will do to them, is at the heart of discovery and play inside the radiophonic laboratory. The ability to play opens up the mind to new possibilities.
This article originally appeared in the April 2020 edition of the Q-Fiver.
If you enjoyed this article please consider reading the rest of the Radiophonic Laboratory series.
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.