From the ice-cold farms and fields of Michigan to the halls of MIT and then onward to Bell Labs at Murray Hill, Claude Shannon was a mathematical maverick and inveterate tinkerer. In the 1920s, in those places where the phone company had not deigned to bring its network, around three million farmers built their own by connecting telegraph keys to the barbed wire fences that stretched between properties. As a young boy Shannon rigged up one of these “farm networks” so he and a friend who lived half a mile away could talk to each other at night in Morse code. He was also the local kid people in town brought their radios to when they needed repair, and he got them to work. He had the knack.
He also had an aptitude for the more abstract side of math, and his mind handled complex equations with ease. At the age of seventeen he was already in college at the University of Michigan and had published his first work in an academic journal, a solution to a math problem presented in the pages of the American Mathematical Monthly. He did a double major, graduated with degrees in electrical engineering and mathematics, then headed off to MIT for his master's.
While there he came under the wing of Vannevar Bush. Vannevar had followed in the footsteps of Lord Kelvin, who had created one of the world’s first analog computers, the harmonic analyzer, used to measure the ebb and flow of the tides. Vannevar’s differential analyzer was a huge electromechanical computer the size of a room. It solved differential equations by integration, using wheel-and-disc mechanisms to perform the work.
At school he was also introduced to the work of mathematician George Boole, whose 1854 book on algebraic logic, The Laws of Thought, laid down some of the essential foundations for the creation of computers. George Boole had in turn taken up the system of logic developed by Gottfried Wilhelm Leibniz. Might Boole also have been familiar with Leibniz’s De Arte Combinatoria, the book in which Leibniz proposed an alphabet of human thought? Leibniz was himself inspired by the Ars Magna, or “ultimate general art”, of Ramon Lull, a debating tool that helped speakers combine ideas through compiled lists; Leibniz wanted to bring it closer to mathematics and turn it into a kind of calculus. Shannon became the inheritor of these strands of thought through their development in the mathematics and formal logic that became Boolean algebra.
Between working with Bush’s differential analyzer and his study of Boolean algebra, Shannon saw how to design switching circuits. This became the subject of his 1937 master's thesis, A Symbolic Analysis of Relay and Switching Circuits.
Shannon was able to prove his switching circuits could be used to simplify the complex and baroque system of electromechanical relays in AT&T’s routing switches. Then he expanded the concept and showed that his circuits could solve any Boolean algebra problem. He rounded out the work with a series of circuit diagrams.
In writing his paper Shannon took George Boole’s algebraic insights and made them practical. Electrical switches could now implement logic. It was a watershed moment that established the integral concept behind all electronic digital computers. Digital circuit design was born.
Next he had to get his PhD. It took him three more years, and his subject matter showed the first signs of the multidisciplinary inclination that would later become a dominant feature of information theory. Vannevar Bush urged him to go to Cold Spring Harbor Laboratory to work on his dissertation in the field of genetics. For Vannevar the logic was that if Shannon’s algebra could work on electrical relays it might also prove to be of value in the study of Mendelian heredity. His research in this area resulted in An Algebra for Theoretical Genetics, for which he received his PhD in 1940.
The work proved to be too abstract to be useful, and during his time at Cold Spring Harbor he was often distracted. In a letter to his mentor Vannevar he wrote, “I’ve been working on three different ideas simultaneously, and strangely enough it seems a more productive method than sticking to one problem… Off and on I have been working on an analysis of some of the fundamental properties of general systems for the transmission of intelligence, including telephony, radio, television, telegraphy, etc…”
With a doctorate under his belt Shannon went on to the Institute for Advanced Study in Princeton, New Jersey, where his mind was able to wander across disciplines and where he rubbed elbows with other great minds, including, on occasion, Albert Einstein and Kurt Gödel. He discussed science, math, and engineering with Hermann Weyl and John von Neumann. All of these encounters fed his mind.
It wasn’t long before Shannon went elsewhere in New Jersey, to Bell Labs. There he got to rub elbows with other great minds such as Thornton Fry and Alan Turing. His prodigious talents were also being put to work for the war effort.
It started with a study of noise. During WWII Shannon had worked on the SIGSALY system that was used for encrypting voice conversations between Franklin D. Roosevelt and Winston Churchill. It worked by sampling the voice signal fifty times a second, digitizing it, and then masking it with a random key that sounded like the circuit noise so familiar to electrical engineers.
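That masking scheme is simple to model. What follows is a toy sketch, not the real SIGSALY circuitry: it assumes each voice sample is quantized to one of six levels and that sender and receiver hold identical random key records, added and subtracted modulo six.

```python
import random

LEVELS = 6  # assume each voice sample is quantized to one of six levels

def mask(samples, key):
    # Sender: add the secret key value to each sample, modulo six.
    # The result is indistinguishable from random circuit noise.
    return [(s + k) % LEVELS for s, k in zip(samples, key)]

def unmask(masked, key):
    # Receiver: subtract the identical key record to recover the voice.
    return [(m - k) % LEVELS for m, k in zip(masked, key)]

samples = [3, 0, 5, 2, 4, 1]                       # quantized voice levels
key = [random.randrange(LEVELS) for _ in samples]  # shared random key

assert unmask(mask(samples, key), key) == samples
```

An eavesdropper without the key record hears only uniformly random levels, that is, noise; this is the same intuition Shannon would later formalize when he proved the one-time pad unbreakable.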
Shannon hadn’t designed the system, but he had been tasked with trying to break it, like a hacker, to see what its weak spots were, to find out if it was an impenetrable fortress that could withstand the attempts of an enemy assault.
Alan Turing was also working at Bell Labs on SIGSALY. The British had sent him over to make sure the system was secure. If Churchill was to communicate over it, it needed to be uncrackable. During the war effort Turing got to know Claude. The two weren’t allowed to talk about their top secret projects, cryptography, or anything related to their efforts against the Axis powers, but they had plenty of other stuff to talk about, and they explored their shared passions, namely, math and the idea that machines might one day be able to learn and think.
Are all numbers computable? This was a question Turing asked in his famous 1936 paper On Computable Numbers. He had shown the paper to Shannon. In it Turing defined calculation as a mechanical procedure or algorithm.
This paper got the pistons in Shannon’s mind firing. Alan had said, “It is always possible to use sequences of symbols in the place of single symbols.” Shannon was already thinking of the way information gets transmitted from one place to the next. Turing used statistical analysis as part of his arsenal when breaking the Enigma ciphers. Information theory in turn ended up being based on statistics and probability theory.
The meeting of these two preeminent minds was just one catalyst for the creation of the large field and sandbox of information theory. Important legwork had already been done by other investigators who had made brief excursions into the territory later mapped out by Shannon.
Telecommunications in general already contained within it many ideas that would later become part of the theory's core. Starting with telegraphy and Morse code in the 1830s, the most common letters were given the shortest expression: E is a single dot. Letters used less often got longer expressions, such as B, a dash followed by three dots. The whole idea of lossless data compression is embedded as a seed pattern within this system of encoding information.
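The economy of the scheme can be sketched in a few lines of Python. The dictionary below holds only a handful of letters for illustration, with a dot counted as one unit of line time and a dash as three (inter-symbol gaps are ignored for simplicity):

```python
# A few Morse code entries; dot = 1 unit of line time, dash = 3 units
MORSE = {"E": ".", "T": "-", "A": ".-", "I": "..", "B": "-...", "Q": "--.-"}

def duration(letter):
    # Total on-the-wire time for one letter, ignoring gaps
    return sum(1 if symbol == "." else 3 for symbol in MORSE[letter])

# The commonest English letters get the cheapest codes...
assert duration("E") == 1 and duration("T") == 3
# ...while rare letters like Q pay for their rarity
assert duration("Q") == 10
```

Shannon's later source-coding theorem made this intuition exact: the best achievable average code length is fixed by the entropy of the letter frequencies.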
In 1924 Harry Nyquist published Certain Factors Affecting Telegraph Speed in the Bell System Technical Journal. Nyquist’s research focused on increasing the speed of a telegraph circuit. One of the first things an engineer runs into when working on this problem is how to transmit the maximum amount of intelligence on a given range of frequencies without causing interference in the circuit or in others connected to it. In other words, how do you increase the speed and amount of intelligence without adding distortion or noise, or creating spurious signals?
In 1928, Ralph Hartley, also at Bell Labs, wrote his paper Transmission of Information. He made it explicit that information was a measurable quantity: information could only reflect the ability of the receiver to distinguish that one sequence of symbols had been intended by the sender rather than any other, that the letter A means A and not E.
Jump forward another decade to the invention of the vocoder. It was designed to use less bandwidth, compressing the voice of the speaker into less space. That same technology is now used in cellphones as codecs that compress the voice, so more lines of communication can fit on the phone companies' allocated frequencies.
WWII had a way of producing scientific side effects, discoveries that would break through to affect civilian life after the war. While Shannon worked on SIGSALY and other cryptographic projects he continued to tinker, and the paper he tinkered on had profound side effects. Twenty years after Hartley addressed the way information is transmitted, Shannon stated it this way: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
In addition to the idea of clear communication across a channel, information theory also brought the following ideas into play:
- The bit, or binary digit. One bit is the information entropy of a binary random variable that is 0 or 1 with equal probability, or the information gained when the value of such a variable becomes known.
- The Shannon limit: a formula for channel capacity, the speed limit of a given communication channel.
- Error correction: within that limit there always exist coding techniques that can overcome the noise level on a given channel. A transmitter may have to send more bits to a receiver at a slower rate, but eventually the message will get through.
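Both quantities are easy to compute. A minimal sketch, using the binary entropy function for the bit and the Shannon–Hartley formula C = B log2(1 + S/N) for channel capacity; the telephone-line figures at the end are illustrative, not taken from the text:

```python
from math import log2

def binary_entropy(p):
    # Information, in bits, carried by a binary event with probability p
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def channel_capacity(bandwidth_hz, snr):
    # Shannon-Hartley limit: maximum error-free bits per second
    # for a channel of given bandwidth and signal-to-noise power ratio
    return bandwidth_hz * log2(1 + snr)

# A fair coin flip carries exactly one bit
assert binary_entropy(0.5) == 1.0

# An illustrative phone line: 3 kHz of bandwidth and a signal-to-noise
# power ratio of 1000 (30 dB) tops out near 30 kbit/s
capacity = channel_capacity(3000, 1000)
assert 29000 < capacity < 31000
```

Note that doubling the signal-to-noise ratio buys at most one extra bit per second per hertz, which is why widening the band is often the cheaper road to capacity than boosting power.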
His theory was a strange attractor in a chaotic system of noisy information. Noise itself tends to bring diverse disciplinary approaches together, interfering in their constitution and their dynamics. Information theory, in transmitting its own intelligence, has in its own way, interfered with other circuits of knowledge it has come in contact with.
A few years later the psychologist and computer scientist J. C. R. Licklider said, “It is probably dangerous to use this theory of information in fields for which it was not designed, but I think the danger will not keep people from using it.”
Information theory encompasses every other field it can get its hands on. It’s like a black hole, and everything in its gravitational path gets sucked in. Formed at the spoked crossroads of cryptography, mathematics, statistics, computer science, thermal physics, neurobiology, information engineering, and electrical engineering it has been applied to even more fields of study and practice: statistical inference, natural language processing, the evolution and function of molecular codes (bioinformatics), model selection in statistics, quantum computing, linguistics, plagiarism detection. It is the source code behind pattern recognition and anomaly detection, two human skills in great demand in the 21st century.
I wonder if Shannon knew when he wrote ‘A Mathematical Theory of Communication’ for the 1948 issue of the Bell System Technical Journal that his theory would go on to unify, fragment, and spin off into multiple disciplines and fields of human endeavor, music just one among a plethora.
Yet music is a form of information. It is always in formation. And information can be sonified and used to make music. Raw data becomes audio dada. Music is communication, and one way of listening to it is as a transmission of information. The principles Shannon elucidated are a form of noise in the systems of world knowledge, highlighting one way of connecting different fields of study. As information theory exploded it was quickly picked up as a tool by the more adventurous music composers.
Information theory could be at the heart of making the fictional Glass Bead Game of Hermann Hesse a reality. Hesse dropped several hints and clues in his work connecting it with the same thinkers whose work served as a link to Boolean algebra, namely Athanasius Kircher, Lull, and Leibniz, all practitioners and advocates of the mnemonic and combinatorial arts. Like its predecessors, information theory is well suited to connecting the spaces between different fields. In Hesse’s masterpiece the game was created by a musician as a way of “represent[ing] with beads musical quotations or invented themes, could alter, transpose, and develop them, change them and set them in counterpoint to one another.” After some time the game was taken up by mathematicians. “…the Game was so far developed it was capable of expressing mathematical processes by special symbols and abbreviations. The players, mutually elaborating these processes, threw these abstract formulas at one another, displaying the sequences and possibilities of their science.”
Hesse goes on to explain, “At various times the Game was taken up and imitated by nearly all the scientific and scholarly disciplines, that is, adapted to the special fields. There is documented evidence for its application to the fields of classical philology and logic. The analytical study had led to the reduction of musical events to physical and mathematical formulas. Soon after philology borrowed this method and began to measure linguistic configurations as physics measured processes in nature. The visual arts soon followed suit, architecture having already led the way in establishing the links between visual art and mathematics. Thereafter more and more new relations, analogies, and correspondences were discovered among the abstract formulas obtained this way.”
In the next sections I will explore the way information theory was used and applied in the music of Karlheinz Stockhausen.
Read the rest of the Radiophonic Laboratory series.
A Mind at Play: How Claude Shannon Invented the Information Age by Jimmy Soni and Rob Goodman, Simon & Schuster, 2018
The Information: a history, a theory, a flood by James Gleick, Pantheon, 2011
The Glass Bead Game by Hermann Hesse, translated by Clara and Richard Winston, Holt, Rinehart and Winston, 1990
Information Theory and Music by Joel Cohen, Behavioral Science, 7:2
Information Theory and the Digital Age by Aftab, Cheung, Kim, Thakkar, Yeddanapudi
Logic and the art of memory: the quest for a universal language, by Paolo Rossi, The Athlone Press, University of Chicago, 2000.
“There is more in man and in music than in mathematics, but music includes all that is in mathematics.”—Peter Hoffman
Infotainment is usually thought of as light entertainment peppered with superficial “facts” and forgettable news. Yet another kind of infotainment exists: a musical kind based on mathematical algorithms. It is true entertainment filled with true information, and though it is mathematically modeled, none of it is fake.
In the twentieth century interest in the multidisciplinary fields of Information Theory and Cybernetics led to dizzying bursts of creativity when their ideas were applied to making new music. These disciplines applied rigorous math to the study of communication systems and how a signal transmitted by one person can cut through the noise of other spurious signals to be received by another person. They also made explicit the role of feedback inside a system, how signals can amplify themselves and trigger new signals. All of this was studied with complex equations and formulas.
Yet there is nothing new about the relationship between music and math.
Algorithmic music has been made for centuries. It can be traced all the way back to Pythagoras, who thought of music and math as inseparable. If music can be formalized in terms of numbers, it can also be formalized as information or data. The “data” the ancients used to drive their compositions was the movement of the stars. Ptolemy is best known to us for his geocentric view of the cosmos and the ordered spheres the celestial bodies traveled on. Besides being an astronomer, Ptolemy was also a systematic musical theorist. He believed that math was the basis of musical intervals, and he saw those same intervals at play in the spacing of the heavenly bodies, each planet and body corresponding to certain modes and notes.
Ptolemy was just one of many who believed in the reality of the music of the spheres. Out of these ancient Greek investigations into the nature of music and the cosmos came the first musical systems. The musician who used them was thus a mediator between the cosmic forces of the heavens above and the life of humanity here below.
Western music went through myriad changes across the intervening centuries after Ptolemy. World powers rose and fell, new religions came into being. Out of the mystical monophonic plainchant uttered by Christian monks in candlelit monasteries polyphony arose, and it called for new rules and laws to govern how the multiple voices were to sing together. This was called “canonic” composition. A composer in this era (the 15th century) would write a line for a single voice. The canonic rule gave the additional singers and voices the necessary instructions. For instance, one rule might be for a second voice to start singing the melody begun by the first voice after a set amount of time. Other rules denoted inversions, retrograde movement, or other practices as applied to the music.
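Because canonic rules are transformations applied to a written line, they are easy to express in code. A sketch, using an arbitrary melody in MIDI note numbers rather than anything from a 15th-century source:

```python
def delayed_entry(melody, rests):
    # A second voice sings the same melody after a set wait (canon at the unison);
    # None marks a beat of rest before the voice enters.
    return [None] * rests + melody

def inversion(melody):
    # Mirror every interval around the first note
    axis = melody[0]
    return [2 * axis - note for note in melody]

def retrograde(melody):
    # Sing the melody backwards
    return list(reversed(melody))

melody = [60, 62, 64, 62, 67]  # C D E D G, as MIDI note numbers

assert delayed_entry(melody, 2) == [None, None, 60, 62, 64, 62, 67]
assert inversion(melody) == [60, 58, 56, 58, 53]
assert retrograde(melody) == [67, 62, 64, 62, 60]
```

Composing the transforms gives the classic devices: retrograde(inversion(melody)) is the retrograde inversion that serial composers would later make routine.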
From this basis the rules, voices, and numbers of instruments were enlarged through the Renaissance until the era of “Common Practice”, roughly 1650 to 1900. This period encompassed baroque music and the classical, romantic, and impressionist movements. The 20th and 21st centuries are now giving birth to what Alvin Curran has called the New Common Practice.
In the Common Practice era tonal harmony and counterpoint reigned supreme, and a suite of rhythmic and durational patterns gave form to the music. These were the “algorithmic” sandboxes composers could play in.
The New Common Practice, according to Curran, encompasses “the direct unmediated embracing of sound, all and any sound, as well as the connecting links between sounds, regardless of their origins, histories or specific meanings; by extension, it is the self guided compositional structuring of any number of sound objects of whatever kind sequentially and/or simultaneously in time and in space with any available means.” I’ve begun to think of this New Common Practice as embracing the entire gamut of 20th and 21st century musical practices: serialism, atonality, musique concrète, electronics, solo and collective improvisation, text pieces, and the rest of it.
One vital facet of the New Common Practice is chance operations, the use of randomizing procedures to create compositions. Chance operations have a direct relation to information theory, but this approach can already be seen making cultural inroads in the 18th century, when games of chance had a brief period of popularity among composers and the musically and mathematically literate. These were a direct precursor to the deeper algorithmic musical investigations that flourished in the 20th century.
Much of this original algorithmic music work was done the old-school way, with pencil, sheets of paper, and tables of numbers. This was the way composers plotted voice-leading in Western counterpoint. Chance operations have also been used to make algorithmic music, as in the Musikalisches Würfelspiel, or musical dice game, a system that used dice to generate music at random from tables of pre-composed options. These games were quite popular throughout Western Europe in the 18th century, and a number of different versions were devised. Some didn’t use dice but simply worked by choosing random numbers.
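The mechanics of a musical dice game are easy to sketch. The table below is a toy stand-in, with labeled slots in place of the pre-composed bars a real Würfelspiel would supply:

```python
import random

# Each row of a real Würfelspiel table maps a two-dice total (2 through 12)
# to one of several pre-composed bars of music; here the bars are mere labels.
TABLE = {total: f"bar-{total}" for total in range(2, 13)}

def roll_minuet(n_bars, rng=None):
    # Roll two dice for every bar and look up the pre-composed fragment
    rng = rng or random.Random()
    return [TABLE[rng.randint(1, 6) + rng.randint(1, 6)] for _ in range(n_bars)]

minuet = roll_minuet(16)
assert len(minuet) == 16
assert all(bar in TABLE.values() for bar in minuet)
```

Since two dice sum to seven far more often than to two or twelve, the middle rows of any such table are heard much more often than the extremes.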
In his paper on the subject Stephen Hedges wrote how the middle class in Western Europe were at the time enamored with mathematics, a pursuit as much at home in the parlors of the people as in the classrooms of professors. "In this atmosphere of investigation and cataloguing, a systematic device that would seem to make it possible for anyone to write music was practically guaranteed popularity.”
The earliest known example was Johann Philipp Kirnberger's "The Ever-Ready Minuet and Polonaise Composer" of 1757. C. P. E. Bach came out with his own musical dice game, "A method for making six bars of double counterpoint at the octave without knowing the rules", a year later in 1758. In 1780 Maximilian Stadler published "A table for composing minuets and trios to infinity, by playing with two dice". Mozart was even thought to have gotten in on the dice game in 1792, when an unattributed version appeared from his music publisher a year after the composer’s death. It has never been authenticated as the maestro’s, but as with all games of possibility, there is a chance.
These games may have been one of the many inspirations behind The Glass Bead Game by Hermann Hesse. The novel was one of the primary literary inspirations and touchstones for the young Karlheinz Stockhausen. The Glass Bead Game portrays a far-future culture devoted to a mystical understanding of music; the game sits at the center of the culture of Castalia, the fictional province devoted to the pursuit of pure knowledge.
As Robin Maconie put it, the Glass Bead Game itself appears to be “an elusive amalgam of plainchant, rosary, abacus, staff notation, medieval disputation, astronomy, chess, and a vague premonition of computer machine code… In terms suggesting more than a passing acquaintance with Alan Turing’s 1936 paper ‘On Computable Numbers’, the author described a game played in England and Germany, invented at the Musical Academy of Cologne, representing the quintessence of intellectuality and art, and also known as ‘Magic Theater’.”
Hesse wrote his book between 1931 and 1943. The interdisciplinary game at the heart of the book prefigures Claude Shannon’s explosive information theory, established in his 1948 paper A Mathematical Theory of Communication. That paper in turn bears a debt to Alan Turing, whom Shannon met in 1942. Norbert Wiener published his work on cybernetics the same year as Shannon. All of these ideas were bubbling up together out of the minds of the leading intellectuals of the day: ideas about computable numbers, the transmission of information, communication, and thinking in systems, all of which would give artists practical tools for connecting one field to another, as Hesse showed was possible in the fictional world of Castalia.
Robin Maconie again had the insight to see the connection: Alan Turing visualized “a universal computing machine as an endless tape on which calculations were expressed as a sequence of filled or vacant spaces, not unlike beads on a string”.
As the Common Practice era of Western music came to an end at the close of the 19th century, the mathematically inclined serialism came into its own, and as the decades wore on games of chance made a resurgence, defining much of the music of the 20th century. With the advent of computers the paper-and-pencil method has taken a temporary backseat in favor of methods that introduce programmed chance operations.
Composers like John Cage took to the I Ching with as much tenacity as the character Elder Brother did in Hesse’s book. Karlheinz Stockhausen meanwhile used his music as a means to make connections between myriad subjects and to create his own unique ‘Magic Theater’. Cybernetics and Information Theory each contributed to the thinking of these and other composers.
Dice Music in the Eighteenth Century by Stephen A. Hedges, Music and Letters 59: 180–87
Conceptualizing music: cognitive structure, theory and analysis, by Lawrence M. Zbikowski, Oxford, 2002
The New Common Practice by Alvin Curran
Other planets: the complete works of Karlheinz Stockhausen 1950–2007 by Robin Maconie, Rowman & Littlefield Publishers, 2016
A set of musician's dice has been made that offers up numerous possibilities for the practicing musician. Using random processes doesn't just have to be for avant-garde composers anymore!
"The Musician’s Dice are patented, glossy black 12-sided dice, engraved in silver with the chromatic scale. They can be used in any number of ways – they bring the element of chance into the musical process. They're great for composing Aleatory and 12 tone-music, and as a basis for improvisation – they’re really fun in a jam session. They also make an effective study tool: they can be used as “musical flash cards” when learning harmony, and their randomness makes for fresh and challenging exercise in sight-singing and ear training. Plus, they look really cool on the coffee table, and give you a chance to throw around words like "aleatory.""
Below two musicians play around with using these dice.
At Bell Labs Max Mathews was the granddaddy of all its music makers. If you use a computer to make or record music, he is your granddaddy too. In 1957 Max wrote a program for a digital computer called Music I. It was a landmark demonstration of the ability to write code commanding a machine to synthesize music. Computers can do things and play things that humans alone cannot, and Music I opened up a world of new timbral and acoustic possibilities. This was a perfect line of inquiry for the director of Bell Laboratories' Behavioral and Acoustic Research Center, where Mathews explored a spectrum of ideas and technologies between 1955 and 1987. Fresh out of MIT, where he had received an Sc.D. in electrical engineering, Mathews was ready to get to work, and Music I was only the beginning of a long creative push in technology and the arts.
Max’s corner of the sprawling laboratory in Murray Hill, New Jersey carried out research in speech communication, speech synthesis, human learning and memory, programmed instruction, the analysis of subjective opinions, physical acoustics, industrial robotics and music.
Max followed the Music I program with II, III, IV and V, each iteration taking its capabilities further and widening its parameters. These programs carried him through a decade of work and achievement. As noted in the chapter on the Synthesis of Speech, Max had created the musical accompaniment to “Daisy Bell (A Bicycle Built for Two)”, later made famous by the fictional computer HAL in Stanley Kubrick’s 2001: A Space Odyssey.
In 1970 he began working with Richard Moore to create the GROOVE system, intended to be a “musician-friendly” computer environment. The earlier programs had broken incredible new ground, but their use leaned more toward those who could program computers and write code in their esoteric languages than toward the average musician or composer of the time. GROOVE was the next step in bringing computer music to its potential users. It was a hybrid digital-analog system whose name stood for Generated Real-time Output Operations on Voltage-controlled Equipment.
Max notes, “Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.”
But Chowning’s FM synthesis had yet to make its commercial impact when GROOVE was being created. It was still the 1970s, and affordable computers and synthesizers had yet to make it into homes outside those of the most devoted hobbyists. GROOVE was a first step toward making computer music in real time. The setup included an analog synth, a computer, and a monitor. The computer’s memory made it appealing to musicians, who could store their manipulations of the interface for later recall. It was a clever workaround for the limitations of each technology: the computer was used for its ability to store the musical parameters, while the synth created the timbres and textures without relying on digital programming. This setup allowed creators to play with the system and fine-tune what they wanted it to do for later re-creation.
Bell Labs had acquired a Honeywell DDP-224 computer from MIT to use specifically for sound research. This is what GROOVE was built on. The DDP-224 was a 24-bit transistor machine that used magnetic core memory to store data and program instructions. Its disk storage also meant libraries of programming routines could be written, which allowed users to create customized logic patterns. A composition could be tweaked, adjusted, and mixed in real time on the knobs, controls, and keys. In this manner a piece could be reviewed as a whole or in sections and then replayed from the stored data.
When the system was first demonstrated in Stockholm at the 1970 conference on Music and Technology organized by UNESCO, music by Bartók and Bach was played. A few years later Laurie Spiegel would grasp the unique compositional possibilities of the system and take it to the max.
In the meantime Max himself was a man in demand. IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in France brought him on board as a scientific advisor as it built its own state-of-the-art sound laboratory and studios between 1974 and 1980.
In 1987 Max left his position at Bell Labs to become a Professor of Music (Research) at Stanford University. There he continued to work on musical software and hardware, with a focus on using the technology in a live setting. “Starting with the GROOVE program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the Radio-Baton, plus a program, the Conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.”
Today the lineage of the MUSIC software Max wrote through many versions lives on in the software suite Max/MSP. Named in honor of Max Mathews, the software is a powerful visual programming language that has grown out of its musical core into a tool for multimedia performance. The program has been alive, well, and growing for more than thirty years and has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations. The software is designed and maintained by the company Cycling ’74.
Building off the gains in musical software developed by Mathews, Miller Smith Puckette (MSP) started to work on a program originally called The Patcher at IRCAM in 1985. This first version for Macintosh had a graphical interface that allowed users to create interactive scores. It wasn’t yet powerful enough to do real time synthesis. Instead it used MIDI and similar protocols to send commands to external sound hardware.
Four years later Max/FTS (Faster Than Sound) was developed at IRCAM. This version could be ported to the IRCAM Signal Processing Workstation (ISPW) for the NeXT computer system. This time around it could do real time synthesis using an internal hardware digital signal processor (DSP), making it a forerunner to the MSP extensions that would later be added to Max. 1989 was also the year the software was licensed to Opcode, which launched a commercial version at the beginning of the next decade.
Opcode held onto the program until 1997. During those years a talented console jockey named David Zicarelli further extended and developed the promise of Max. Yet Opcode wanted to end their run with the software. Zicarelli knew it had even further potential, so he acquired the rights and started his own company, Cycling ’74. Zicarelli’s timing proved fortuitous, as Gibson Guitar ended up buying Opcode and then, after owning it for a year, shutting it down. Such is the fabulous world of silicon corporate buyouts.
Miller Smith Puckette had in the meantime released the independent and open-source composition tool Pure Data (Pd). It was a fully redesigned tool that still fell within the same tradition as his earlier program for IRCAM. Zicarelli, sensing that a fruitful fusion could be made manifest, released Max/MSP in 1997, the MSP portion being derived from Puckette’s work on Pure Data. The two have been inseparable ever since.
The achievement meant that Max was now capable of real time manipulation of digital audio signals sans dedicated DSP hardware. The reworked version of the program was also something that could work on a home computer or laptop. Now composers could use this powerful tool to work in their home studios. The musical composition software that had begun on extensive and expensive mainframes was now available to those who were willing to pay the entry fee. You didn’t need the cultural connections it took to work at places like Bell Labs or IRCAM. And if you had a computer but couldn’t afford the commercial Max/MSP you could still download Pd for free. The same is true today.
Extension packs were now being written by other companies, contributing to the ecology around Max. In 1999 the Netochka Nezvanova collective released a suite of externals that added extensive real-time video control to Max. This made the program a great resource for multimedia artists. Various other groups and companies continued to tinker and add things on.
It got to the point where Max Mathews himself, well into his golden years, was learning how to use the program named after him. Mathews received many accolades and appointments for his work. He was a member of the IEEE, the Audio Engineering Society, the Acoustical Society of America, the National Academy of Sciences, and the National Academy of Engineering, and a fellow of the American Academy of Arts and Sciences. He received the Silver Medal in Musical Acoustics from the Acoustical Society of America and was named Chevalier de l’Ordre des Arts et des Lettres by the French Republic.
Mathews died of complications from pneumonia on April 21, 2011, in San Francisco. He was 84. He was survived by his wife, Marjorie, his three sons, and six grandchildren.
Max Mathews. “Horizons in Computer Music,” March 8-9, 1997, Indiana University
Read the rest of the Radiophonic Laboratory series.
Just as the folks inside the Sound-House of the BBC’s Radiophonic Workshop continued to refine their approach and techniques to electronic music, another older sound house back across the pond in America continued to research new “means to convey sounds in trunks and pipes, in strange lines and distances”. Where the BBC Radiophonic Workshop used budget-friendly musique concrète techniques to create their otherworldly incidental music, the pure research conducted at Bell Laboratories was widely diffused, and the electronic music systems that arose out of those investigations were incidental and secondary byproducts. The voder and vocoder were just the first of these byproducts.
Hal Alles was a researcher in digital telephony. The fact that he is remembered as the creator of what some consider the first digital additive synthesizer is a quirk of history. Other additive synthesizers had been made at Bell Labs, but these were software programs written for their supersized computers.
Alles needed to sell his digital designs within and beyond a company that had long been the lords of analog, and the pitch needed to be interesting. The synthesizer he came up with was his way of demonstrating the company’s digital prowess while entertaining his internal and external clients at the same time. What he came up with was called the Bell Labs Digital Synthesizer, or sometimes the Alles Machine or ALICE.
It should be noted that Hal bears no relation to the computer in 2001: A Space Odyssey. The engineer recalls those heady days in the late sixties and seventies. “As a research organization (Bell Labs), we had no product responsibility. As a technology research organization, our research product had a very short shelf life. To have impact, we had to create ‘demonstrations’. We were selling digital design within a company with a 100 year history of analog design. I got pretty good at 30 minute demonstrations of the real time capabilities of the digital hardware I was designing and building. I was typically doing several demonstrations a week to Bell Labs people responsible for product development. I had developed one of the first programmable digital filters that could be dynamically reconfigured to do all of the end telephone office filtering and tone generation. It could also be configured to play digitally synthesized music in real time. I developed a demo of the telephone applications (technically impressive but boring to most people), and ended the demo with synthesized music. The music application was almost universally appreciated, and eventually a lot of people came to just hear the music.”
Max Mathews was one of the people who got to see one of these demos, where the telephonic equipment received a musical treatment. Mathews was the creator of the MUSIC-N series of computer synthesis programming languages. He was excited by what Alles was doing and saw its potential. He encouraged the engineer to develop a digital music instrument.
“The goal was to have recording studio sound quality and mixing/processing capabilities, orchestra versatility, and a multitude of proportional human controls such as position sensitive keyboard, slides, knobs, joysticks, etc,” Mathews said. “It also needed a general purpose computer to configure, control and record everything. The goal included making it self-contained and ‘portable’. I proposed this project to my boss while walking back from lunch. He approved it before we got to our offices.”
Harmonic additive synthesis had already been used back in the 1950s by linguistics researchers who were working on speech synthesis, and Bell Labs was certainly in on the game. Additive synthesis at its most basic works by adding sine waves together to create timbre. The more common technique until that time had been subtractive synthesis, which used filters to remove or attenuate frequencies from a harmonically rich source in order to shape its timbre.
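The sine-summing idea is simple enough to sketch in a few lines of code. The following is a minimal illustration, not any historical Bell Labs program: each harmonic is a sine wave at an integer multiple of the fundamental, scaled by its own amplitude, and the partials are simply added together.

```python
import math

def additive_sample(t, freq, harmonics):
    """Sum sine partials; harmonics is a list of (multiple, amplitude) pairs."""
    return sum(amp * math.sin(2 * math.pi * freq * mult * t)
               for mult, amp in harmonics)

# Odd harmonics at 1/n amplitude give a crude square-wave-like timbre.
harmonics = [(n, 1.0 / n) for n in (1, 3, 5, 7)]
sample_rate = 44100
# Render 10 ms of a 220 Hz tone as a list of samples.
tone = [additive_sample(i / sample_rate, 220.0, harmonics)
        for i in range(441)]
```

Changing the amplitude list reshapes the timbre, which is the whole appeal of the technique: the composer specifies the spectrum directly rather than carving it out of a rich source with filters.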
Computers were able to do additive synthesis with wavetables that had been pre-computed, but it could also be done by mixing the output of multiple sine wave generators. This is essentially what Karlheinz Stockhausen did with Studie II, though he achieved the effect by building up layers of pure sine waves on tape rather than with a pre-configured synth or computer setup.
That method is laborious. A machine that can do it for you goes a long way towards being able to labor at other things while making music.
ALICE was a hybrid machine in that it used a mini-computer to control a complex bank of sound generating oscillators. The mini-computer was an LSI-11 by the Digital Equipment Corporation, a cost-reduced version of their PDP-11, which was in production for twenty years starting in 1970. This controlled the 64 oscillators, whose output was then mixed to create a number of distinct sounds and voices. It had programmable sound generating functions and the ability to accept a number of different input devices.
The unit was outfitted with two 8-inch floppy drives supplied by Heathkit; they made their own version of the LSI-11 and sold it as the H11. AT&T rigged it out with one of their color video monitors. A custom converter was made that sampled the analog inputs and converted them to 7-bit digital resolution 250 times a second. There were a number of inputs used to work with ALICE in real time: two 61-key piano keyboards, 72 sliders alongside various switches, and four analog joysticks just to make sure the user was having fun. These inputs were interpreted by the computer, which in turn controlled the outputs sent to the sound generators as parameters. The CPU could handle around 1,000 parameter changes per second before it got bogged down.
The sound generators themselves were quite complex. A mere 1,400 integrated circuits were used in their design. Out of the 64 oscillators, the first bank of 32 were used as master signals. This meant ALICE could be expected to achieve 32-note polyphony. The second set was slaved to the masters and generated a series of harmonics. If this wasn’t enough sound to play around with, ALICE was also equipped with 32 programmable filters and 32 amplitude multipliers. With the added bank of 256 envelope generators, ALICE had a lot of sound potential and sound paths that could be explored through her circuitry. All of those sounds could be mixed in many different ways into the 192 accumulators she was also equipped with. Each of the accumulators was then sent to one of the four 16-bit output channels and reconverted from digital back into analog on the audio output.
Waveforms were generated by looking up the amplitude for a given time in a 64k-word ROM table. There were a number of tricks Alles programmed into the table to reduce the number of calculations the CPU needed to run. 255 timers outfitted with 16 FIFO stacks controlled the whole shebang. The user put events into a timestamp-sorted queue that fed it all into the generator.
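The table-lookup trick is still how digital oscillators avoid computing a sine for every sample. Here is a hypothetical sketch of the idea, with a small table standing in for ALICE’s 64k-word ROM: a phase accumulator steps through the table at a rate set by the desired frequency, so each sample costs one lookup instead of one trig calculation.

```python
import math

TABLE_SIZE = 4096  # stand-in for ALICE's much larger 64k-word ROM
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_tone(freq, sample_rate, n_samples):
    """Phase-accumulator oscillator: one table lookup per sample, no trig."""
    phase = 0.0
    step = freq * TABLE_SIZE / sample_rate  # table slots to advance per sample
    out = []
    for _ in range(n_samples):
        out.append(SINE_TABLE[int(phase) % TABLE_SIZE])
        phase += step
    return out

tone = wavetable_tone(440.0, 44100, 1024)
```

The table only has to be computed (or burned into ROM) once; after that any frequency can be produced just by varying the step size.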
Though the designers claimed the thing was portable, all the equipment made it weigh in at a hefty 300 pounds, making it an unlikely option for touring musicians. As the world’s first true digital additive synthesizer it was quite the boat anchor.
Completed in 1976, the machine had only one full-length composition recorded for it, though a number of musicians, including Laurie Spiegel, whose work will be explored later, played the instrument in various capacities. For the most part, though, the Alles Synth was brushed aside; even if the scientists and engineers at Bell Labs were tasked to engage in pure research, they still had business to answer to. A marketing use was found for Hal’s invention once again in 1977.
In that year the Motion Picture Academy was celebrating the 50th anniversary of the talkies. The sound work for The Jazz Singer, the first talking picture, had been done by Western Electric, with their Vitaphone system technology. The successful marriage of moving image and sound first seen and heard in that movie wouldn’t have been possible without the technology developed by the AT&T subsidiary and Ma Bell was still keen to be in on the commemoration of the film. ALICE is what they chose to use as the centerpiece for the event.
A Bell Labs software junkie by the name of Doug Bayer was brought in to improve the operating system of the synth and try to make the human interface a bit more user friendly. The instrument was flown to Hollywood at considerable risk. The machine was finicky enough without transporting it. Taking it on a plane, where it could get banged up, whacking out all of its components in just one bump and potentially sending it into meltdown mode, was not out of the question.
So they hired musician and composer Laurie Spiegel, who’d already been working at the Labs without pay, to be filmed playing ALICE. This would be shown in the event that the musician they hired to play it live, Roger Powell, wasn’t able to do so due to malfunction. This film is the only known recording of the instrument in performance.
Yet to hear how the Bell Labs Digital Synthesizer sounds, look no further than Don Slepian’s album Sea of Bliss. Max Mathews had hired Slepian to work with the synth as an artist in residence between 1979 and 1982. Don had been born into a scientific family. From an early age he demonstrated technical talent and musical ability. He had begun making music in 1968, programming his own computers, soldering together his own musical circuits, and experimenting with tape techniques. Working for the Defense Advanced Research Projects Agency (DARPA), Don served as a tester on an early iteration of the internet, and for a time he lived in Hawaii and played as a synthesizer soloist with the Honolulu Symphony. All of this made him a perfect fit as artist in residence at Bell Labs.
The results of his work are on the album: epic-length cuts of deep ambient music bringing relaxation and joy to the listener. It’s the audio version of taking Valium. Listen to it and feel the stress of life melt away.
Don Slepian described his 1980 masterpiece for the online Ambient Music Guide. “It’s stochastic sequential permutations (the high bell tones), lots of real time algorithmic work, but who cares? It's pretty music: babies have been born to it, people have died to it, some folks have played it for days continuously. No sequels, no formulas. It was handmade computer music."
The Bell Labs Digital Synthesizer was soon to leave its birthplace after Don had done his magic with the machine. In 1981 ALICE was disassembled and donated to the TIMARA Laboratories at the Oberlin Conservatory of Music.
Oberlin, and by extension TIMARA (Technology in Music and Related Arts), has a history that reaches back to the very beginning of electronic music in the mid-19th century. None other than Elisha Gray was an adjunct physics professor at the college. He is considered by some to be the father of the synthesizer due to his invention of the musical telegraph and his seventy-plus patents for inventions that were critical in the development of telecommunications, electronic music and other fields. If it had not been for Gray’s electromechanical oscillator, Thaddeus Cahill would never have been able to create that power-hungry beast of an instrument, the Telharmonium.
The Music Conservatory at Oberlin dates back to 1865 and they joined the ranks of those radio and television stations who built electronic music studios with the opening of TIMARA in 1967. The department was founded by Olly Wilson as a response to the demand for classes in electronics from composition students. It became the first of a number of departments in the American higher education scene to create a space for experimentation in analog synthesis and mixed media arts.
Though ALICE is now enshrined in one of the many sound laboratories at TIMARA, her influence continued to be felt not long after she was sequestered there. A number of commercial synthesizers based on the Alles design were produced in the 1980s.
The Atari AMY sound chip is a case in point and was the smallest of the products to be designed. The name stood for Additive Music sYnthesis. It still had 64 oscillators, but they were reduced to a single-IC sound chip, one that had numerous design issues. Additive synthesis could now be done with less, though it never really got into the hands of users. It was scheduled to be used on a new generation of 16-bit Atari computers, for the next line of game consoles, and by their arcade division. AMY never saw the light of day in any configuration. Even after Atari was sold in 1984, she remained waiting in the dark to get used on a project, but was cut from being included in new products after many rounds at the committee table, where so many dreams wind up dead.
Still other folks in the electronic music industry made use of the principles first demonstrated by ALICE. The Italian company Crumar and Music Technologies of New York formed a partnership to create Digital Keyboards. Like Atari they wanted to resize the Alles Machine, make it smaller. They came up with a two-part invention using a Z-80 microcomputer and a single keyboard with limited controls. They gave it the unimaginative name Crumar General Development System, and it sold in 1980 for $30,000. Since it was out of the price range of your average musician, they marketed the product to music studios. Wendy Carlos got her hands on one, and the results can be heard on the soundtrack to Tron.
Other companies got into the game and tried to produce something similar at lower cost, but none of these really managed to find a good home in the market due to the attached price tag. When Yamaha released the DX7 in 1983 for $2,000, the demand for additive synths tanked. The DX7 implemented FM synthesis, which enabled it to achieve many of the same effects as ALICE with as few as two oscillators. FM synthesis and its relationship to FM radio modulation will be looked at in detail in another article.
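The economy of FM is easy to see in a sketch. This is a generic two-operator FM voice in the spirit of the DX7, not Yamaha’s actual implementation: one sine (the modulator) wobbles the phase of another (the carrier), and the modulation index alone controls how many sidebands, and thus how bright a timbre, comes out.

```python
import math

def fm_sample(t, carrier_hz, ratio, index):
    """Two-operator FM: a modulator sine bends the carrier's phase.
    ratio sets the modulator frequency relative to the carrier;
    index sets the modulation depth (brightness)."""
    modulator = math.sin(2 * math.pi * carrier_hz * ratio * t)
    return math.sin(2 * math.pi * carrier_hz * t + index * modulator)

sample_rate = 44100
# An inharmonic ratio and a moderate index give a bell-like tone.
bell = [fm_sample(i / sample_rate, 200.0, 1.4, 3.0) for i in range(2048)]
```

Where additive synthesis needs one oscillator per partial, here two sines generate a whole spectrum of sidebands at once, which is exactly why the DX7 could undercut machines like ALICE.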
It had all started out as a way for Hal Alles to look at potential problems in digital communications, such as switching, distortion, and echo. It ended up becoming a tool for extending human creativity.
IS THERE ANY ESCAPE FROM NOISE?
In our machine dominated age there is hardly any escape from noise. Even in the most remote wilderness outpost planes will fly overhead to disrupt the sound of the wind in the trees and the birds in the wind. In the city it is so much part of the background we have to tune in to the noise in order to notice it because we’ve become adept at tuning it out. Roaring motors, the incessant hum of the computer fan, the refrigerator coolant, metal grinding at the light industrial factory down the street, the roar of traffic on I-75, the beep of a truck backing up, these and many other noises are all part of our daily soundscape.
Throughout human history musicians have sought to mimic the sounds around them. The gentle drone of the tanpura, a stringed instrument that accompanies sitar, flute, voice and other instruments in classical Indian music, was said to mimic the gentle murmur of rivers and streams. Should it be a surprise then, that in the nineteenth and twentieth centuries musicians and composers started to mimic the sounds of the machines around them? In bluegrass and jazz there is a whole slew of songs that copied the entrancing rhythms of the train. As more and more machines filled up the cities, is it any wonder that the beginnings of a new genre of music, noise music, started to emerge? Is it any wonder that, as acoustic and sound technology progressed, our music-making practices also came to be dominated by machines?
THE ART OF NOISES
And just what is music anyway? There are many definitions from across the span of time and human culture. Each definition has been made to fit the type, style and particular practice or praxis of music.
In his 1913 manifesto The Art of Noises the Italian Futurist thinker Luigi Russolo argued that the human ear had become accustomed to the speed, energy, and noise of the urban industrial soundscape. In reaction to those new conditions he thought there should be a new approach to composition and musical instrumentation. He traced the history of Western music back to Greek musical theory, which was based on the mathematical tetrachord of Pythagoras and did not allow for harmony. This changed during the Middle Ages, first with the development of plainchant in Christian monastic communities. Plainchant employs the modal system, which is used to work out the relative pitches of each line on the staff, and its notation was the first revival of musical notation after knowledge of the ancient Greek system was lost. In the late 9th century, plainsong began to evolve into organum, which led to the development of polyphony. Until then the chord, as such, did not exist.
Russolo thought that the chord was the "complete sound." He noted that chords developed slowly over time, first moving from the "consonant triad to the consistent and complicated dissonances that characterize contemporary music." He pointed out that early music tried to create sounds that were sweet and pure, and that it then evolved to become more and more complex. By the time of Schoenberg and the twelve-tone revolution of serial music, musicians sought to create new and more dissonant chords. These dissonant chords brought music ever closer to his idea of "noise-sound."
With the relative quiet of nature and pre-industrial cities disturbed Russolo thought a new sonic palette was required. He proposed that electronics and other technology would allow futurist musicians to substitute for the limited variety of timbres available in the traditional orchestra. His view was that we must "break out of this limited circle of sound and conquer the infinite variety of noise-sounds." This would be done with new technology that would allow us to manipulate noises in ways that never could have been done with earlier instruments. In that, he was quite correct.
Russolo wasn’t the only one thinking of the aesthetics of noise, or seeking new definitions of music. French Modernist composer Edgard Varèse said that “music is organized sound.” It was a statement he used as a guidepost for his aesthetic vision of "sound as living matter" and of "musical space as open rather than bounded". Varèse thought that "to stubbornly conditioned ears, anything new in music has always been called noise", and he posed the question, "what is music but organized noises?" An open view of music allows new elements to come into the development of musical traditions, where a bounded view would try to keep out those things that did not fit the preexisting definition.
Out of this current of noise music initiated in part by Russolo and Varèse a new class of musician would emerge, the musician of sounds.
MUSICIAN OF SOUNDS
Fellow Frenchman Pierre Schaeffer developed his theory and practice of musique concrète during the 1930s and ’40s and saw it spread in the ’50s to people such as Karlheinz Stockhausen, the founders of the BBC Radiophonic Workshop, F.C. Judd and many others. Musique concrète was a practical application of Russolo’s idea of “noise-sound” and an exploration of the expanded timbres made possible by then-new studio techniques. It was also a way of making music according to the “organized sound” definition, and it was distinct from previous methods in being the first type of music completely dependent on recording and broadcast studios.
In musique concrète, sounds are sampled and modified through the application of audio effects and tape manipulation techniques, then reassembled into a form of montage or collage. It can feature any sounds derived from recordings of musical instruments, the human voice, field recordings of the natural and man-made environment, or sounds created in the studio. Schaeffer was an experimental audio researcher who combined his work in the field of radio communications with a love for electro-acoustics. Because Schaeffer was the first to use and develop these studio music-making methods, he is considered a pioneer of electronic music and one of the most influential musicians of the 20th century. These recording and sampling techniques, which he was the first to use and practice, are now part of the standard operating procedures used by nearly all record production companies around the world. Schaeffer’s efforts and influence in this area earned him the title “Musician of Sounds.”
Schaeffer, born in 1910, had a wide variety of interests throughout his eighty-five years on this planet. He worked variously across the fields of composing, writing, broadcasting, and engineering, and as a musicologist and acoustician. His work was innovative in science and art. It was after World War II that he developed musique concrète, all while continuing to write essays, short novels, biographies and pieces for the radio. Much of his writing was geared towards the philosophy and theory of music, which he then demonstrated in his compositions.
It is interesting to think of the influences on him as a person. Both his parents were musicians, his father a violinist and his mother a singer, but they discouraged him from pursuing a career in music and instead pushed him into engineering. He studied at the École Polytechnique, where he received a diploma in radio broadcasting. He brought the perspective and approach of an engineer, along with his inborn musicality, to bear on his various activities.
Schaeffer got his first telecommunications gig in 1934 in Strasbourg. The next year he got married, and the couple had their first child before moving to Paris, where he began work at Radiodiffusion Française (later called Radiodiffusion-Télévision Française, RTF). As he worked in broadcasting he started to drift away from his initial interests in telecommunications towards music. When these two sides met he really began to excel.
After convincing the management at the radio station of the alternate possibilities inherent in the audio and broadcast equipment, as well as the possibility of using records and phonographs as a means for making new music, he started to experiment. He would record sounds to phonographs and speed them up, slow them down, play them backwards, run them through other audio processing devices, and mix sounds together. While all this is just par for the course in today’s studios, it was the bleeding edge of innovation at the time.
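Schaeffer’s basic phonograph operations map neatly onto list manipulations once a sound is treated as a sequence of sample values. The following is a toy sketch with made-up helper names, not anything from Schaeffer’s studio: reversal plays the disc backwards, crude resampling mimics changing the turntable speed (pitch and duration shift together, exactly as on a phonograph), and mixing sums two sounds.

```python
def reverse(samples):
    """Play the sound backwards."""
    return samples[::-1]

def change_speed(samples, factor):
    """Naive resampling: factor > 1 speeds up (raising pitch),
    factor < 1 slows down, as varying turntable speed does."""
    n = int(len(samples) / factor)
    return [samples[int(i * factor)] for i in range(n)]

def mix(a, b):
    """Overlay two sounds by summing their samples."""
    length = max(len(a), len(b))
    pad = lambda s: s + [0.0] * (length - len(s))
    return [x + y for x, y in zip(pad(a), pad(b))]

source = [0.0, 0.5, 1.0, 0.5]              # a tiny stand-in "recording"
backwards = reverse(source)                # [0.5, 1.0, 0.5, 0.0]
double_speed = change_speed(source, 2.0)   # [0.0, 1.0]
layered = mix(source, backwards)
```

Chaining such operations, on records and later on tape, is the essence of the montage technique described above.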
With these techniques mastered he started to work with people he met via the RTF. All this experimentation had as a natural outgrowth a style that lent itself to the avant-garde of the day. The sounds he produced challenged the way music had been thought of and heard. With the use of his own and his colleagues’ engineering acumen, new electronic instruments were made to expand on the initial processes in the audio lab, which eventually became formalized as the Club d’Essai, or Test Club.
In 1942 Pierre founded the Studio d’Essai, later dubbed the Club d’Essai, at RTF. It started as an outgrowth of Schaeffer’s radiophonic explorations, was active in the French Resistance during World War II, and later became a center of musical activity. It was responsible for the first broadcasts in liberated Paris in August 1944. He was joined in the leadership of the Club by Jacques Copeau, the theatre director, producer, actor, and dramatist.
It was at the Club where many of Schaeffer’s ideas were put to the test. After the war Schaeffer wrote a paper that discussed how sound recording creates a transformation in the perception of time, due to the ability to slow down and speed up sounds. The essay showed his grasp of sound manipulation techniques, which were also demonstrated in his compositions.
In 1948 Schaeffer initiated a formal “research into noises” at the Club d’Essai, and on October 5th of that year presented the results of his experimentation at a concert given in Paris. Five works for phonograph (known collectively as Cinq études de bruits, or Five Studies of Noises), including Étude violette (Study in Purple) and Étude aux chemins de fer (Study of the Railroads), were presented. This was the first flowering of the musique concrète style, and from the Club d’Essai another research group was born.
GRM: Groupe de Recherche de Musique Concrète
In 1949 another key figure in the development of musique concrète stepped onto the stage. By the time Pierre Henry met Pierre Schaeffer via the Club d’Essai, the twenty-one-year-old percussionist-composer had already been experimenting with sounds produced by various objects for six years. He was obsessed with the idea of integrating noise into music, and had already studied with the likes of Olivier Messiaen, Nadia Boulanger, and Félix Passerone at the Paris Conservatoire from 1938 to 1948.
For the next nine years he worked at the Club d'Essai studio at RTF. In 1950 he collaborated with Schaeffer on the piece Symphonie pour un homme seul. Two years later he scored the first musique concrète to appear in a commercial film, Astrologie ou le miroir de la vie. Henry remained a very active composer and scored for a number of other films and ballets.
Together the two Pierres were quite a pair and founded the Groupe de Recherche de Musique Concrète (GRMC) in 1951. This gave Schaeffer a new studio, which included a tape recorder. This was a significant development for him as he previously only worked with phonographs and turntables to produce music. This sped up the work process, and also added a new dimension with the ability to cut up and splice tape in new arrangements, something not possible on a phonograph. Schaeffer is generally acknowledged as being the first composer to make music using magnetic tape.
Eventually Schaeffer had enough experimentation and material under his belt to publish À la Recherche d'une Musique Concrète ("In Search of a Concrete Music") in 1952, which was a summation of his working methods up to that point.
Schaeffer remained active in other aspects of music and radio throughout the ’50s. In 1954 he co-founded Ocora, a music label and facility for training broadcast technicians. Ocora stood for the “Office de Coopération Radiophonique”. The purpose of the label was to preserve, via recordings, rural soundscapes in Africa. Doing this kind of work also put Schaeffer at the forefront of field recording and of the preservation of traditional music. The training side of the operation helped prepare people to work with the African national broadcasting services.
His last electronic noise etude was realized in 1959, the Étude aux objets (Study of Objects).
For Pierre Henry’s part, two years after leaving the RTF he founded, with Jean Baronnet, the first private electronic studio in France, the Apsone-Cabasse Studio. Henry later paid tribute with his composition Écho d’Orphée.
A CONCRETE LEGACY
Musique remains concrète. Schaeffer had known of the “noise orchestras” of his predecessor Luigi Russolo, but took the concept of noise music and developed it further by making it clear that any and all sounds had a part to play in the vocabulary of music. He created the toolkit later experimenters took as a starting point. He was the original sampler. In all his work he emphasized the role of play, or jeu, in making music. His idea of jeu came from the French verb jouer, which shares the same dual meaning as the English word play: to make pleasing sounds or songs on a musical instrument, and to engage with things for enjoyment and recreation. Taking sounds and manipulating them, seeing what certain processes will do to them, is at the heart of discovery and play inside the radiophonic laboratory. The ability to play opens up the mind to new possibilities.
This article originally appeared in the April 2020 edition of the Q-Fiver.
In 1988, the same year Negativland was pioneering the concept and practice of the Teletour, another maverick experimental music composer produced a radio concert like no other before or since. His name is Alvin Curran and the piece in question was his Crystal Psalms, a concerto for musicians in six European nations, simultaneously performed, mixed and broadcast live in stereo to listeners stretched from Palermo, Italy to Helsinki, Finland via six separate but synchronized radio stations.
The name of the radio concerto came from an event that Curran wanted to commemorate with the solemnity it was due: Kristallnacht, otherwise known as Crystal Night or the Night of Broken Glass. It had happened fifty years before the broadcast, on November 9th and 10th, in Germany. This was the date of the November Pogroms, when civilian and Nazi paramilitary forces mobbed the streets to attack Jewish people and their property. The horrendous event was dubbed Kristallnacht due to all the broken glass left on the ground after the windows of Jewish stores, buildings and synagogues were smashed.
On Kristallnacht rioters destroyed 267 synagogues throughout Germany, Austria and the Sudetenland. They ransacked and set fire to homes, hospitals and schools. 30,000 Jewish men were rounded up and sent to concentration camps. This was the opening prelude before the sick opus of the Third Reich’s genocide. It was Hitler’s green light, ramping up his twisted plans. The Third Reich had moved on from economic, political and social persecution to physical violence and murder. The Holocaust had begun.
The year before the 50th anniversary of Kristallnacht, a number of cultural and arts organizations had begun making plans for a series of worldwide memorial events. Alvin Curran was in on some of these conversations. Curran had long been part of a vanguard group of ex-pat American composers living in Italy. He was also a founding member of the collective acoustic and electronic improvisation group Musica Elettronica Viva, sometimes glossed as a Million Electron Volts or simply MEV. They formed in Rome in 1966 and are still active today.
Started by three young Americans with master's degrees in music composition from Yale and Princeton, MEV combined an Ivy League classical pedigree with a tendency towards musical anarchism. Just as their music often involved chance operations, or the use of random procedures, the members of the group met by chance (or was it Providence?) on the banks of the Tiber River in Rome in 1965. Without scores, without conductors, they went like bold explorers into the primeval past of music, and its future. Curran says of the band, “…Composers all, nurtured in renowned ivy gardens; some mowed lawns. They met in Rome, near the Cloaca Maxima—and without further ado, began like experimental archeologists to reconstruct the origins of human music. They collected shards of every audible sound, they amplified the inaudible ones, they declared that any vibrating object was itself ‘music,’ they used electricity as a new musical space and cultural theory, they ultimately laid the groundwork for a new common practice. Every audible gurgle, sigh, thump, scratch, blast, every contrapuntal scrimmage, every wall of sound, every two-bit drone, life-threatening collision, heave of melodic reflux that pointed to unmediated liberation, wailing utopias, or other disappearing acts—anything in fact that hinted at the potential unity among all things, space, and times—were MEV’s ‘materia prima.’”
Curran draws from this same ‘materia prima’ as a prolific musician and composer, and by the 1980s he had an established solo career. At the time of this writing that solo career is long and storied. Crystal Psalms is just one of his many innovative works, and one of a number of pieces he created specifically for radio. To my knowledge it is the most technically complex of his radio works.
Crystal Psalms was unique in its conception and required hard, dedicated work to pull off. Perhaps that is why these kinds of radio events are rare. Of course their rarity could also be due to a lack of imagination on the part of the corporate media that dominates the airwaves. The project brought together over 300 people, including musicians and technicians, in six major European cities. These musicians and technicians, separated into groups at the six locations, could not see or hear what was happening at the other locations. Yet together they performed as a unified ensemble to realize Curran’s score. In commemorating a dark and destructive moment of human history, Curran demonstrated our creative possibilities for international artistic and technological collaboration.
Curran organized the concert in the fall of 1987 at a meeting in Rome attended by producers from each of the six radio stations: Danmarks Radio, Denmark; Hessischer Rundfunk, Germany; ORF, Austria; Radio France; RAI, Italy; and VPRO, Holland. The RAI in Rome was chosen as the main technical center and HQ, probably because it was the facility closest to the composer. Alvin wrote the music between May and September at his home in Poggidoro, about an hour’s drive outside the city.
The score was written for six complementary ensembles – one group at each station in each country. Each ensemble consisted of a mixed chorus (16–32 voices), a quartet of strings or winds, a percussionist and an accordionist. The six groups were conducted independently of one another. Yet even though they were separated by large distances, the ensembles played in time together: each conductor listened to a recorded time track that kept them all synchronized.
Besides the live music, pre-recorded tapes were also used. These tapes were filled with the sounds of Jewish life. Among the sounds heard were the ancient shofar (a ritual ram's horn that has been a mainstay in Curran’s music) and recordings of Yemenite Jews praying at Jerusalem’s Western Wall (the “Wailing” Wall). Other sounds on the tape included children from a Roman Jewish orphanage and recordings of famous Eastern European cantors sourced from various sound archives. Curran even included sounds from his family: he recorded his young niece singing her Bat Mitzvah prayers and his father singing in Yiddish at a family get-together. Birds, trains, and ship horns make appearances. But throughout it all is the sound of breaking glass. Meanwhile the live chorus sings fragments from the Renaissance Jewish composers Salomone Rossi of Italy and Caceres of the famous Portuguese synagogue in Amsterdam. Curran also used choral fragments from versions of the Jewish liturgy composed by Lewandowski and Sulzer in the 19th century.
Crystal Psalms is structured in two long, contiguous sections of 24 and 29 minutes. In the first there is a ton of percussion created from fallen and thrown objects. Amidst all these heavy sounds Curran used an 18-voice polyphonic structure to weave an increasingly dense texture from the musical fragments carried by each "voice". As the fragments repeat, the weave is drawn ever closer together.
In the second part, elements from the pre-recorded tape are more apparent. The music moves from one moment to the next, one location or place in time, before jumping to something else. Curran says, “Here tonal chords are anchored to nothing, innocent children recite their lessons in the midst of raging international chaos.” Idling cars and Yiddish lullabies are separated by breaking glass, all undergirded by moments on the accordion, organ and fiddles. A familiar melody will quickly disappear when blasted by noise. A solemn choir sings amidst the sound of someone shuffling through the debris. Fog horns drift in and out as telephones go unanswered. The listener with an ear for classical music will recognize bits of Verdi’s “Va Pensiero” turned into a menacing loop. At the end of it all comes the cawing of crows, a murder of crows come to feed on the destruction.
Curran writes of his piece that “There is no guiding text other than the mysterious recurring sounds of the Hebrew alphabet and the recitation of disconnected numbers in German, so the listeners, like the musicians, are left to navigate in a sea of structured disorder with nothing but blind faith and the clothes on their backs -- survivors of raw sonic history.”
The radio broadcast was for Curran a very special moment. This experience of human artistic and technological collaboration existed for him alongside the memory of the inhuman pogrom memorialized on its 50th anniversary. Curran says, “By focusing on this almost incomprehensible moment in our recent history, I do not intend to offer yet another lesson on the Holocaust, but simply wish to make a clear personal musical statement and to solicit a conscious act of remembering -- remembering not only this moment of unparalleled human madness of fifty years ago, but of all crimes against humanity anywhere anytime. Without remembering there is no learning; without learning no remembering. And without remembering and learning there is no survival.”
The radio concert was a one-off event, never to be performed live again. However, recordings from each of the stations involved were made, and in 1991 Alvin remixed these into an album. Writing about all of this I’m reminded of something the American folk-singer and storyteller Utah Phillips said about memory: “…the long memory is the most radical idea in this country. It is the loss of that long memory which deprives our people of that connective flow of thoughts and events that clarifies our vision, not of where we're going, but where we want to go.”
Let us remember then the stories in history, personal or global, that we would do well not to repeat, and those other stories where people worked together towards a common good. Just as this day is the product of all our past actions, so tomorrow will be built on what we do today.
Crystal Psalms, New Albion Records, 1994
This article originally appeared in the March issue of the Q-Fiver, the newsletter of the Oh-Ky-In Amateur Radio Society.
Before Sirius XM launched, St. GIGA existed in an orbit of its own, an orbit that broadcast its content in harmony with the movement of the Pacific tides. The Japanese company became the first satellite digital audio broadcasting corporation, formed as a subsidiary of the satellite TV company WOWOW. Transmission tests commenced on November 30, 1990, and regular transmissions started at the end of March, 1991. The company adopted a commercial-free broadcasting model, but to listen to St. GIGA you needed a subscription. The subscription was worth the money though, because the soothing content of their programs was like nothing else before or since. With a receiver set to 11.8042 GHz, the pioneering satellite radio station known as St. GIGA took listeners on a gentle journey of ebb and flow.
When parent company WOWOW decided to expand into the realm of radio they knew they would need some help. As business executives they were all in agreement that they weren’t cool and knew nothing about music. To come up with the name they polled everyday “persons on the street,” and St. GIGA was selected. Yet they remained in the dark about what to put on the air. They needed a creative director to shape the content of the satellite service, and the searchlight landed on Hiroshi Yokoi. Yokoi had just worked on the popular J-Wave FM station, founded in 1988 and still broadcasting today on 81.3 MHz in Tokyo.
Yokoi was considered an innovator in the field, as was J-Wave. J-Wave's slogan is "The Best Music on the Planet," and its programmers aren’t mere DJs: they are known as "navigators," or nabigētā, and they guide listeners on voyages of discovery. J-Wave’s music could be considered the equivalent of Top 40, but one of the station's innovations was the use of hundreds of different jingles to separate programs from commercials. These jingles are played at the same decibel level and are variations on a single melody, giving the station a unique sonic signature and identity. In 1994 J-Wave also began to be simulcast via satellite, and some of its programs became syndicated on community radio stations throughout Japan. Due to his work on J-Wave, the execs at WOWOW thought Yokoi would be a good fit for St. GIGA.
Soon after he signed on, Yokoi crafted a radical and artistic proposal for the station concept. The men in suits who controlled the money reacted with skepticism. Yet after a few months of traditional broadcasting the executives adopted Yokoi's concept for a probationary period. Later he was given full discretion to shape the programming and future course of St. GIGA.
What Yokoi had in mind was a “Tide of Sound.” The concept was quite revolutionary. To tie in with it, the station motto became, "I'm here. — I'm glad you're there. — We are St.GIGA." This was a tip of the hat to Kurt Vonnegut's science fiction novel The Sirens of Titan, in which the alien life forms called harmoniums communicate using only the phrases "Here I am" and "So glad you are." Yokoi was also influenced by writer Kevin W. Kelley's book The Home Planet, a collection of color photographs taken in space capturing the beauty of planet earth, paired with personal accounts from astronauts and cosmonauts of the experience of seeing earth from space. These two influences formed a communication methodology that broke new ground in the world of broadcasting.
As part of Yokoi’s concept the St. GIGA broadcasts followed no externally fixed program schedule. It was not based on a solar calendar week, where a certain show would recur every Sunday at 7 PM. Instead Yokoi had the genius to base the transmissions around a tide table. Themes for broadcasts were based on a cyclical motif and tried to approximate the current tidal cycle according to the Rule of Twelfths throughout a 24-hour day.
The Rule of Twelfths is a stepwise approximation of a sine curve, used as a rule of thumb for estimating a quantity that rises and falls smoothly over a known cycle. It has typically been used for estimating the height of the tide: the rate of flow increases smoothly to a maximum at the halfway point between low and high tide, before smoothly decreasing to zero again. The rule says that over the six hours between low and high tide, the water level changes by 1/12 of its total range in the first hour, 2/12 in the second, 3/12 in the third, 3/12 in the fourth, 2/12 in the fifth, and 1/12 in the sixth. The same rule is also used to estimate the change in day length over the seasons.
Tidal changes are non-linear. This means that in the first hours of a tidal shift the tide might not rise or fall very much, yet as the cycle progresses the rising or falling accelerates through the mid hours. The Rule of Twelfths applies to the semidiurnal tide – a tide having two high waters and two low waters during a tidal day – which is exactly what happens in most locations. The semidiurnal cycle lasts 12 hours and 25.2 minutes, running from low tide up to high tide and back down to low again. The full and new moons also affect the tide, as do the first and third quarter moons.
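The hour-by-hour fractions described above can be put into a few lines of code. This is a minimal sketch in Python; the function name and the 3.6-metre tidal range in the example are illustrative, not taken from any tide table.

```python
# Rule of Twelfths: over the six hours from low to high tide, the water
# rises by 1/12, 2/12, 3/12, 3/12, 2/12, 1/12 of the total range in each
# successive hour -- a stepwise approximation of a sine curve.
TWELFTHS = [1, 2, 3, 3, 2, 1]

def tide_height(hours_after_low, low=0.0, high=3.6):
    """Approximate water height (same units as low/high) at a given
    number of hours after low tide, where 0 <= hours_after_low <= 6."""
    rng = high - low
    risen = 0.0
    remaining = hours_after_low
    for step in TWELFTHS:
        portion = min(remaining, 1.0)   # fraction of this hour elapsed
        risen += step / 12 * portion * rng
        remaining -= portion
        if remaining <= 0:
            break
    return low + risen

# Three hours after low tide the water is halfway up:
# 1/12 + 2/12 + 3/12 = 6/12 of the range.
print(tide_height(3.0, low=0.0, high=3.6))  # roughly 1.8 m
```

Running the full six hours accumulates all twelve twelfths, so the function lands exactly on the high-water mark, just as the rule intends.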
The transmissions of St. GIGA followed this pattern in a unique way, mimicking the swell of the tides and the course of the moon. In the “Tide of Sound” broadcasting process, the end of one show and the beginning of another was not demarcated or clearly defined the way listeners are used to hearing on the radio. Instead, gradually, following the Rule of Twelfths, songs of one genre would flow into and intersperse with material from the prior genre until the new genre, just like a high or low ocean tide, became predominant. Yokoi designed it this way so that listeners could relax into waves of sound "like a baby sleeps in the womb." These "Tide of Sounds" broadcasts operated under the awesome principle of "No Commercials, No DJs, No News Broadcasts, No Talk." If only more radio stations would follow this principle and ethic. Of course this absence of commercials and talk was only possible because the service was subscription based.
Besides the timing of the broadcasts, the content was also informed by St. GIGA’s tidal and lunar schedule. It was heavy on ambient music, smooth jazz and field recordings from the natural world. One of the programs, called “Tide Table,” featured live environmental sound broadcasts of waves crashing on the ocean shore. The "Tide of Sounds" broadcasts often featured high-quality digital recordings of nature sounds accompanied by spoken word narration from the "Voice." The part of the "Voice" was played by a number of notable Japanese poets and actors, including Ryo Michiko. "Voice" performances often consisted of all-new poetry composed specifically for the show.
Ambient music, environmental sound recordings and poetry? It sounds perfect. I wonder what other funding models might be developed to breathe new life into this kind of innovative broadcast format. It seems like this mode could be set up and used by low-power community FM or AM stations, or by Part 15 compliant hobby broadcasting stations.
Due to the popularity of the environmental sound recordings and the overall library of material they played, St. GIGA was able to fund field recording trips to collect “biomusic,” a term that covers bird song, whale song, dolphin calls, and the sounds of other animals and plants in their natural landscape. Biomusic recording artists were sent to places such as England, the Canary Islands, Mykonos, Venice, Bali, Tahiti, Martinique, Hanson Island (BC), and Maui, all to capture, create and transmit new worlds of sound for the listeners.
Ambient musicians were also commissioned to create original albums and works for the satellite station. Kim Cascone, under his Heavenly Music Corporation moniker, made the album Lunar Phase, released in 1995, for broadcast from the bird. The album includes the song “St. Giga.” It was from listening to this record that I learned of St. GIGA in the first place and went on to track down some of the recordings from the station that fans have made available on YouTube. The Heavenly Music Corporation was a perfect fit for St. GIGA because the music is both heavenly and, in this instance, came down from the heavens.
The satellite gained something of a cult following and fanzines such as BSFan Journal and G-Mania sprang up to write about the music and report on the allied ambient, mood, and electronic scene in Japan.
St.GIGA also released CDs of its music on its own label and on the popular American ambient label Hearts of Space (also a fabulous late night radio show). A number of thematic books were published at the high tide of the satellite's popularity, including the multi-volume St.GIGA Stylebook and Current of Dreams: An Introduction to St.GIGA Programming, which contained the full text of Yokoi's original concept proposal. Later books included Trends in Dreaming - St.GIGA's Hiroshi Yokoi's General Office.
Despite all this, by the mid '90s the company was in financial trouble. The popularity of the satellite had peaked and was starting to flow back into the ocean. The market for ambient and related forms of music was not as strong as initially anticipated. Plus there was the pesky problem of a financial recession in Japan, and the related issue of strapped consumers not wanting to invest in the expensive antennas and tuners needed to pick up the broadcasts. So St. GIGA formed a partnership with Nintendo. Because that’s what you do if you are a popular Japanese satellite radio company in financial trouble. Nintendo became the largest shareholder in the company, and with their influence the Tide of Sound broadcasts were cut back in order to bring some of Nintendo's own programming on board.
With the video game company kicking them some dough, they started to broadcast digitally encoded games to owners of the Super Famicom system between the spring of 1995 and the summer of 2000. The Super Famicom was the Japanese version of the Super Nintendo Entertainment System. Nintendo made an accessory component to work with the Super Famicom called the Satellaview, a satellite modem never released in America or Europe. The Satellaview allowed users to connect to St.GIGA. Game data was broadcast during a special segment called Super Famicom Hour, during which people could download games to the Satellaview's internal memory or an optional Memory Pak. Super Famicom Hour actually lasted from noon to 2 AM, so it took away a good chunk of time from St. GIGA’s original programming.
Unlike services offered by competitors, the Satellaview did not have online multiplayer capabilities. This was due to the one-way nature of commercial satellite radio. Despite this, limited amounts of data could be sent back through the radio connection. The service featured numerous quizzes and other competitions which required players to send their answers back up to the bird.
Another new service related to the games was called SoundLink. CD-quality sound was streamed through the St. GIGA satellite connection to accompany real-time play of video games such as the three versions of BS Zelda. SoundLink included a fully-voiced "narrator" who would guide the players and give helpful hints and advice throughout the game. Because SoundLink required a live broadcast of music with a voice track, some games could only be played at the time of transmission. After the last broadcast of a game's SoundLink data, that game could never be played again. Some time-sensitive games were split into separate transmissions on different days to allow for the play of longer games.
Because the cartridges were rewritable, because SoundLink broadcasts were streamed live rather than downloaded during the noon-to-2 AM Super Famicom “Hour” time slot, and because the games have never been rereleased by Nintendo, they have become extremely rare. Yet some can be played in partial emulation, thanks to an extreme level of devotion and skill in this corner of high-geekdom. A subculture of collectors and game enthusiasts has engaged in electronic archaeology, extracting old data from heavily rewritten data cartridges in order to reproduce these games via emulation.
SoundLink also featured a type of enhanced magazine. This functioned as a mashup of a radio drama mixed with images and text. Unlike all other Satellaview content, SoundLink content was only available for an additional fee of ¥600 a month.
As St.GIGA’s tide continued to ebb, it broadcast talk shows and entertainment news programs about celebrity idols, as well as a variety show. The shows were slotted to match the schedules of video game and pop culture addicted students, as the station's audience had shifted radically, much to the disappointment of its original devotees, the ambient music fans. Before long the station had ceased transmission of all "Time & Tide" programs, including the much-admired Tidal Currents show. Fan publications such as BSFan Journal were replaced by ‘zines that focused on the video game content. Towards the end of its life St.GIGA focused all of its energies on Satellaview transmissions.
Until 1999 the Satellaview service was controlled by both St. GIGA and Nintendo. After 1999 St. GIGA was the sole controller of the service, as Nintendo broke its partnership with the radio station due to a dispute. However, the service was not turned off until 2000. By 2001 St. GIGA was nearly bankrupt.
Around this time Yokoi had also been stricken with cancer. After his death in March of 2003, St.GIGA was rechristened Club COSMO under the leadership of Shinichi Matsuo. Broadcasts continued until October 1, when the company was forced to sell its licensing rights to World Independent Networks Japan Inc. (WINJ). The company, by then renamed WireBee, immediately began bankruptcy procedures, and all recording instruments and 241 tapes of nature sounds were auctioned off at open market for a total divided sale price of ¥5 million.
St. GIGA had reached low tide. It is my hope that it and Hiroshi Yokoi, the man who made it so brilliant, remain in orbit in a heavenly and oceanic musical realm.
Even in the strange and eccentric world of the ham radio operator, Fred Judd G2BCX (1914–1992) was something of an outlier and maverick. Fred designed two well-known antennas, the Slim Jim and the ZL Special. Both of these are now antenna standards. Fred was also an advocate of early British electronic music, inventing or modifying the tools he needed to make this adventurous music along the way. G2BCX was the quintessential tinkerer; a man who loved audio, radio, and the new possibilities for music being opened up by the careful application of capacitors.
As a radar technician in the armed forces during WWII, Fred had the opportunity to develop his electrical aptitude and became a full-blown engineer. After the war he found a spot working for the Kelvin Hughes company, where he researched and developed marine radar devices. To this day Kelvin Hughes continues to create navigation and surveillance systems.
Fred was a man of strong ambition, and the day job in electronics wasn’t enough to keep him satisfied. As part of his side hustle he wrote articles for hobbyist magazines on radio and the new remote control models coming to market. The first of his 11 published books hit the shelves in 1954. When Amateur Tape Recording (ATR) magazine was launched in 1959 he joined the staff as technical editor and wrote on all kinds of topics connected to tape, electronics and hi-fi.
The Slim Jim antenna for which G2BCX remains famous among hams is itself a variation on the J-pole. The J-pole is, at the time of this writing, a 110-year-old design, invented by Hans Beggerow in 1909 for use on Zeppelin airships. In that regard the J-pole, commonly made of copper, can also be considered a steampunk antenna. Trailed behind the airship, it consisted of a single element: a half-wavelength radiator with a quarter-wave parallel tuning stub for the feedline. By 1936 this design had been refined into the J configuration, and in 1943 it was given the name J antenna; it is now just called a J-pole.
Fred introduced his J-pole variant in 1978. He derived the name from its slim profile and the J type matching stub (J Integrated Matching). It has performance and characteristics similar to a simple or folded half-wave antenna, and essentially identical to the traditional J-pole construction. Judd found that the Slim Jim produces a lower takeoff angle and better electrical performance than a 5/8 wavelength ground plane antenna. Slim Jim antennas made from ladder transmission line use the existing parallel conductor for the folded dipole element.
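The half-wave radiator plus quarter-wave stub layout described above lends itself to a quick back-of-the-envelope calculator. The sketch below is only a starting point under assumed values: the 0.95 velocity factor is a typical figure for bare wire, not a number from Judd's design notes, and any real build would be trimmed to length with an SWR meter.

```python
# Rough starting dimensions for a Slim Jim / J-pole style antenna:
# a half-wave radiating section fed through a quarter-wave matching stub.
# The velocity factor of 0.95 is an assumed typical value for bare wire;
# insulated wire or ladder line would need a lower figure.
def slim_jim_dimensions(freq_mhz, velocity_factor=0.95):
    wavelength_m = 299.792458 / freq_mhz       # free-space wavelength
    electrical = wavelength_m * velocity_factor
    return {
        "radiator_m": electrical / 2,          # half-wave radiating section
        "matching_stub_m": electrical / 4,     # quarter-wave matching stub
        "overall_m": electrical * 3 / 4,       # total folded length
    }

# Example: the 2 m amateur band, around 145 MHz
dims = slim_jim_dimensions(145.0)
for part, length in dims.items():
    print(f"{part}: {length:.3f} m")
```

At 145 MHz this gives a radiating section of just under a metre, which matches the familiar size of a 2 m Slim Jim hanging in many a ham shack.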
The ZL Special antenna came from another variant Judd made, this time on the 2-element horizontal phased array created by George Prichard ZL3MH – hence the name ZL Special, in tribute to Prichard’s work. L.B. Cebik, W4RNL, has written a detailed analysis of this design at: http://www.antentop.org/w4rnl.001/mu5a.html.
It can be presumed that when Fred wasn’t at work, or on the air as a ham, he was engaged in another aspect of his electronics hobby: making circuits sing. In 1961 he wrote one of the world's first how-to books on making electronic music, titled Electronic Music and Musique Concrete. It included circuit diagrams alongside practical do-it-yourself tips. (A copy of this tome is available from the Public Library of Cincinnati, along with his Radio and Electronic Hobbies book.)
Around this time he also promoted the creation of electronic music via lectures and demonstrations at amateur tape recording clubs all around Britain. As an editor and writer for Amateur Tape Recording magazine he had access to these clubs and lots of street cred within them. Fred started putting out 7” records of electronic music which were made available through the magazine. Judd was also the editor of Practical Electronics magazine. Chris Carter was an avid reader of both of these magazines and spent time building many of the circuits Judd published. Carter went on to be a founding member of Throbbing Gristle, the first industrial music band, and continued to innovate in electronic music with his wife Cosey Fanni Tutti as Chris & Cosey and later as Carter Tutti.
As any sci-fi movie or old-time radio show buff will know, one of the things electronic music is perfect for is making sound effects, and Fred became adept at making his own. Have you ever flipped around on the tube and come across the strange sci-fi puppet show Space Patrol? Broadcast in 1963 on the ITV network, it was the first show on British television to have a composed electronic music soundtrack running throughout the whole series. Fred made those sounds himself using the techniques of tape manipulation, loops and tone generators in his home studio in London.
The Castle record label and its sister label Contrast issued a range of sound effects discs that he made in his studio, including 3 discs of electronic music. These tracks were later issued by library label Studio G, who specialized in providing stock music and sounds, on the Electronic Age album.
Fred also prototyped and built his own synthesizer. This simple voltage controlled, keyboard-operated unit was used to generate, shape and switch electronic sounds. The feat was small but impressive as it predated the Synket, Moog and Buchla synths.
Fred was also interested in the visualization of electronic sounds. One can imagine he knew his way around an oscilloscope and other test equipment. His tinkering in this area led to his Chromasonics system. By running a pulse generator and amplifier into a modified black and white TV with a high-speed color scanning wheel placed in front of the screen, Judd was able to make trippy abstract patterns that moved in accordance with the sound input from oscillators or tape recordings. At the 1963 Audio Fair in London he demonstrated Chromasonics to much acclaim, but despite interest from the electronics firm Stuzzi it never reached commercial development.
From the late 1970s Judd continued to operate as a ham from his home in Cantley, Norfolk. Towards the end of his life, he built several detailed reconstructions of early electrical devices including a Wimshurst machine and Edison phonograph. He was honoured by the University of East Anglia for constructing a working replica of apparatus used by Heinrich Hertz, but it seems that none of this equipment, the Chromasonics apparatus or his experimental music-making machinery has survived. He became a silent key in 1992.
In 2010 all of his remaining original quarter-inch tapes were cataloged and deposited with the British Library Sound Archive. In 2011 Ian Helliwell made a documentary on Judd called Practical Electronica. A retrospective album gathering together as much of his experimental music as could be located, titled Electronics Without Tears, was released by the Public Information label. It also contained an official biography of Judd written by Helliwell. It is available from their bandcamp page at: https://publicinformation.bandcamp.com/album/electronics-without-tears.
Here is a short bibliography of books by Fred C. Judd:
Radio Control for Model Ships, Boats and Aircraft. London: Data Publications, 1954.
Electronic Music and Musique Concrète. London: N. Spearman, 1961.
Tape Recording for Everyone. Blackie, 1962.
Radio and Electronic Hobbies. London: Museum Press, 1963.
Circuits for Audio and Tape Recording. Haymarket Press, 1966.
Electronics in Music. London: Spearman, 1972.
Amateur Radio. Newnes Technical Books, 1980.
Two-Metre Antenna Handbook. Newnes Technical Books, 1980.
CB Radio. Newnes Technical Books, 1982.
Radio Wave Propagation (HF Bands). London: Heinemann, 1987.
Electronics Without Tears. Public Information (album; biography by Ian Helliwell).
This article originally appeared in the June 2019 issue of the Q-Fiver. (All the articles in the Radiophonic Laboratory series have appeared first in various issues of the Q-Fiver.)
Magnetic Lemniscate: A Brief History of the Tape Loop
Sometimes, if the day has been hectic, when I get home I just want to kick back, relax and put on a record. Or a cassette. I still have hundreds of hours of music stored on tape, one of the finest mediums of storage ever invented. This privilege of being able to listen to recorded audio is unique in human history, and my ability to soak in the musical glow from my hi-fi system with my feet propped up and my head in my hands was built on the sweat of many researchers. The phonograph, loudspeaker and microphones all proclaimed that the age of audio had arrived. The promises made by this tech only cracked the door ajar. There was still a bolt in place on the other side barring further entry. The invention of magnetic tape recording proved to be the golden skeleton key responsible for unlocking the door to the studio of the audio engineer, and from there many other rooms in the mansion of new media.
Inside the tape studio it is possible to cut. Splice. Rewind. Fast forward. Edit. Create a new sequence for creative playback. The practice of recording and editing audio using magnetic tape was an obvious improvement over the previous electro-mechanical methods. The leap in audio fidelity alone was dramatic. Further, it allowed for new practices of editing. It allowed for repetition, a key aspect of music, and so the loop was born. Splice. Snip. Audio on magnetic tape had established itself as simply superior. The analog tape recorder made it possible to erase. Mistakes could be fixed at little cost by recording over a previous take, something not possible on the shellac- and vinyl-based medium of the phonograph. The edit turned into an art form, as tape had the advantage of being cut. Spliced, it could be joined back together in an endless profusion of edits. Music could be rearranged, deranged, or removed.
From 1950 onwards magnetic tape quickly became the standard medium for audio master recording in the music and broadcast radio industries. This led to the development of hi-fi stereo recordings for the domestic market. If the day has been hectic, just kick back with some Les Baxter or the exotica of Martin Denny and let it transport you away from the daily grind. Now in high fidelity, and turning at 33 1/3 rpm, longer songs and longer sounds meant more time to chill in the lounge. Sonically edited, the album now offered audio engineers the same plasticity of arrangement known to film directors. The new combinations available became mind-boggling and cinematic.
When I think of tape, I think primarily of its role in audio and video storage. I think of the way it revolutionized sound recording, reproduction and broadcasting. It allowed radio, which had always been broadcast live, to be recorded for later or repeated airing. I think of how I sat with a radio and its built-in cassette player to tape those late night radio shows, to be listened to again and again. But there was also data storage on tape. Remember tape drives? They were a key technology in early computer development, allowing unprecedented amounts of data to be recorded, stored for long periods of time, and rapidly accessed.
When I think of tape I think of iron oxide. It's on tape and it's also in your blood. It's the stuff responsible for giving blood that bright red color. It's the stuff that holds the memory of a recording on the tape, making it magnetic. The memory is in the blood. Iron oxide stores the genetic memory of music. Editing a tape splices the DNA of sound. Perhaps it is this magnetic resonance of the iron oxide, a shared connection with a vital and elemental force, that has given tape such a place of prominence in electronic music. Perhaps it was the way the tapes could be manipulated, slowed down, sped up, chopped up and put into new patterns, which made tape such a dream. This medium of preservation and creation is in the very blood of electronic music.
With the invention of the tape loop the dream of creating infinite music was realized. The use of the pause button had been put on hold. Tape loops are spools of magnetic tape used to create repetitive, rhythmic musical patterns or dense layers of sound when played on a tape recorder. Sound is recorded on a section of magnetic tape, and this tape is cut and spliced end-to-end, creating a circle that can be played back continuously. This is usually done on a reel-to-reel machine, though industrious lo-fi recording artists have been known to rig their own cassette tapes into loops. The loop originated with the musique concrète work of Pierre Schaeffer in the 1940s. He used the simultaneous playing of tape loops to create phrase patterns and rhythms. Musical experimentalists continued to explore the possibilities of this method through the 1950s and '60s. Devotees of the tape loop included Steve Reich, Terry Riley, Karlheinz Stockhausen and Brian Eno.
The medium is perfect for creating phase patterns, rhythms, textures, and timbres. When the speed of a loop is accelerated sufficiently, a sequence of events originally perceived as a rhythm is heard instead as a pitch. The variation of the rhythm in the original recording produces different timbres in the sped-up sound. Tape can also be slowed down, causing the music to drop in pitch and sounds to be stretched. Tape was also used to create echo systems. The first delay effects were made with tape loops improvised on reel-to-reel machines: by shortening or lengthening the loop of tape and adjusting the spacing between the record and playback heads, an engineer could create an echo whose time parameters could be adjusted. The delayed signal may either be played back multiple times, or fed back into the recording again, to create the sound of a repeating, decaying echo.
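That feedback-echo arrangement translates neatly into digital form. The sketch below is an illustration by analogy, not a model of any particular machine: a circular buffer stands in for the loop of tape travelling between the heads, and a feedback factor stands in for re-recording the delayed signal onto the tape (the delay length, feedback and mix values are arbitrary choices of mine).

```python
# A minimal sketch of a tape-style echo. The circular buffer plays the
# role of the tape loop; the feedback term re-records the delayed signal,
# producing a train of decaying repeats.

def tape_echo(signal, delay_samples, feedback=0.5, mix=0.5):
    """Apply a simple feedback delay to a list of samples."""
    buffer = [0.0] * delay_samples      # the "loop of tape"
    pos = 0                             # where the heads sit on the loop
    out = []
    for x in signal:
        delayed = buffer[pos]                 # playback head reads the old signal
        buffer[pos] = x + feedback * delayed  # record head writes input + echo
        out.append(x + mix * delayed)         # dry signal plus the echo
        pos = (pos + 1) % delay_samples       # the loop keeps turning
    return out

# A single impulse comes back as echoes halving in level each repeat:
echoes = tape_echo([1.0] + [0.0] * 9, delay_samples=3, feedback=0.5, mix=1.0)
```

Lengthening the loop (`delay_samples`) widens the gap between repeats, just as a longer stretch of tape between the heads would; raising `feedback` makes the echoes die away more slowly.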
Being the pioneer he was, Stockhausen made extensive use of loops in Gesang der Jünglinge (1955–56) and Kontakte (1958–60), and he used the technique for live performance in Solo (1965–66). Steve Reich was perhaps the composer who used the technique most extensively, specifically in his "phasing" pieces It's Gonna Rain (1965) and Come Out (1966).
In the realm of popular music it was used to great effect in the '60s and '70s. Think of the psychedelic music of the Beatles on the White Album, and of its use in the progressive rock and ambient genres. A standard loop on a standard reel-to-reel is at most a few seconds long. This is not enough for some composers. To create a longer loop, a standard practice was to use two reel-to-reels or, for even longer stretches of tape, to run them around mic stands, or even door knobs. Perhaps the best known album made with this technique was Brian Eno's Ambient 1: Music for Airports. This recording ushered in the vast and sprawling genre of ambient. In creating his 1978 landmark Eno reported that for one song, "one of the tape loops was seventy-nine feet long and the other eighty-three feet".
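To get a feel for what seventy-nine feet of tape means in time, a quick back-of-the-envelope calculation helps. The tape speed below is my assumption (15 inches per second is a common professional speed; the source doesn't say what speed Eno's machines ran at):

```python
# Rough arithmetic for how long a tape loop runs before it repeats.
# The 15 ips tape speed is an assumption, not a documented fact about
# Eno's studio setup.
INCHES_PER_FOOT = 12

def loop_period_seconds(length_feet, speed_ips=15):
    """Seconds before a loop of the given length comes back around."""
    return length_feet * INCHES_PER_FOOT / speed_ips

period_79 = loop_period_seconds(79)  # about 63 seconds
period_83 = loop_period_seconds(83)  # about 66 seconds
```

At that assumed speed the phrases on each loop would only recur roughly once a minute, and since the two loop lengths share no common multiple at that scale, the layers drift slowly in and out of alignment rather than locking into a repeating pattern.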
Enter William Basinski
Texas-born Basinski is a classically trained clarinetist who studied jazz saxophone and composition at North Texas State University in the late 1970s. In 1978, at the age of twenty, he became inspired by the techniques of Steve Reich and Brian Eno and started developing his own musical vocabulary using old reel-to-reel tape decks. Basinski experimented with short looped melodies. When played against themselves, the loops created a pleasant feedback. Working with this discovery he created his singular meditative, melancholy style within the drone and ambient genres.
Basinski's first release was Shortwave Music. First created in 1983, it wasn't released until 1998, when Carsten Nicolai's Raster-Noton label put it out in a small vinyl edition. It was followed by his shortwave magnum opus The River. Basinski writes, "As a young composer in the early 1980’s I was experimenting with tape loops: recording and mixing them with sounds coming from the airwaves. The idea was to capture music out of the ether. In NYC, there was a very powerful radio station, I can’t remember the call letters, but it was the station that played American popular standards….that is, the ‘1001 Strings’ smoothed out, de-syncopated versions of the American popular standards: what was commonly referred to then as Muzak, or ‘elevator music’. In those days, there was no Prozac, only Muzak to smooth out the seams and ease the tension of hectic neurotic life in the mid-late 20th century. At any rate, this station was so powerful, it could be picked up by simply running a wire across the floor, so frequently I was picking up background transmissions in my recordings. Since it was inevitable and I had no choice in the matter, I began experimenting with recording off the radio small loops of string intros, outros and interludes randomly in my primitive studio in Brooklyn. I would then slow them down a couple of speeds, as if peering into a microscope, to see what I could discover beneath the glossy surface. Frequently, these loops held great depth and melancholy. This appealed to me greatly and I created a vast archive of these loops to later experiment with. I am still using this archive to this day.”
Having this library of ‘found’ material became very important to his work, as it became the basis for many future albums and releases. Something else he found at a thrift store was also important: the machine that would provide his radio static. “I bought a wonderful old Hallicrafters shortwave radio at the Goodwill around the corner and began listening to that. The sounds coming from this magical device were awesome. The idea that one could hear transmissions from ‘behind the Iron Curtain’ or Japan or London was thrilling and mysterious. The waves of shifting static and interstellar particle showers were mind-boggling to a young man who grew up in the shadow of the space race.
I was having a problem with a 60 Hz ground loop hum in my recordings. I had no idea what was causing it at the time…probably our fluorescent lights…just that it bothered me and I couldn’t figure out how to get rid of it. So I decided to try to mask it with the shortwave radio static. I would set the Hallicrafters on a pleasing in-between-stations setting teeming with showers of sparkling static and record live while mixing my loops. The results were extraordinary. The Hallicrafters would sometimes shift focus as if responding to the music coming from the loops. Occasionally a distant station from the Middle East perhaps, would slide into range just for a moment like a lingering column of cigarette smoke swirling slowly in a spotlight. I was very encouraged and excited. I didn’t know if I was really a composer, or if this was music, but to me it was magic! I loved it and was in my laboratory every night after work, like Dr. Frankenstein, just waiting to see what fascinating and strange sounds would bubble up next. The results of this period of experimentation were the Shortwave Music pieces and ultimately, the 90 minute masterwork of the series, The River. It would be over 25 years before these pieces would be released to the public."
Even though it wasn’t until the late '90s that his music saw release on a label, Basinski remained very active in the NYC music scene. He was a member of many bands including the Gretchen Langheld Ensemble and House Afire. In 1989, he opened his own performance space, "Arcadia," at 118 N. 11th Street. In the 1990s he helped put together many intimate underground shows at his space for artists like Diamanda Galás, Rasputina, The Murmurs, and Antony, as well as his own experimental electronic/improvisation band, Life on Mars. In 2000, he made a film titled Fountain with artists James Elaine and Roger Justice.
In August and September 2001 Basinski started work on what would become his most recognizable piece, the epic four-volume album The Disintegration Loops. The album is made up of old tape loops whose quality had degraded. In an attempt to salvage these loops by recording them onto a digital format, the magnetic iron oxide coating on the tapes slowly crumbled. With each pass of the tape over the head on the reel-to-reel deck more and more of the iron oxide fell off. The loops were allowed to play for extended periods as they deteriorated further, with increasing gaps and cracks and spaces in the music. These sounds were treated with a spatializing reverb effect to further enhance their haunting aura. Basinski was able to capture the sound of their disintegration, and the results were beautiful and stunning. The disintegration of these tapes was made all the more poignant as he finished his work on them on the morning of 9/11. Basinski sat on the roof of his apartment building in Brooklyn with friends listening to the finished project as the World Trade Center towers collapsed. The artwork that accompanies the album features stills of footage he shot of the NYC skyline in the aftermath of the attack. In September 2012, the record label Temporary Residence reissued the entire Disintegration Loops series as a 9xLP box set, marking the project's 10-year anniversary as well as its impending induction into the National September 11 Memorial & Museum.
The creation of the Disintegration Loops was something of an accident, timestamped by their own destruction and the terrible tragedy of 9/11. The four albums are perfect as a reminder of the beauty to be found in imperfection, as a reminder of our own transience, of our own ultimate disintegration, of how the iron oxide in our blood will once again return to dust.
Live Wires: A History of Electronic Music by Daniel Warner, Reaktion Books, London, 2017.
William Basinski's website: http://www.mmlxii.com
Holger Czukay was another musician fascinated with the sounds of shortwave listening. He brought his love of radio and communications technology with him when he helped found the influential krautrock band Can in 1968. Shortwave listening continued to inform Czukay's musical practice in his solo and collaborative works later in his career. It all got started when he worked at a radio shop as a teenager.
Holger had been born in the Free City of Danzig in 1938, the year before the outbreak of World War II. In the aftermath of the war his family was expelled from the city when the Allies dissolved its status as a free city-state and made it part of Poland. Growing up in those bleak times his formal primary education was limited, but he made up for it when he found work at a radio repair shop. He had already developed an interest in music, and one of his ideas was to become a conductor, but fate had other plans for him. Working with the radios day in and day out he developed a fondness for broadcast radio. In particular he found unique aural qualities in the static and grainy washes of the radio waves coming in across the shortwave bands. At the shop he also became familiar with basic electrical repair work and rudimentary engineering. All of this would serve him well when building the studio for Can. In his work with the band he not only played bass and other instruments but acted as the chief audio engineer.
He spoke about this time, and his fascination with the mystery of electricity, in an interview. “When I was fourteen or fifteen years old, I didn't know if I wanted to become a technician or a musician. And when you are so young you think the one has to exclude the other. So in the very beginning I thought I am sort of a musical wonder-child, and want to become a conductor and that was very very serious, but there was no chance to get educated as I was a refugee after the war. And then, suddenly, electricity. Electricity was such a fascinating thing - it was something. And then I became the boy in a shop who carries the radios to repair them and carries them back again. That was so-called three-dimensional radio, before stereo. There was one front speaker in the radio and at the side, there were two treble speakers which gave an image of spatial depth. I must say these radios sounded fantastic.”
In 1963, at the age of twenty-five, Czukay decided to pursue the musical side of his vocation and began studying under Karlheinz Stockhausen at the Cologne Courses for New Music. This is where he met Irmin Schmidt, another founding member of Can, who was also a student of Stockhausen's. Can itself became one of the guiding forces of Krautrock, or Kosmische music as it was also called, a broad style of experimental rock music developed in Germany in the late '60s. Krautrock was for the most part divorced from the traditional blues and rock and roll influences of the British and American rock music scenes of the time. It featured more electronic elements and contributed to the further development of electronic and ambient music, as well as the birth of post-punk, alternative rock and New Age music. Stockhausen himself could be thought of as one of its chief instigators, a kind of godfather of the genre. This was due not only to his influence as a teacher of German musicians, but to his pioneering work with the raw elements of electronic music itself at the WDR studios.
Eccentric British rock musician and author Julian Cope discusses the importance of Stockhausen's composition Hymnen in his book Krautrock Sampler. He considered that piece in particular pivotal to the whole Krautrock movement. Its release had “repercussions all over W. Germany, and not least in the heads of young artists. It was a huge 113 minute piece, subtitled ‘anthems for electronic and concrete sounds’. Hymnen was divided up into four LP sides, titled Region I, Region II, Region III and Region IV.” In a previous column I discussed this piece of music as an early attempt at creating ‘world music’. With its sounds of shortwave receivers and electronics it plays anthems from various countries in an attempt to unify them. What Stockhausen did with the German anthem, ‘Deutschland, Deutschland Über Alles’, had a liberating effect on young Germans, who had grown up under the shadow of the worst kind of nationalism. Cope writes of the German public's reaction: “The left-wing didn’t see the funny side at all and accused him of appealing to the basest German feelings, whilst the right-wing hated him for vilifying their pride and joy, and letting the Europeans laugh at them. Stockhausen had just returned from six months at the University of California, where he had lectured on experimental music. Among those at his seminars were the Grateful Dead’s Jerry Garcia and Phil Lesh, Grace Slick of Jefferson Airplane and many other psychedelic musicians. Far from snubbing the new music Stockhausen was seen at a Jefferson Airplane show at the Fillmore West and was quoted as saying that the music ‘…really blows my mind.’ So whilst the young German artists loved Stockhausen for embracing their own rock’n’roll culture, they doubly loved him for what they recognized as the beginning of a freeing of all German symbols.
By reducing ‘Deutschland, Deutschland Über Alles’ to its minimum possible length he had codified it…Stockhausen had unconsciously defused a symbol of oppression, and so enabled the people to have it back.”
Czukay’s time studying with Stockhausen was as important to the development of Krautrock as was Hymnen itself. In fact, while Stockhausen was working on Hymnen at the WDR studio during the day, Holger Czukay and the other members of a pre-Can group, the Technical Space Composer's Crew, would go in and use the equipment at night to record their own album Canaxis. In the piece ‘Boat Woman Song’ some of Czukay’s early pioneering use of sampling can be heard. The proto-ambient pieces of music on this record were painstakingly assembled from tape loops and segments of a traditional Vietnamese folk song. In an interview Czukay spoke of the experience. “When Stockhausen left for home, we had a second key and went in and switched everything on. We went in and Canaxis was produced in one night. In one night the main song ‘Boat Woman Song’ was done. I prepared myself at night at home, so I knew exactly what I wanted to do, so in four hours the whole thing was done.” David Johnson helped Czukay and Rolf Dammers engineer the album. “He knew the studio a bit better than me. He was engineering a bit, switching on stuff, copying from one machine to another…and that was okay. In four hours the job was done.” The music on Canaxis is eerie and beautiful and haunting. It is both a part of this world, but also not of it. It seems as if it has come to us from beyond, and some fifty years later it still sounds fresh, as all timeless music does.
Stockhausen influenced Czukay in other ways. It hadn’t originally been Czukay’s intention to become a rock musician. He was more interested in classical music, which he thought was the best, with a definite leaning towards its avant-garde. “Therefore I went to Stockhausen as he was the most interesting person. Very radical in his thoughts. With the invention of electronic music he could replace all other musicians suddenly: that was not only an experiment; that was a revolution! I thought that is the right man, yeah? So I studied with him for about three years. Until I finally said, if a bird is ready to fly, he leaves his nest and that is what I have done.”
After leaving the nest Holger became a music teacher in his own right as a way to make a living. Later he was able to work full time as a musician because, as he often joked, he was married to a rich woman. Teachers always learn from their students though, and his were teaching him about the rock and pop music of the time, playing him records by Jimi Hendrix and the Rolling Stones. The Velvet Underground and Pink Floyd stood out to him, as did the Beatles' song 'I Am the Walrus'. Czukay fell in love with that masterpiece of psychedelic pop. In particular he loved the way bursts of AM static and the sound of tuning between stations had been used for musical effect at the end of the cut.
All of these influences and elements would fuse together in his work with Can, a project begun while he was still a teacher. Irmin Schmidt's mark on the band was equally massive, and he was just as steeped, if not more so, in the 20th century avant-garde, but exploring his contribution is beyond the scope of this article. For most of his time in the band, Czukay played bass, but toward the end he gave up that instrument altogether in favor of a shortwave radio. He spoke about Stockhausen's influence in making this switch.
“A shortwave radio is just basically an unpredictable synthesizer. You don’t know what it’s going to bring from one moment to the next. It surprises you all the time and you have to react spontaneously. The idea came from Stockhausen again. He made a piece called ‘Short Wave’ [‘Kurzwellen’]. And I could hear that the musicians were searching for music, for stations or whatever, and he was sitting in the middle of it all and the sounds came into his hands and he made music out of it. He was mixing it live – and composing it live. He had a kind of plan, but didn’t know what the plan would bring him. With Can, I would mix stuff in with what the rest of the band were playing. Also, we were searching for a singer and we didn’t find one – we tested many, but couldn’t find anyone – so I thought: ‘why not look to the radio for someone instead? The man inside the radio does not hear us, but we hear him.’” This he used without additional effects. “The radio has a VFO – an oscillator – where you can receive single side-bands, which means just half of the waves and you can decode it – it’s like a ring modulator. And that’s more than enough. The other members of Can were very open to these unpredictable uses of instruments, especially in the early days.”
His work with radios in a musical setting was a way for him to bring in energies from outside the band into their work. In his own words, “I looked for the devices to bring a different world into the group again and they had to react on that. That was the idea, working with a radio or working with tapes or working with a telephone. I even had this idea that with a transmitter, we could transmit and receive things back again. Or to call up people like today's radio shows where people call up or you call people. This sort of interaction I wanted to establish. But the group was not interested in this. So I finished with Can and went my own way. And here, I really followed this. I was working on that for a few years (with Can) but then I found that it wasn't fun anymore. I continued alone then worked with other people.”
Can had a great run as a band from 1968 to 1979. Afterwards Czukay continued to flourish with his solo recordings, including albums like Radio Wave Surfer. The methods he developed for using radio as an instrument he termed radio painting. He continued to make solo albums and collaborate with other musicians on various projects throughout the '80s, '90s and 2000s. He died on September 5, 2017.
All of this tells you the who, what, where, when and why. But to get the full experience I invite you to blow your mind by listening to Stockhausen, Can, Holger Czukay, and other crispy Krautrock bands! There is no better place to start than with Hymnen and the Can discography.
Krautrock Sampler: one head's guide to great Kosmische musik 1968-onwards by Julian Cope, Head Heritage, 1996.
Justin Patrick Moore
Husband. Father/Grandfather. Writer. Green wizard. Ham radio operator (KE8COY). Electronic musician. Library cataloger.